What I want to discuss is not the “future” of Isaac Asimov in terms of its entertainment value. Rather, I am thinking of the philosophical assumptions that underlay his science fiction. Asimov was the quintessential modernist. For decades he was the quasi-official (and self-appointed) don of an ingenious and rather confident vision of human progress. But in the two decades and more since the writer passed away, that vision has gotten a little frayed around the edges.
To understand a person’s ideal of the future, we must understand his view of the past. It is clear that Asimov often used his stories as a vehicle for his social theories. In the final story of I, Robot, “The Evitable Conflict,” the character Stephen Byerley makes a grand survey of history, noting the prolonged and apparently pointless conflicts that have characterized our civilization. There were the confrontations between Bourbons and Habsburgs, Catholics and Protestants, imperialists and nationalists, and finally the “ideological wars” of the twentieth century and the choice between “Adam Smith or Karl Marx.” Yet, in a dismissive Hegelian sweep of his hand, Byerley pronounces all of these conflicts both “inevitable” and, in the long run, irrelevant. Old contradictions are overcome and reconciled by new circumstances.
There is a half-truth at work here. Some historical conflicts do not admit of clear good and evil, like the squabbles between the Guelphs and Ghibellines of medieval Italy. Even in those that do, there is still a lot of room for muddling and error. That aside, it would be wrong to say that the sources of these conflicts (or their outcomes, for that matter) are entirely irrelevant. The role of theology in our society—so eagerly dismissed by Asimov and his peers as a sort of medieval artifact—has shown itself to be surprisingly important with the resurgence of Islam.
As with the “psychohistory” of Hari Seldon in the Foundation series, I find that Asimov starts at the wrong end. He begins with humanity in the mass, as a bundle of aggregate theories and trends that can (or so it seems) be easily manipulated, rather than with the individual in terms of his personal, and highly unpredictable, actions and responsibilities. Nor do I believe that the literal deus ex machina of a robot-run world, which we are coming close to realizing, will help very much. In the twenty-first century of Stephen Byerley, the superior artificial intelligence of the “positronic brain” has created a technocratic order in which all human conflict has become “evitable,” or avoidable. Never mind the obvious downside; Asimov implies that in the long run the trade-off is worth it. This recurring paradox of totalitarian “freedom” is one I addressed in my last post.