Last week, the Times columnist James Marriott tweeted what he called “one of the most prophetic paragraphs of the twentieth century” — a snippet from Neil Postman’s 1985 book Amusing Ourselves to Death. In it, Postman predicts that the near future will be a Huxleyan rather than Orwellian nightmare: a world in which would-be tyrants have no need to ban books, since people — numbed by too much information and zonked out on cheap entertainment — no longer read anyway.
It is, indeed, prescient. But it also got me thinking: why do we keep going back to Postman? Why, for that matter, do we keep going back to Alasdair MacIntyre, René Girard, Leszek Kołakowski, Hannah Arendt, Czesław Miłosz, Christopher Lasch, Zygmunt Bauman, and all the other heavyweights of the second half of the twentieth century? Have we really not come up with any fresh ideas since?
Sadly, I don’t think we have. There’s plenty of good writing, of course. But when it comes to the biggest questions — technology, multiculturalism, the decline of religion, the waning of democracy — it does not seem as if the twenty-first century has offered any insights that weren’t better articulated decades ago. When Alasdair MacIntyre died a couple of months back, commentators rushed to agree that his 1981 book After Virtue gave us the authoritative account of the morally confused world we live in. This wasn’t just opportunism. After Virtue, in my opinion, really did get its diagnosis of modern ethics spot on! I’ve made the same unoriginal point myself. Similarly, I often feel, when I sit down to write, that the most fruitful thing I could do would be just to compose a single sentence: “Go pick up a book of Kołakowski’s essays”.
As I see it, the problem is largely historical. With most of the big issues we face, what we’re really discussing is, at root, our underlying worldview — essentially, secular materialism — refracted through the smaller, immediate prisms of the moment. The seeds of that underlying worldview were planted in the seventeenth century, blossomed towards the end of the nineteenth, and were harvested — often with grave consequences — in the twentieth. In other words, the ramifications of our new intellectual settlement were clear to many by the midpoint of the last century (and to some, much earlier still), leading, in the second half, to a furious period of both creativity and criticism. There is, for the most part, now precious little to add.
Take artificial intelligence — on the face of it, the most radically new development of our time. Fantasies about artificial intelligence go back, in some form, at least as far as the Greek myths. But in its modern form, the story begins in the seventeenth century, with Descartes. Descartes dreamed of making all human knowledge as precise and as certain as maths. His ambition was methodological: true knowledge, he thought, could only be achieved if we restricted ourselves to reasoning, like mathematicians, in a step-by-step manner, from certain premise to certain premise. He wrote:
Those long chains, composed of very simple and easy reasonings, which geometers customarily use to arrive at their most difficult demonstrations, had given me occasion to suppose that all the things which come within the scope of human knowledge are interconnected in the same way. And I thought that provided that we refrain from accepting as true anything which is not, and always keep to the order required for deducing one thing from another, there can be nothing too remote to be reached in the end or too well hidden to be discovered.
Our picture of reality was to be constructed like a jigsaw puzzle: starting with a single piece, we would add only immediately adjacent pieces, one by one, checking each time that we had chosen the right one, until we had fleshed out the full, glorious image of the universe.
This dream was only possible, however, if the world, and the mind comprehending it, really could be reduced to simple building blocks. That requirement, obviously, reflected a broader bias towards atomistic theories of the physical world. But philosophers began to dream of analysing human thought, too, into granular, discrete chunks. Leibniz spoke of an “alphabet of human thoughts”. Hume tried to explain sense perception as the cumulative result of millions of isolable atoms of experience. Hobbes wrote: “By reasoning I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract”.
By the nineteenth century, the logician George Boole spoke confidently of a “mathematics of the human intellect”. Charles Babbage, meanwhile, designed the first mechanical computers, the “difference engine” and the programmable “analytical engine”. These two lines of development naturally converged. At the 1956 Dartmouth Conference, where the term “artificial intelligence” was coined, the organisers claimed that human “intelligence can in principle be so precisely described that a machine can be made to simulate it”.
All to say: the central arguments for artificial intelligence, and therefore the key arguments against it, were well established by about 1960. Alan Turing, Marvin Minsky, John McCarthy, and many others had by then already laid out sophisticated (if, in my opinion, wrong) philosophical arguments for why a computer ought to be, in principle, capable of replicating the human mind. In 1965, the RAND Corporation invited the philosopher Herbert Dreyfus to produce a counter-case, which resulted in his paper “Alchemy and Artificial Intelligence”, later expanded into a book, What Computers Can’t Do, published in 1972.
Dreyfus argues that the dream of humanlike AI rests, ultimately, on a set of faulty philosophical assumptions: that the world can be reduced to atomic physical facts, that human reasoning can be reduced to a set of explicit rules, and that both can, in principle, be described with perfect precision. These assumptions are, of course, really just those that emerged in the seventeenth century. They are the same assumptions that Blaise Pascal challenged, at the very same time, in his Pensées, when he distinguished between the esprit de géométrie (mathematical thought) and the esprit de finesse (intuitive or perceptive thought): “Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous. … the mind … does it tacitly, naturally, and without technical rules.”
You could argue that it took until the twentieth century for the full force of these arguments — on both sides — to land. Turing, Minsky, and Shannon understood and explored the logical implications of our underlying worldview with astounding clarity and inventiveness. Husserl, Heidegger, Wittgenstein, and other critics of reductionism understood, in turn, the limitations of such a worldview. What is harder to argue is that, for all the surface developments in AI, anybody has added much to the arguments since (sorry Yuval). Even the wackier strands of technological utopianism, like transhumanism, go back to at least the 1970s and 80s — just look up FM-2030.
It’s possible that all of this is about to change, and that genuinely new ideas are around the corner. But we do seem to have stumbled into an unusually inert moment in history when universal truths about humanity — that we don’t really know what to do without religion, that we have an unquenchable desire to abstract ourselves away from experience, that we hubristically believe ourselves to be fully capable of dominating nature — have become painfully clear. I am reminded, not for the first time, of Kołakowski’s brilliant essay “Modernity on Endless Trial”, published in 1986:
We experience an overwhelming and at the same time humiliating feeling of déjà vu in following and participating in contemporary discussions about the destructive effects of the so-called secularisation of Western civilisation, the apparently progressive evaporation of our religious legacy, and the sad spectacle of a godless world. It appears as if we suddenly woke up to perceive things which the humble, and not necessarily highly educated, priests have been seeing — and warning us about — for three centuries and which they have repeatedly denounced in their Sunday sermons. They kept telling their flocks that a world that has forgotten God has forgotten the very distinction between good and evil and has made human life meaningless, sunk into nihilism. Now, proudly stuffed with our sociological, historical, anthropological and philosophical knowledge, we discover the same simple wisdom, which we try to express in a slightly more sophisticated idiom.
Perhaps, to sound a little Whiggish, we really have reached a historic peak of self-awareness — self-awareness about our perennial flaws, at least — and now, on that front, there just isn’t that much more to add. If so, the question is what, if anything, we can usefully do with that knowledge.