This article is taken from the May 2025 issue of The Critic.
A tsunami of dystopian headlines and predictions foretells that artificial intelligence (AI) will put us all out of work and one day rule over humanity. I have been tracking AI since the 1980s, taking as my guide the pioneer and Bletchley Park code-breaker Professor Donald Michie (1923–2007), who told me in a 1985 interview:
AI is not a race of super-clever Daleks, unfathomable to man, that will eventually dominate the globe. In fact, what AI is about is exactly the opposite: making machines more fathomable and more under the control of human beings, not less.
The struggle to make AI more fathomable is a task undertaken anew in law futurologist Richard Susskind’s latest book. Here he lays out several philosophical arguments about AI’s future, with the caveat that we also have other technologies to worry about.
In fact, the book contains caveats at every turn, inviting readers either to think more profoundly or to sink even deeper into perplexity. This is not an insult but a caution to read through this small volume with care and not jump to conclusions or fall back on preconceived biases.

Here are two key reasons to listen to Susskind. He has written a string of books on AI in the legal field stretching back some four decades. He also served as technology advisor to the Lord Chief Justice of England and Wales from 1998 to 2023, calling for the establishment of a UK institute for AI to strengthen the UK’s hand.
A flurry of activity by the last government has been followed by ambitions to use AI in creating a more agile state under Keir Starmer. If the Prime Minister shows the same witless understanding of technology as he did as Director of Public Prosecutions in respect to the Post Office Horizon scandal, then we’re in trouble.
Susskind argues for less anthropomorphic AI language and a discourse about an AI-based digital world yet to be invented. Using Walter Ong’s anthropological hypothesis, which identified previous ages of orality, script and print, Susskind argues that we are in a revolutionary era, in the process of moving from print to an AI-enabled society. He posits a fifth revolution, transhumanism, and then conceivably a sixth, with AI replacing humanity altogether.
An arc of five AI hypotheses is then parsed: Hype, Generative AI+, Artificial General Intelligence (AGI), Superintelligence and Singularity. This arc ranges from criticism that this is all hype and faddishness through to singularity as a merger of AI and humans, with different and fast-shrinking timescales as to exactly when we become AI toast.
Unhappy with each of these, Susskind focuses on a sixth: an AI Evolution leading towards a takeover of humans. Arguably, matters may already be out of our hands; urgent interdisciplinary discourse is required if we are to confront these threats and ensure AI benefits humanity.
Susskind suggests that the genie is currently half out of the bottle but only for about another decade. He redefines AI as “massively capable systems” that can ultimately outperform humans, and we have a moral duty to regulate how AI is deployed by all “users”.
I would inject that, oddly, the illegal drugs trade is the only other business referring to its customers as users. Perhaps AI is the new opiate of the gullible masses who believe in AI-generated social media output, alienated from reality through their deep attachment to AI-enabled smart devices.
The context of such regulation is an emerging AI arms race. The EU's ambition is to lead on AI by balancing commercial and rights-based interests in the form of the EU AI Act; an attempt to deliver an adjacent AI product liability directive fell flat under the weight of its own speculation. Other major powers are taking more direct approaches: America has the tech bros pushing out product, whilst China's DeepSeek was rolled out at a fraction of the cost of Silicon Valley's investments.
Retooling C.P. Snow’s Two Cultures, Susskind references a new divide represented by Noam Chomsky, who looks at AI outcomes; and Henry Kissinger, who looks at AI processes. He uses these two reference points to navigate commercial, legal and philosophical waters without getting cast adrift by geopolitical concerns.
However, we should take Vladimir Putin at his word when he said back in 2017 that in the AI race “whoever becomes the leader in this sphere will become the ruler of the world”. Our AI problems are closer and much more human than we think.