AI has been about to make us all redundant for some time now. In fact, the development of AI that can match and exceed the human intellect, bringing about the so-called “singularity”, has been predicted by computer scientists for decades. In 1965 Irving John Good, a colleague of Alan Turing, predicted that such a machine would exist within his own century. In 1988 the futurist and computer scientist Hans Moravec suggested that we would have an AI superintelligence by 2010. Computer scientist Vernor Vinge, writing in 1993, suggested that we might have one between 2005 and 2030, whilst in 2005 computer scientist Ray Kurzweil confidently predicted that we would have a human-level AI by 2029, a prediction he stuck to in his 2024 book, The Singularity Is Nearer. Depending on who you believe, the singularity is either about to happen, or should already have happened.
Regardless of whether you buy the most far-flung fantasies of the Silicon Valley cult, AI is certainly here, and the main worry most people have isn’t that it will achieve sentience, but that it will be able to replace many of the functions currently filled by white-collar workers and professionals in sectors from law to finance to medicine. If automation has killed off manufacturing jobs, is it about to decimate the service economy too?
An earnest, possibly AI-written essay by AI entrepreneur Matt Shumer went viral playing off these concerns. The technology, he said, had changed beyond all recognition: “The models available today are unrecognizable from what existed even six months ago.” A “partner at a major law firm” was telling him the technology was doing the work of a “team of associates” and “expects it’ll be able to do most of what he does before long”. And of course, in singularity style, “AI is building the next AI”.
In a typical convergence of clammy-handed US therapyspeak, LinkedIn virality and AI hype, the rambling diatribe concludes “And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it’s too late to get ahead of it.”
The irony of this post about the revolutionary potential of AI is that, whether actually written by an AI or not, it epitomises the kind of world digital spaces and AI are converging on: a realm of emotionally manipulative, lowest-common-denominator slop.
Slop is one of the great coinages of the AI age, perfectly describing the reconstituted, soulless nature of the “content” produced by Claude, Grok, ChatGPT et al. As with previous waves of digital marketing, AI is adept at recruiting ordinary consumers as hype merchants. Social media has recently been taken over by user-generated videos courtesy of ByteDance’s “Seedance” AI, which produces hyper-realistic videos in any style you care for. The internet has been full of clips of famous actors riding around on snails, or videos of Seinfeld characters kicking each other through walls, and similar surrealist fare. Half of these videos seem to be captioned by individuals claiming that “Hollywood is over”.
Except, of course, the videos are slop in their purest form. Quite apart from continuing issues of weird visual tics, unreal movements, and a CGI-like quality, which can presumably be improved upon over time, none of this gets around the deeper issue of slop. The clips are meaningless. It’s not just that content is only as good as its prompt, it’s that it’s unfailingly worse. There is no magic trick, no magician’s wand that can replace the tens of thousands of hours that animators, actors and writers spend making films. The greatest films are made by people who obsess over every frame, and even the most generic of Hollywood movies will involve multiple reshoots and an intensive editing process. It’s not that automation somehow replaces this process, it bypasses it, and there is a huge difference. You don’t want to remove the effort in art, because it is the effort — the conscious attention to every detail — that produces the highest craft, the best art. It is literally impossible to automate this, and all that even the most sophisticated AI can do is predictively amalgamate material produced by human hands and minds.
The world is increasingly being divided into those who can see slop for what it is, and those who simply don’t understand the distinction. Many people are convinced that a sufficient technical leap is round the corner, a point of no return after which technology will be able to do anything.
This is dangerous because AI is revolutionary, yet in a perhaps unprecedented moment in human technological history, the inventors of AI do not understand their own technology’s true potential and limitations.
It is important to realise that whilst many brilliant minds go into computer science, they are working within a narrow field and a highly specific culture, one deeply influenced by a Californian countercultural utopianism diagnosed long ago in the seminal essay “The Californian Ideology”. Computer scientists working in AI are the sort of people with the talent and the personality to be attracted to it: awkward young men obsessed with sci-fi, good at maths, and often on the spectrum. In their field, and in their experience, the world is a programmable mechanism, and there is nothing special, let alone supernatural, about human consciousness.
But even if you are a materialist, human consciousness is in fact a profound challenge for philosophers and biologists, who are very far from understanding what philosopher David Chalmers labelled the “hard problem”. But let’s go one step further. Let’s accept that intelligence and consciousness are a purely material system which can be fully understood scientifically, even if we haven’t yet mastered it. Everything that we do know about intelligence suggests that it belongs exclusively to biological beings, and not to computers, however complex or sophisticated.
Brains themselves are not binary electronic systems, but radically decentralised electrochemical networks, showing high levels of plasticity. Still more fundamentally, intelligence is embodied, with language and even mathematical reasoning emerging through the tactile exploration of our environment. Something as simple as a single-celled organism has more “consciousness” than an AI, because it has a body.
Having a self is a pretty fundamental basis for knowing things; otherwise there is no “self” to know. And how you get to an answer is as important as getting there. The AI may spit out the right answer if you ask it whether 2 and 2 make 4, but it can’t actually “do maths” in the same way we do. Importantly, even within human intelligence there are qualitatively different ways to get to the same answer. A mathematician can use calculus to chart the trajectory of a ball, but a good athlete can do the same thing without any sums.
But like the gulf between those who can and can’t see slop, there is a cognitive divide between those who can understand that human and machine “intelligence” are qualitatively different, and those who can’t.
This matters, because it makes computer scientists, especially those who are trying to hype up their technology, or who have drunk the Kool-Aid on the singularity, remarkably poor at predicting where their own technology is going to end up, and how it is likely to be used.
Because, for one thing, Silicon Valley boardroom bores aren’t the only ones who can’t seem to see the difference between slop and art, or AI and human intelligence. Many people, especially those already conditioned by short-form video content and social media feeds to be reactive, passive consumers, will accept AI the same way they have unthinkingly accepted crappy Marvel movies and Taylor Swift.
In our race-to-the-bottom, profit-maximising, wage-suppressing neoliberal economy, the question of whether we should automate, outsource, deregulate or gigify a part of our economy is rarely asked, and never answered in the negative. It’s treated as a technological inevitability, rather than the result of law and policy. Slop arrived in the physical realm a long time ago, in the form of Temu junk and Net-a-Porter fast fashion, and AI is only going to make things worse.
We will absolutely automate many white collar roles, but the results will be a far worse product for most people, with basic customer service increasingly a luxury product. This isn’t particularly new, and it matches changing norms across many industries, like air travel, where aspects of the service which were once free have been monetised as optional extras. The lowest common denominator — stag parties and free market economists — celebrate the Ryanairification of the service industry, accepting small cost savings in return for a punitive, enshittified and dehumanised consumer experience.
It is hard to predict the precise forms that the latest digital dystopia, or newest circle of hell, will take, but the direction of travel is clear. Vast speculative capital, money that could be going into breakthroughs in energy, transport, or medicine, will instead be monopolised by a parasitic virtual economy. The technology it produces will not be used to enhance, but to degrade human life. The sloppening is here, and here to stay.