How AI is making us more stupid | University Challenged

This article is taken from the July 2025 issue of The Critic.


Artificial intelligence is neither artificial (made by skill) nor intelligent (capable of reasoning). And yet the cleverest people in the land — those controlling our great universities — have been utterly bamboozled by its challenges.

The problem is simple: students have found that the laborious tasks of a degree — reading books and articles; taking notes; referencing sources; drafting, writing and refining essays — can be outsourced to AI at no cost, at speed and to a passable standard. They have also discovered that there is no reliable means of proving that AI produced their work; unsurprisingly, almost every student in Britain now uses it.

AI was long dismissed as a crude tool any true expert could always spot. Then the tsunami of ChatGPT and other Large Language Models hit. In 2024 academics at the University of Reading found that 94 per cent of AI submissions for a Psychology assessment went undetected, with the majority performing better than human students.

Not only do “AI detection tools” miss most cases; they also turn up false positives. So dire is our educational system that good literacy and clear argument are now red flags. Cambridge University Press & Assessment have produced a jaw-dropping checklist of telltale signs of AI cheating: “sophisticated, multisyllable and Latinate vocabulary”; “paragraphs starting with ‘however’ followed by a comma”; “repetitions of words or phrases and ideas”.

The technology improves daily, tuned by millions of users worldwide. Many students even use “humanising” software to lend their prose the hints of a personality. Already no arbiter, alive or digital, can police the technology with confidence. Those who have tried have unearthed chaos: in 2024, cases of AI misconduct increased fifteen-fold on the previous year at both the University of Wolverhampton and the University of Sheffield.

The real figure of AI malfeasance is too high for prosecution: you can’t penalise a whole cohort, especially given the rise of lawyers who defend cheating students on the grounds of insufficiently clear guidance.

Indeed, there is no consensus about what constitutes AI abuse versus use. Whilst many universities have hit the panic button and banned all undeclared use of AI, Cambridge has endorsed it “as a collaborative coach … supporting time management”.

Understandably, levels of suspicion and cynicism amongst students are sky-high, as the dwindling number of good-faith actors watch their peers drift successfully through the course.

Examinations are in disarray. In the last two decades — supposedly for “inclusivity” — the proportion of coursework has grown, whilst take-home examinations have increasingly become “best practice”. Yet with cheating endemic, even progressive academics realise that change is essential.

This summer, remarkably, more in-person exams are being sat than at any time since Covid struck. This age-old format tests the ability to think clearly under pressure, evaluate new information against prior knowledge, draw on memory and construct persuasive arguments efficiently — the skills which equip people for lives that require thought and communication.

However, the integrity of theses and coursework can be tested only by in-person viva. For many institutions that is logistically impossible, so three options present themselves: hire more staff, decrease student numbers or continue the cash-for-degree fraud in the hope no-one complains. Don’t hold your breath.
