This article is taken from the November 2025 issue of The Critic.
Never have we been so surrounded by intelligence. It’s everywhere, on tap and largely free. The only problem is that it’s Artificial Intelligence, incapable of reasoning, and in many senses quite impossibly thick.
So, in this brave new world of ubiquitous AI, how are our bastions of hard-won wisdom, the universities, responding to the challenges it poses?
Let us begin with the student reality. Accurate numbers are elusive, given the shadowy nature of using this technology in private, but even last year the Higher Education Policy Institute’s survey of undergraduates at British universities found that 92 per cent were making use of generative AI (GenAI) in their work.
Around 40 per cent said they used it to “suggest research ideas”, and the same proportion used it to “structure my thoughts”. Since things move very fast in this world, and honesty is not in the students’ best interests, no one really knows how much higher these figures are right now.
So the evidence tells universities that AI has already become a central part of the student experience. Meanwhile, self-appointed spokesfolk from the tech world say that since AI is the future it needs to be incorporated in education.
Those professional academics who have taken their heads out of the sand are utterly divided on whether AI does more to help or to hinder the learner. What, then, is being done?

Proper etiquette bids us to turn first to the Russell Group, that self-electing club of Britain’s 24 “most elite” universities. Unfortunately, whenever higher education is in crisis, and whatever the particular topic, we can be assured that this cabal will offer up middle-of-the-road, mealy-mouthed statements that sound substantive but are usually bereft of principle and purpose.
Sure enough, their statement on AI — the most existential crisis for education in our lifetimes — treats its inclusion as if it were a student whose learning difficulties require special dispensation:
Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access … All staff who support student learning should be empowered to design teaching sessions, materials and assessments that incorporate the creative use of generative AI tools where appropriate.
Thus have spoken our leading seats of learning, even before the current impact of AI, to say nothing of its longer-term consequences, has been assessed.
Lest one suspect any dissent on the topic, rest assured: all 24 universities of the Russell Group signed up to these principles in unison. No university in the country, by contrast, is maintaining the conservative option of banning use of AI until the dust settles.
The University of Bristol’s new policy for “academic integrity” allows the use of AI tools for specific tasks, such as generating initial drafts in creative writing or assisting with language translation in linguistic courses.
King’s College London recommends that AI can be used to “help generate ideas and frameworks for study and assessments”.
The institution does not require students to reference GenAI as an authoritative source in their list of references. Instead they need merely add the following vague declaration:
I declare that parts of this submission has [sic] contributions from AI software and that it aligns with acceptable use as specified as part of the assignment brief/guidance and is consistent with good academic practice.
Well, there we go. Up in Edinburgh, there is similar optimism:
The University trusts you to act with integrity in your use of generative AI for your studies … the University understands that there are ways in which you may wish to use it to support your studies. This might include using it to: brainstorm ideas; get quick definitions of concepts; overcome writer’s block through dialogue; check your grammar; organise or summarise information; re-format your references.
Since no specifics are required, the dividing line between human and AI input to coursework seems to be lost for good.
Things are on a different scale at the University of Cambridge, where a rapidly growing behemoth called the Blended Learning Service calls the shots. Its guidance begins by saying, “students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work.”
Academics, by contrast, are advised not to make use of AI detection software “as it is not proven to be accurate or reliable and provides no evidence to support investigations into the use of GenAI”.
So what should they do if they suspect their students are not learning but cheating? Cross their fingers and stop complaining.