When OpenAI released ChatGPT in 2022, it set off a firestorm among educators. Here was a tool that, with a few lines of direction, could gather reams of information, compose human-like sentences, and spit out an answer to seemingly any question. Students, they thought, would certainly use it to cheat.
As artificial intelligence chatbots’ popularity has ballooned, so, too, has alarm over their potential misuse. In March, The Wall Street Journal told parents, “There’s a Good Chance Your Kid Uses AI to Cheat.” New York Magazine declared that “Everyone Is Cheating Their Way Through College.”
For many students, those headlines ring true. But not for all.
Why We Wrote This
As artificial intelligence intertwines itself with daily life, some students are pushing back. Their reasons range from profound to practical, and speak to preserving a sense of community – and humanity.
“What’s the point of going to college if you’re just going to rely on this thing to give you the right answers?” says Marie Norkett, a junior at St. John’s College in Santa Fe, New Mexico. “You’re not improving your mental capabilities.”
Ms. Norkett is among a cadre of students who choose not to use AI in their studies. They give reasons both profound and practical. Ms. Norkett, for example, worries not only that cutting corners might dull her critical thinking skills, but also about the accuracy of what AI bots – which pull vast amounts of information from the internet to mimic human cognition – produce.
Such students are in the minority on campuses. In a September survey of college students by Copyleaks, the maker of an AI-powered plagiarism detector, 90% of respondents said they use AI for schoolwork. Of course, not all of those students were using it to cheat: The most common uses reported were brainstorming (57%) and drafting outlines (50%).
Still, like many educators, some AI abstainers worry the bots make cheating easier. In an internal OpenAI report on ChatGPT use, roughly a quarter of 18- to 24-year-olds – the most active of the bot’s more than 700 million weekly users – reported using it for “exam answers.” A September report from Discovery Education found that 40% of middle and high school students have used AI without their teacher’s permission, and that nearly two-thirds of middle and high school teachers say they’ve caught students using chatbots to cheat.
The true extent of the cheating problem remains a matter of some debate. Victor Lee, an associate professor of education at Stanford University, says decades of research have put the share of students who cheat between 60% and 80%. That rate has “stayed fairly stable” since ChatGPT thundered onto the scene.
Regardless, it’s clear that students use the technology – often. That reflects a variety of tensions. Students feel immense pressure to succeed academically as they juggle school with extracurriculars, jobs, and social commitments.
“There’s also some situations where [students] just aren’t clear what the line is for acceptable or not acceptable,” Professor Lee adds.
Still, some students have resisted the pressures leading their peers to use AI – whether legitimately or illicitly. They have charted a path to a more old-fashioned education that, for them, is fulfilling, meaningful, and decidedly human.
“The full expression of a human being is not a robot. It’s a creative, interactive force,” says Caleb Langenbrunner, another junior at St. John’s. Just taking the answers AI provides, he says, “doesn’t seem like fully what it means to be human.”
Maintaining a sense of community
Unlike students on many college campuses, those at St. John’s say they rarely see their classmates use AI. That might come down to the school’s unique teaching methods. It offers only one degree, in liberal arts, and its entire curriculum comprises a four-year reading list of what the college calls “the greatest books” in history. Titles include tomes like Plato’s “Republic” and Aristotle’s “Politics.”
Yet it’s not just St. John’s students who see an overreliance on AI among their peers as a problem. Ashanty Rosario, a high school senior from New York, says she doesn’t use AI, and she wishes that her classmates wouldn’t, either.
“I do think that we lose a sense of community in the classroom if we aren’t actively engaging with whatever work we are given,” she says. When students use AI instead of turning to their peers, it’s “not only harming the person using it, but it’s harming others who could very well be gaining a different perspective that enhances their learning.”
The meteoric rise of AI-generated writing and art has also exacerbated worries about the future of the humanities. The technology has entered the scene at a troubling time for creative disciplines. The number of students graduating college with humanities degrees plummeted by 24% between 2012 and 2022, according to the American Academy of Arts and Sciences.
“A big part of the humanities and the arts is original thinking [and] creativity,” Ms. Rosario says. “That’s something that can’t be replicated, especially by a machine. So, I think that in order to keep that cycle going – of art, and getting culture out there – it has to come from within.”
Credibility question
Abera Hettinga, a junior studying philosophy and psychology at the University of New Mexico, says he doesn’t use AI because it would be “doing a disservice” to his future self. He also took a logic and critical thinking class that shaped his view. Students in the class, he says, investigated the accuracy of ChatGPT’s answers to different questions, and the chatbot did not impress him.
Sometimes, when ChatGPT gave him a dubious answer, he pushed it to explain its logic. Mr. Hettinga found the bot was often “just predicting what you’d want it to say.”
“That’s shaped my faith in it as far as its credibility,” Mr. Hettinga says. OpenAI has acknowledged that older models tended to tell users what they wanted to hear, even if that meant providing incorrect information, and says it has since updated ChatGPT’s software to address the “sycophancy.”
A writing tutor at the University of New Mexico’s Center for Teaching and Learning, Mr. Hettinga has seen firsthand how an overreliance on chatbots can keep students from learning to craft a cogent argument.
“[AI] takes away from being able to structure an argument,” he says. “You lose that crucial ability – to brainstorm, to organize a paper, to know where to put your arguments, how to formulate a thesis, [and] other crucial writing skills as well.”
Professor Lee of Stanford says charting a path to more-sustainable AI use might start with how schools approach the tools – though he acknowledges that it might feel difficult for educators juggling the learning needs of dozens of students. Some teachers have already turned to old-fashioned testing methods, such as having students put pen to paper and hand-write essays in class.
Another strategy “is to develop students’ AI literacy to help them learn how to use it responsibly, and what its capabilities and limits are,” he says.
Students interviewed say that AI bots do have potentially beneficial uses. For example, they can be a useful place to start research, because they compile and summarize vast amounts of information quickly.
At the end of the day, Mr. Langenbrunner, from St. John’s, says he enjoys learning and working answers out for himself – and he doesn’t want to miss out on a good time.
“You know, I think [AI is] rather boring,” he laughs. “If I were to use AI to write all my papers – that takes all the fun out of it.”