‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI | AI (artificial intelligence)


Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world.

It’s an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. “There’s no AI-proof anything,” Pao said. “Rather than policing it, I hope that their overall experiences in this class will show them that there’s a way out.”

It doesn’t always work. Recently, she asked students to visit a local museum, look at a painting for 10 minutes, and write a few paragraphs describing the experience. It was a purposefully personal assignment, yet one student responded with a sophisticated but drab reflection – “too perfect, without saying anything”, Pao said. She later learned the student had tried to visit the museum on a Monday, when it was closed, and then turned to AI.

As artificial intelligence has upended the way in which students read, learn and write, professors like Pao have been left to their own devices to figure out how to teach in a transformed landscape.

Many faculty members in the hard sciences and social sciences have pointed to the “productivity boost” AI can offer, and the research potential unlocked by its ability to process and analyze vast amounts of data. AI’s most enthusiastic proponents have boasted the technology may help cure cancer and “accelerate” climate action.

But in fields most explicitly associated with the production of critical thought – what is collectively referred to as the “humanities” – most scholars see AI as a unique threat, one that extends far beyond cheating on homework and casts doubt on the future of higher education itself in a fast-approaching, machine-dominated future.

Lea Pao. Photograph: Courtesy Lea Pao

American degrees often cost hundreds of thousands of dollars and leave graduates with decades of debt, and recent years have seen a freefall in public confidence in US higher education. With AI's potential to increasingly substitute for independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors – almost all of them in the humanities or adjacent fields – about how they are adapting at a time of dizzying technological advancement with few standards and little guidance.

By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc.) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

A ‘soulless’ education

AI criticism – or “doomerism”, as the technology’s proponents view it – has been mounting across sectors. But when it comes to its impact on students, early studies point to potentially catastrophic effects on cognitive abilities and critical thinking skills.

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Ohio State University, where he teaches, has begun requiring every freshman to take a class in generative AI and pitched itself as the first “AI fluent” university, pledging to embed AI “across every major”.

“No one knows what that means,” Clune said of the plan. “In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students.”

That’s the crux of what many professors in the humanities fear: that technology that may well be a cutting-edge tool in other fields could spell the end of their own.

Michael Clune. Photograph: Courtesy Michael Clune

Alex Karp, the Palantir cofounder and CEO, stoked those anxieties when he said in a recent interview that AI will “destroy humanities jobs”. On the other hand, Daniela Amodei, Anthropic’s president and co-founder – who was a literature major – said the opposite: that “studying the humanities is going to be more important than ever”.

A number of tech and finance companies have recently said that they are looking to hire humanities majors for their creativity and critical thinking skills. Indeed, enrollment data at some universities suggests that the long-struggling humanities might have begun to see a resurgence in the age of AI, with early signs pointing to a reversal of the decades-long decline in English majors relative to Stem ones.

Some caution that the humanities will survive – but as a province of the few. When he predicted the end of the humanities, Karp insisted there would be “more than enough jobs” for those with vocational training. Indeed, several professors voiced concerns that AI will exacerbate a widening divide in US higher education: small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else gets a “degraded, soulless form of vocational training administered by AI instructors”, said Zhang.

“I fully expect that we will start seeing a kind of bifurcation in education,” said Matt Seybold, a professor at Elmira College in New York, who has written critically about “technofeudalism”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AIs grading AI-generated homework – “a conversation between two robots”.

Alex Karp, CEO of Palantir, during the AIPCon conference in Palo Alto, California, in March 2025. Photograph: Bloomberg/Getty Images

Some universities have adopted AI detection software to catch artificially generated work; others prohibit faculty from directly accusing students of having used AI – as they can often be wrong.

Professors said they have resorted to oral interrogations, handwritten notebooks, and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse language models – exposing students who did not even read the prompts before pasting them into AI.

Many professors spoke of their frustration at having to sift through students’ artificially generated homework. “It creates hours of additional labor,” echoed Danica Savonick, an English professor at the State University of New York Cortland. “And makes me feel like a cop.”

Some allow students to use AI for research – to a point. Karl Steel, an English professor at Brooklyn College, said that AI has helped make students’ presentations richer and more interesting – but that while they may use it to prepare, he has them speak from minimal notes and stand in front of a photo of a text they annotated by hand. He also assigns written responses to texts only after the class has discussed them. “I suppose they could use their phones to record the conversation, feed a transcript into a chatbot, and produce a paper that way,” he said. “But that is more trouble, I think, than most students would take.”

Left to their own devices

Many universities’ administrations are embracing AI for instruction, research, and evaluation. In some cases, AI has guided decisions about which programs to cut at times of austerity in the education sector.

More than a dozen universities have partnered with OpenAI on a $50m initiative that the company has said will “accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI”. California State University has joined several of the world’s largest tech companies to “create an AI-powered higher education system”, as the university put it. Multiple universities have introduced AI majors and master’s degrees.

The plans are lofty but offer little guidance on what professors are supposed to do with students who can’t read more than a couple paragraphs at a time or turn in essays generated in seconds by a machine. Left largely to themselves, some are trying to articulate clearer lines around AI use, and organize a more coordinated effort against its encroaching dominance.

Last year, the American Association of University Professors, which represents 55,000 faculty nationwide, published a report warning that universities were adopting the technology “uncritically” and with little transparency. Some university unions have begun incorporating protections against AI in their contracts to establish oversight mechanisms and give faculty greater input – and to protect their intellectual property from feeding machines that may soon take their jobs.

But a lot of organizing against AI remains informal and word-of-mouth, with faculty-led initiatives like the website Against AI, which offers resources to those trying to shield students from the intellectual ravages of outsourcing elements of their education to a machine.

“Materials here are intended as solidarity solace for educators who might find themselves inventing wheels alone while their administrators, trustees, and bosses unrelentingly hype AI,” reads the website, which offers a list of assignment ideas to mitigate AI use – from oral exams, to requirements that students submit photographic evidence of their notes, to analog journals.

Many of the professors interviewed by the Guardian said they ban AI in their classrooms altogether – but recognize their hardline approach is discipline-specific.

Megan McNamara, who teaches sociology at the University of California, Santa Cruz and created a guide for faculty across disciplines to deal with AI-related academic misconduct, noted that “cultural” differences in the humanities versus Stem disciplines, or in qualitative social sciences versus quantitative ones, tend to shape faculty members’ responses to students’ use of AI.

“I think that’s just a function of one’s individual relationship with writing/reading/critical analysis,” she wrote in an email.

Several professors spoke of using the issue as an opportunity to get students to think critically about technology.

When she suspects someone has used AI, McNamara talks to them about it, treating the incident as an “opportunity for growth, restorative justice, and enhanced authenticity in student-instructor relationships”, she said.

Eric Hayot, a comparative literature professor at Penn State University, said he tries to convince his students that tech companies are trying to make them “helpless” without their product.

“These companies are giving these technological tools away partly because they’re hoping to addict a generation of students,” Hayot told the Guardian. “This is part of every single class I teach now, talking to students about why I’m not using AI, why they shouldn’t use AI.”

‘We can decide that we want to be human’

Several professors noted that they have also begun to see mounting discomfort among students with the technology – and with technology’s dominance in their lives overall.

Clune, the Ohio State professor, said students have become more curious about his flip phone, which he started using after realizing his smartphone was “destroying” his attention.

“I think the current crop of gen Z students are seeing that they are the guinea pigs in this giant social experiment,” said Zhang, the Berkeley professor.

“There’s a broader and increasing sense from students that something is being stolen from them,” echoed Seybold, the Elmira College professor.

Seybold pointed to students’ mounting disillusion with tech more broadly. Those who are rejecting AI, he added, are often driven by environmental concerns, and suspicion of companies they view as partially responsible for shrinking democracies and a more violent world.

In Michigan, for instance, that has spurred activism. The University of Michigan recently announced plans to contribute $850m toward a datacenter to provide AI infrastructure in collaboration with the Los Alamos National Laboratory – at a time when it is cutting funds for arts and humanities research and on the heels of anti-war protests on campus. A spokesperson for the university said that the planned facility would be smaller and consume less energy than a “typical datacenter”.

As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

“You plant seeds and you hope,” Pao said, of efforts that at times feel like tilting at windmills. “You hope that in the long term you’re helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves.”


