Two years ago, at the age of 39, I began training to be a school teacher. I wanted to teach English – to help young people become stronger readers, writers and thinkers, with a deeper connection to literature. After 15 years of working as a freelance writer and as a novelist, I felt confident that I had something to offer. But the further I progressed in my training, the more uncertain I felt. One question in particular taunted me, because I had no answer to it: what to do about artificial intelligence?
The immediate dilemma: what does it mean for English instruction that all pupils now have access to free online chatbots that can produce fluid, fairly complex prose on demand? This question sits atop a teetering pile of timeless pedagogical quandaries: What are we actually trying to do in school? How should we go about doing it? How do we know if we’ve succeeded? I was a newcomer, negotiating all of this for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack.
I started frantically seeking out perspectives on AI and the English classroom wherever I could find them: pedagogy podcasts, pedagogy Substacks, pedagogy YouTube channels. My algorithmic feeds picked up on this interest and started catering to it, serving me an apparently endless supply of content – including endless advertising from tech companies – that promised to help me think through these urgent questions and ensure I did right by my students.
I quickly learned that this was a world of heated, often acrimonious, debate. On one side (to simplify a bit) were the AI rejectionists: teachers and education pundits for whom AI was nothing less than an existential assault by rapacious tech companies on the defining activities of the classroom. What students needed, they argued, was to learn how to push themselves through difficulty: to read complex texts and develop complex arguments. They needed to learn that these were processes full of friction and uncertainty, and they needed to learn how to embrace that fact, rather than running away from it. Access to a one-click writing machine made it too easy to run away.
AI rejectionists shared horror stories of students handing in AI-generated papers about which they couldn’t answer the simplest questions, or citing nonexistent sources their chatbots had “hallucinated”. They posted studies suggesting that chatbot use dulled students’ reasoning faculties, or even impeded the physical development of their brains. They raised ethical concerns, including AI’s environmental costs, chatbots’ reliance on copyrighted writing, and the oligarchic leanings of big tech companies. For most rejectionists, the solution was to build a classroom that AI couldn’t touch. They talked about shifting toward in-class essays, perhaps written by hand. They debated the feasibility of reviving oral tests and quizzes.
On the other side were the AI cheerleaders. I’m not talking about their crazy uncles, the mostly male tech execs who spoke maniacally about how AI would soon mean the end of schooling as we knew it, or already meant that reading books was a waste of time. I’m talking about teachers and pundits who argued – often quite passionately – that, for all AI’s pedagogical risks, it also carried great potential. Instead of cheating machines, chatbots could be powerful assistant teachers, able to engage with every student in a classroom simultaneously, making sure everyone got personalised feedback exactly when needed, carefully nudging each student down their particular path to maximum learning. From the cheerleaders’ perspective, the rejectionists’ instinct to shun AI tools represented a lack of understanding about their possibilities; it also did a disservice to their students, who would leave school without having acquired tech skills they could use to their advantage at university and in their future careers.
As I waded through arguments between the rejectionists and the cheerleaders, attempting to parse their duelling deployment of statistics and academic studies, my anxiety increased. I’ve noticed something about teachers, including myself. Because we take our responsibilities so seriously, we often fear doing the “wrong” thing: using ineffective or discredited teaching strategies, failing to give our students what they need. We believe, often from experience, that good teachers can change people’s lives; we know really bad teachers can leave a mark, too, especially in English, where they are often a culprit in what the teacher and writer Kelly Gallagher calls “readicide”: the killing off of good feelings about reading. We long to be in the right category, and dread being in the wrong one.
Beneath this fear, I think, is a more fundamental one: the fear of being seen as – not to mention the fear of actually being – out-of-touch losers, hiding with children in the classroom because there’s nowhere else in the ever-changing adult world we quite fit. I know this fear well. I was resolved not to get suckered by tech hype, but I also didn’t want to sucker myself by refusing to even consider a potentially useful new tool.
All I needed was a provisional ruling. I didn’t need to decide if AI was an evil scam or the future of everything. I didn’t need to decide what AI meant for the future of education, writ large. What I had to decide was what AI meant for the high-school English classes I was on the verge of teaching. I nervously downloaded more podcasts, clogged my inbox with still more Substacks and watched more YouTube videos, hoping that by absorbing more materials on the subject I could increase my chances of getting it right, or at least tamp down my terror of getting it all wrong.
Last spring I started spending 15 hours a week observing a veteran English teacher in a large school in a Chicago suburb: the type of place that families move to specifically “for the schools”. My host teacher – let’s call her Emily – taught two age groups: 14-year-olds just starting high school and 18-year-olds almost done with it. What I saw in her classroom immediately disposed me to join the rejectionists.
I witnessed all the disruptive effects you read about in articles about AI and the classroom: fully AI-generated papers; AI-hallucinated quotes; tense student-teacher conversations about what exactly was provable. I sat with Emily while she marked papers and joined her in stressing over ambiguous cases, trying to sort student nonsense from AI nonsense, student improvement from AI-powered polish.
I’d become a teacher in large part because I wanted to spend time with young people’s writing, honouring it with close attention. Watching over Emily’s shoulder, I saw how AI’s presence (or even its potential presence) interfered with this process. I became acquainted with the unique variety of despair produced by looking at a paper and, rather than figuring out how best to respond to it, trying to divine its origins. I also saw how teachers are themselves constantly bombarded with offers of AI assistance, not just via email and social media advertisements, but also – even more insistently – from AI tools integrated into their schools’ email and grade-keeping software.
Emily’s students all had school-issued laptops, and her computer had a program that allowed her to surveil the content of every one of her students’ screens; they all appeared on the screen simultaneously, in a grid that recalled a bank of CCTV monitors. Using this program was always discomfiting – Big Brother, c’est moi – and always transfixing. Some students didn’t use AI at all, at least in class. Others turned to it every chance they got, feeding in whatever question they were working on almost as a reflex. At least one student was in the habit of putting every new subject into ChatGPT, having it generate notes that he could refer to if called on. Often, I saw students getting funnelled toward AI use even when they hadn’t necessarily been looking for it. I got used to watching a student Google a subject (“key themes in Romeo and Juliet”), read the AI-generated answer that now appears atop most Google search results, click “Dive deeper in AI mode” – and suddenly be chatting with Gemini, Google’s chatbot, which was always ready to advertise its own capabilities. “Should I elaborate on one or more of these themes? Should I draft a first paragraph for an essay on the subject?”
Emily told me that most of the reading she assigned now had to happen in class and that she read much of it aloud, especially toward the beginning of the year. I was shocked. Yes, I’d read countless newspaper features on the “contemporary reading crisis” but it was still dismaying to encounter the diminished baseline state of teen reading in the wild. When I decided to become a teacher, my head had been filled with romantic visions in which I led students (“O captain, my captain!”) into battle with literary complexity and its connections to life. In these visions, the reading itself took place mostly off-camera, beyond the walls of the classroom. What did it mean for my teacherly ambitions that so many of my students appeared unequipped to read on their own – and that, when it came time to write, so many of them turned reflexively to AI? I wondered, depressively, if I’d signed up for something that unstoppable forces of history were on the brink of wiping out.
But then I watched Emily read to the class and my spirits lifted. For a writer, describing alleged classroom magic is a bit like describing sex; so often, the attempt produces sentences that are both cringe-inducing and unconvincing. And yet: I feel obliged to tell you that reading time was sometimes magic.
Shortly after I’d arrived, the younger classes started All Quiet on the Western Front. Students began by expressing disbelief: We’re really reading another whole book? Then, with Emily’s help, they got their bearings: first world war, young German soldiers, trench warfare, the loss of innocence, the psychological toll of daily proximity to death, the disconnect from the home front. Laptops were away, as were phones. (Per school policy, they were in pouches by the classroom door.) Everyone knew they could raise a hand any time to ask for clarification or make an observation. Sometimes, Emily stopped to highlight moments that she suspected were producing confusion that students might be afraid to admit to, or misreadings they weren’t even conscious of, or sentences ripe with multiple possibilities for interpretation. Day by day, and mostly in imperceptible micro-movements, the book transformed from an imposing monolith into a familiar companion.
At some point the students stopped complaining and started getting into it: expressing a desire to know how it all turned out, gasping at dramatic turns, wondering aloud, and with feeling, why characters were doing what they were doing. Why had Erich Maria Remarque written it like that? And then, one day, it happened: a room full of American 14-year-olds in 2025 was inside a story about German 19-year-olds in the 1910s, simultaneously viewing the book through the lens of their lives and their lives through the lens of the book. I could feel it on my skin: the room quietly crackling with the crisscrossing lines of energy between students and teacher and words first committed to paper almost a century before.
The AI shenanigans I’d witnessed had been depressing; the AI-free teaching had been inspiring. Before my observation period ended, Emily let me lead some of the readings myself, and a couple of times I experienced a full-body high. I felt ready to scream it from the rooftops: I’m an AI rejectionist – and proud of it!
Over the summer, though, my doubts came creeping back. As stirring as reading time in Emily’s classroom had been, I knew it hadn’t actually answered all (or any) of my questions about AI and the classroom. I knew that in the fall I would be returning, this time as a student teacher, taking most of the responsibility for lesson planning and marking. I had more decisions to make, centrally about writing. What, given my concerns about chatbots, would I have students write? And when, and how?
Because I’d consumed – and was continuing to consume – so much content devoted to AI and teaching, I was capable of staging a debate in my head between radically different takes.
Me: “Reading together as a class without any AI or devices felt great. I know that for sure. I want to use that as my starting point.”
Also me: “But what did the students really learn? How do you know?”
Me: “Well, I got to hear their thoughts evolving in real time.”
Also me: “But did every single student participate?”
Me: “Well, no. But they all did a lot of writing afterward – in the classroom, by hand – and I was able to read that.”
Also me: “Having read what they wrote, do you really think every student learned as much as they theoretically could have? Did they all learn everything you wanted them to?”
Me: “Well … I guess not. Not all of them. Not everything.”
Also me: “What if, after your AI-free reading and discussion, when students sat down to write, they each had access to an AI chatbot that could give them feedback tailored exactly to their existing comprehension level and learning style? What if you, the teacher, could train that chatbot, aligning its behaviour precisely to your goals for the assignment and the class overall?”
Me: “Well, that’s already my job – to give them personalised feedback.”
Also me: “But how much time do you have for that? Can you really intervene every single time it would be useful? What about when your students are writing at home? What about when it’s the night before an assignment is due and they’re off to a completely wrong start? Why wouldn’t you want them to know that?”
Me: [sweating profusely]
In the name of due diligence, I started playing around with AI chatbots, including those designed specifically for classrooms, or with some kind of “student mode” included. First, I evaluated their ability to do the Worst Thing: take one of my assignments, add a few simple instructions – “This should sound like it was written by a 15-year-old student”, “Please insert a realistic sprinkling of common typos and grammatical errors”, “Don’t make it too smooth” – and generate something I could not distinguish from student writing. In the halcyon days of 2023, it was a reassuring article of faith that machine writing was instantly detectable by a teacher. I can report that, for better or worse, that’s simply no longer the case.
Next I tested these chatbots on less obviously poisonous uses, such as making comments on drafts, or answering clarifying questions about assignments. Performance varied from bot to bot, but some were very good at it. In fact, I was impressed enough that I started occasionally feeding these same bots drafts of my own magazine pieces, now and then getting instant feedback that felt truly useful. Sitting at my computer, I felt an imaginary squad of cheerleaders gathering behind me, ready to claim a victory.
I kept returning to my memories of reading time in Emily’s classroom, trying to analyse what had felt so special. Part of it, I decided, had to do with how the activity structured everyone’s attention. Because all the laptops and phones were away, everyone was fully engaged at all times. It was truly astonishing to see.
I’m kidding. It was school. Some shifting amount of the class’s collective attention was on all the things teenagers have to think about. Next period’s test. Their plans for the weekend, or worrisome lack thereof. Whether their crush liked them back. The fight they heard their parents having the night before. The presence of ICE officers in the neighbourhood. But, thanks to the architecture of reading time, the possibility of paying attention was always close at hand. A student could find their way back to it without being waylaid en route by the temptations of a bright, scrollable screen, an always-on portal to more distractions.
It was good – I was sure of it – to have some enforced separation between the learning and the temptations of tech. My reflex was to enforce, to the extent possible, that same separation on their writing processes. Is it possible to design a chatbot that gives reliably useful writing feedback? Maybe. Can the frequency of chatbot feedback be regulated so that it doesn’t become a crutch? Probably. Can a chatbot be ordered not to offer students one-click rewrites? Yes. But every high-school student – busy, overwhelmed, nervous about writing, eager to be done with school work for the night or weekend – knows that, on the public internet, these labour-saving options sit a mere click away.
I couldn’t wipe chatbots from their world, any more than I can wipe phones. All I could do was decide how much I would steer students toward them and how much I would nudge them toward other experiences.
Me: “So … I think in the fall I’ll try making things as AI-free as possible. I think what the students need most are sustained experiences of reading and writing – with all the friction and uncertainty those processes involve – without tech distractions in the mix.”
Also me: “But learning to deal with tech distractions is part of life. And surely they’ll need AI, in the future, to supercharge their thinking and be competitive workers.”
Me: “Maybe. But can you supercharge your thinking when you haven’t learned how to think yet? Aren’t I always reading interviews with Silicon Valley executives where they describe strictly limiting their own kids’ access to the web and screens?”
Also me: “Any chance you’re projecting some of your own concerns about how much time you waste online, and what a better, more successful writer you want to think you’d be if someone would just turn them off on your behalf?”
Me: “That’s possible, yes.”
Teaching, according to Freud, is one of the “impossible professions”. It is never possible to declare total success, or even know for sure the full effects of what you are doing. (Worse: “One can be sure beforehand of achieving unsatisfying results.”) Through the fall I reminded myself of this idea daily, trying to make myself feel better about how profoundly unsure I felt about almost everything I did.
When I devoted class time to reading, it felt great. But then I worried that because it felt so great I was doing too much of it, the teacherly equivalent of trying to be healthy by eating only spinach. When I had students write their essays entirely in class, I felt virtuous for having banished big tech’s brain-rotting shortcut machine. (The image of Ian McKellen-as-Gandalf, standing firm in the face of the monstrous, towering Balrog, bellowing “YOU SHALL NOT PASS!” became a companion.)
Then, at night, looking over the battles of the day, I would worry that, by confining work for written assignments to class time, I wasn’t exposing students to the very aspects of writing that I valued most: the intertwined frustrations and pleasures of picking apart what you’ve written and reassembling it, the movement from draft to draft, the experience of living with a piece over time, your engagement with it colouring and being coloured by the rest of your life. When I set more ambitious assignments, and gave students the extra time that ambition required – including, by necessity, unsupervised time – I would feel virtuous again. Then my mind’s eye would be invaded by visions of my students at home, pasting my instructions into ChatGPT, into Gemini, into Claude, into Copilot, into Grammarly.
I spent a lot of time trying to come up with outside-the-box writing assignments that were so well constructed – so damn interesting, so not the rigidly formulaic essays of yesteryear – that students would feel no desire to skip them.
Imagine you work in Hollywood: the book we’ve just read is being made into a movie and you have to select the soundtrack; explain which songs go with which scenes and why, and by doing so demonstrate that you understand those scenes’ tone and role in the overarching story.
Write your own version of Binyavanga Wainaina’s satirical essay How to Write About Africa, replacing “Africa” with something important to you that you feel is often misrepresented, and by doing so demonstrate your understanding of Wainaina’s rhetorical choices.
I loved reading these assignments. I loved learning how students understood what we were reading. I loved hearing their music. I loved learning about their relationships to gender, their cultural backgrounds, their neighbourhoods, making notes about my responses. But this love didn’t stop me from worrying.
And who knows – maybe chatbots could have helped. I’m sure in a few cases they did. For every assignment, I caught a few people using them to cheat. When I floated the question, the culprits tended to admit it right away, claiming a combination of time pressure and failure to understand what I’d asked them to do. I implored them: when you don’t understand, just let me know! But I couldn’t help thinking: what if I’d trained a chatbot to answer their questions in ways that I approved? Might fewer of them have done the Worst Thing? (Did I even know how many actually had?) Might their writing have got better, faster? Or would more of them, set at the foot of the garden path to full-blown cheating, have merrily traipsed down it? I wanted to trust them; I felt sure I had to set limits. The decisions felt impossible, and it was of limited consolation that an Austrian psychoanalyst with a fondness for cocaine had said as much in 1937.
Besides reading, there was one other type of classroom activity that felt relatively safe from this hovering cloud of doubt: the times when we talked directly about AI – when I tried to explain my thinking on the subject (including my uncertainty) and to solicit the class’s thoughts. I gave my older students AI questionnaires, prompting them to describe what AI tools they used for what, how long they’d been using them, and how they felt about it. A few of them told me they’d never used AI and never wanted to – that it creeped them out. Some expressed concern about what it meant for jobs. Others described using chatbots to generate flash cards and test review questions, to get advice on what to wear, to edit their social media posts, as a replacement for Google searches, to get cooking advice, to get athletic training advice, to get health advice, and to get health advice for their pets.
Almost everyone who filled out the questionnaire expressed some fear (or at least recognition) that AI could erode their capacity for original thought. I recognise that some of them, having intuited my rejectionist leanings, might have been telling me what they thought I wanted to hear. I also knew some of them were likely leaving out things they understandably didn’t want to tell me, such as that they used chatbots to alleviate loneliness. Still, their concerns about their own cognitive lives felt genuine.
It wasn’t always clear, though, that the students understood the nature of original thinking well enough to understand when it was being bypassed. More than one expressed firm resolve to develop their own thinking abilities – then, a few lines later, shared examples of “responsible” AI usage that, from my perspective, trashed exactly what they were hoping to cultivate. I’ll have AI give me a thesis statement, but then I’ll write the paper. I’ll have AI give me a few thesis statements, then I’ll pick one and have AI do the outline. I’ll have AI write a first draft, then go in and change things to make it original.
Only one student said that he used AI to complete, start to finish, assigned writing that he didn’t want to do. He meant no offence to me personally, he explained, but his life was busy and “some teachers” were in the habit of giving repetitive assignments that he felt confident weren’t worth his time. This same student’s father approached me at a parents’ night to tell me that, while he understood where I was coming from with my AI policies, he was also worried. In his own professional life, he saw how much employers emphasised AI fluency in discussions about hiring and promotion. Shouldn’t his son’s education be encouraging that fluency?
I got a distinct sense that, even among students who used AI the most, contextual knowledge about the technology was extremely low. One day, I spontaneously offered a much-too-large heap of extra credit to anyone who could produce (without looking at a screen) a plain-language account of how chatbots generate text. No one could. I also shared an email I’d received from the US Authors Guild, explaining how to determine my eligibility for compensation from a class-action lawsuit brought on behalf of book writers against the AI firm Anthropic, creator of Claude, a chatbot some of them had identified as their favourite. On what grounds, I asked, might Anthropic owe writers like me money? Silence.
So I tried to talk about it. It felt a little awkward. My own plain-language explanation of chatbot text provenance was, I quickly realised upon sharing it, not as plain as I’d hoped. But it also felt good. I sensed my students’ attention – and, frankly, my own – slipping into higher gear as we took on questions about the world and our place in it.
I suspect that in the future I’ll be seeking out more opportunities to bring the subject of AI into the classroom, even as I maintain an extreme caution about doing the same with AI tools. I want students to get better at thinking about literature, yes – but also about all the language they encounter, including in advertisements, politicians’ speeches, newspaper op-eds and social media content. If these language machines are going to be a major part of how they’re interfacing with the world, I want them to be able to ask questions about the machinery. I want them to be able to explain the business models of AI companies, what those business models can mean for how chatbots behave, and the role played in chatbot outputs by low-wage workers. I want students to know about, and respond to, the experience of people for whom chatbot interactions end in self-harm, psychosis and suicide. I want them to know that multiple AI executives have openly predicted that AI growth will eventually result in the surface of our planet being mostly covered by data centres, and I want to hear what they think about it.
On my last day of student teaching, I stayed late, grading a pile of my younger students’ work. We’d spent several weeks reading short stories on the complicated relationships we humans have with our teachers, mentors and role models. In place of essays, I’d asked them to write short stories where they plucked characters from across the unit and came up with original scenarios that brought them together in ways that reflected the unit’s themes.
I’d allowed these students to work on their stories outside class, and to submit them digitally. But I also gave them class time to write, and made them meet me to describe their choices. Only one or two, as far as I could tell, had obviously tossed the task over to chatbots (which, if you’re wondering, did a pretty serviceable job).
Overall, I was delighted by the inventiveness and quality of my students’ stories, and the depth of understanding of other authors’ work that they demonstrated. To my surprise, many of them drew on a story that, in class, had been widely dismissed as “too weird”: Mark Twain’s The Mysterious Stranger. In the version we read (Twain re-wrote it at least three times), a group of young men falls under the sway of an angel named Satan – not that Satan, he assures them; that’s his uncle. This Satan, whoever he is, knows all kinds of cool magic, which at first the boys find totally delightful. In the end, though, it’s a horror story. For all Satan’s surface charms, he is revealed to view humanity with a combination of indifference, scorn and hostility. The more the young men interact with him, the more they risk unthinkingly absorbing a similar attitude.
Multiple students had their Satans act in ways that, it was impossible to miss, mirrored the behaviour of the latest chatbots. Satan offered to do characters’ homework, to take work they’d done and make it more polished, to free up their time for more immediately pleasurable activities. They did this, I swear, without any prompting from me. Despite my rejectionist inclinations, this way of looking at Twain’s Satan had never occurred to me.
The hours I spent reading those stories were a joy, and mostly uncomplicated by the AI anxieties that had colonised my mind for so much of the semester. The biggest threat to this joy was the steady stream of solicitations from the AI tool embedded in my word processing software, from the AI tool embedded in my email inbox, and from the AI tool embedded in my digital assignment-management tool. Did I want the machine to give me notes on my students’ stories? To grade them for me? To put them in categories based on similarities it detected among them?
I didn’t. I wanted to read what my students had written. I’d been telling them all semester that writing was a gift humanity had made for itself, a way for us to know ourselves and each other across space and time. What would it mean if, after all that, I gave over the task of responding to their writing to an algorithm? I printed the remaining stories out and shut my computer.
Did I clock every single instance of AI cheating? I’m sure I didn’t, and I’m sure some teachers out there – rejectionists and cheerleaders alike – are shaking their heads right now at my naivety. But I knew my students; that was the job, wasn’t it? I’d watched their drafts’ progress in class; I’d made them explain their stories – their weird, hilarious, touching stories – to my face. Surely all that counted for something. I was aware of the possibility that I was fooling myself. But I felt surprisingly at peace. I’d done what I thought was right for the semester. In future semesters, the approach will surely change in ways I can’t yet predict. That, too, is the job. I picked up my pen, grabbed the next story from the pile, and began to read.