Published on 02 Oct 2025

What our body’s response reveals about AI in the classroom

Not long ago, one of us found themselves trying to resolve what seemed like a simple issue with their mobile phone bill. Expecting a brief call with a customer service officer, they were instead directed to a chatbot.

What followed was an interaction defined less by resolution than by repetition. With every response typed, the chatbot offered suggestions that missed the point entirely. As the minutes ticked by, they could feel their heart rate climbing, their palms growing warmer and their patience thinning. In short, their physiological system was staging a quiet protest.

This experience proved serendipitous, for they had just published a systematic review of empirical research on the use of Generative AI (GenAI) tools – such as AI-based writing assistants and chatbot tutors – in classrooms to support students’ literacy development.

These tools are increasingly being used to support students’ reading and writing in language classrooms around the world. While the findings across the literature generally affirm the value of these tools, they raise questions: What do students actually feel when engaging with GenAI? Do they respond to a chatbot the same way they do to a teacher? And if not, what is lost, and what is gained?

That curiosity led to a project we initiated, titled “Does the Teacher Matter? University Students’ Physiological Responses in Online Interactions with an AI-Chatbot and a Human Teacher”, at the National Institute of Education, Nanyang Technological University, Singapore.

We sought to answer two research questions: 1) What are the similarities and differences in students’ physiological and emotional responses when engaging with GenAI chatbots versus teachers during brainstorming for academic writing? 2) How does the quality of their essay outlines compare across these two conditions?

To answer these questions, we employed a multimodal research design and worked with 30 graduate student participants. In addition to self-report questionnaires, we used physiological measures – heart rate, skin temperature and subtle changes in sweat activity – to capture students’ affective states during a 25-minute brainstorming and writing task. The rationale was simple: what students say about their learning matters, but so too does what their bodies say.

Emotional reactions

The results were telling. After the brainstorming session, participants across both groups experienced a lift in mood. But those who worked with human teachers reported stronger positive emotions, particularly feelings of being motivated, inspired, empowered and engaged. More crucially, they experienced a decrease in negative emotions over time.

Those who worked with the chatbot, by contrast, showed only modest emotional gains – and in some cases, slight increases in frustration or uncertainty.

It is noteworthy that students who worked with the chatbot also reported feeling more motivated and engaged after the session, albeit at a lower level than those who interacted with the teacher. Human teachers, through their responsiveness, empathy and adaptability, create a sense of safety and encouragement that AI – at least in its current form – struggles to replicate, although the gap may be closing.

And yet, the picture is more complex. When it came to the quality of essay outlines, both groups performed comparably on average. However, students who interacted with the chatbot showed a wider spread of results: some excelled, while others submitted markedly weaker outlines.

This polarisation suggested that more self-regulated or metacognitively savvy learners were better able to navigate and benefit from the chatbot’s Socratic questioning. But those who needed more guidance or reassurance fared poorly without the human touch.

Follow-up focus group discussions a month later deepened our understanding. Students who had interacted with the teacher could still recall specific details: a piece of advice, some encouraging words, even a personal anecdote. In contrast, most students who worked with the chatbot struggled to recall any part of the interaction.

But we should not be too quick to dismiss the chatbot. Several students reported that they actually preferred the AI interaction. They felt less judged. They could voice their thoughts more freely, without worrying about whether their answer was ‘right’.

There was a liberating quality to the absence of social evaluation. Indeed, our discourse analysis of chatlogs revealed fewer politeness markers and interpersonal cues in student-chatbot conversations, but a higher density of content related to the topic. In short, less small talk, more substance.

Humans matter, but AI can help

What do we make of all this?

The human teacher still matters – profoundly so. But if we expect teachers to provide personalised, emotionally attuned guidance to every learner, we must also acknowledge the physical and emotional toll this takes. As our long-suffering teacher-collaborator quipped after giving individualised consultation to 15 students across different sittings: “You should’ve measured my blood pressure instead!”

Herein lies the opportunity: a blended pedagogical model that pairs the emotional intelligence of the teacher with the cognitive provocations of the chatbot. While not a replacement, the chatbot proved to be a valuable pedagogical aid – boosting motivation, broadening perspectives, even helping stronger students turn an A into an A+.

Importantly, the adept use of GenAI tools can also enable the teacher to pay more attention to the learning process – the stages of writing an academic essay – rather than just the learning product, that is, the essay itself. The key lies not in whether we use GenAI, but in how we use it.

This nuanced stance also offers a counterpoint to a widely circulated study by Nataliya Kosmyna and colleagues from MIT Media Lab, which warns of an “accumulation of cognitive debt” when students use AI writing assistants.

According to the study, students who relied on AI reported lower ownership over their writing and retained less of what they had written. But in our view, the findings reflect a narrow use case – where students passively consumed AI-generated paragraphs rather than actively co-constructed knowledge through dialogue.

Provocateur, not ghostwriter

If GenAI tools are used merely to outsource thinking, the results will inevitably disappoint. But if used as a brainstorming or feedback partner – as a provocateur rather than a ghostwriter – students can still be the agents of their learning.

So, does the teacher matter? Undoubtedly. But so too does how we imagine the teacher’s role in the age of GenAI. The question is no longer whether to use technology, but how to do so meaningfully and equitably in the design of students’ learning experience.

To dismiss GenAI entirely is to throw the proverbial baby out with the bathwater. But to embrace it uncritically is to risk undermining the very foundations of human learning: emotion, meaning and trust.

We must not forget that education is not simply about information transfer – it is about connection, discernment and becoming. And for that, teachers are still irreplaceable. But with GenAI as a teacher aide, perhaps our teachers can have more space to offer support for students who need it most – and breathe a little easier too.

Victor Lim Fei is associate professor and deputy head (research) in the English language and literature department at the National Institute of Education, Nanyang Technological University, Singapore. Yee Jia’en is research fellow in the English language and literature department at the National Institute of Education, Nanyang Technological University, Singapore. Jerrold Quek is lecturer in the Language and Communication Centre at the School of Humanities, Nanyang Technological University, Singapore.

They acknowledge the support from the National Institute of Education Senior Academic Administrators (RS-SAA) Grant for their study on ‘Does the Teacher Matter? University Students’ Physiological Responses in Online Interactions with an AI-Chatbot and a Human Teacher’.

This article is a commentary. Commentary articles are the opinion of the authors and do not necessarily reflect the views of University World News.


Copyright: 2025 University World News