GenAI helps students think fast. It’s time universities teach them to think slow
Nobel Prize-winning psychologist Daniel Kahneman proposes that we think using two systems: one fast and intuitive, the other slow and deliberate.
System 1 handles familiar tasks automatically – like solving 2 × 2 or sensing a friend’s mood. System 2 kicks in for effortful mental work – like evaluating the logic of an argument or finding the right words to comfort someone.
While this summary barely scratches the surface of Professor Kahneman’s work, his framework offers a useful lens to examine a growing tension in higher education: the rise of generative artificial intelligence (Gen AI).
University is fundamentally about learning – and learning demands system 2 thinking. Students must pause, reflect, and actively make sense of information.
But with the introduction of AI, the temptation is increasingly to bypass this learning process. AI provides fluent, polished responses that feel correct. Under pressure, many students engage with AI the way system 1 thinks – quickly, intuitively and without scrutiny.
And if we are not careful, students may begin to use Gen AI as a shortcut without developing the analytical muscle that education is supposed to build.
Before AI use becomes autopilot, students have to be trained to engage with it deliberately.
Take the calculator as an example. When it was first introduced into schools in the 1970s, it required conscious effort: how to use it, when to use it, and what to input. Students had to deliberately practise with it, guided by teachers and curriculum structures, before it became routine.
Over time, using a calculator shifted from system 2 to system 1. But crucially, that shift was supported by a deep prior understanding of mathematical principles and guidance from experts at the time.
The same should apply to AI. But in the rush to adopt the newest tools, or prohibit them, many institutions may be skipping crucial steps, and students are left to fend for themselves.
Why AI use in higher education needs guidance
In June, three university students in Singapore received zero marks for essays that were allegedly generated using AI, despite clear guidelines that disallowed it. Yet in many other modules, AI tools are not only permitted, but encouraged. The message is inconsistent.
Worse, it places the burden of decision-making squarely on students, many of whom are still learning how to learn in this new age of AI. In the absence of guidance, they rely on what feels intuitive – the system 1 approach – using ChatGPT to produce answers quickly rather than think critically with it.
But that is not a reason to ban AI. If the university is meant to train the mind, then AI should not just be permitted, but also taught explicitly and critically. Because only then will students stop using AI to bypass thinking, and start using it to become better thinkers.
So, how should we think about AI in higher education?
The prevailing wisdom is that AI excels at tasks best handled by system 1, such as generating summaries, producing drafts and sorting through information, while humans should retain control over system 2 processes like interpretation, ethical judgment and critical thinking.
This is a convenient and effective arrangement for students as well. They get inspiration and essay structures from AI, while making sure the end product is still somewhat theirs.
But what if that framing misses the opportunity altogether? What if, instead of simply dividing labour, where AI handles the routine, and humans handle the thinking, we used AI to train the very thinking we want to cultivate?
The opportunity, then, is not just in making tasks easier, but in using AI in a structured way to strengthen deliberative thinking. By forcing students to iterate, evaluate and reflect, AI can serve as a tool that not only saves time but also develops system 2 habits.
AI as a sparring partner
Prof Kahneman’s research shows that human beings are naturally inclined to rely on heuristics and shortcuts, particularly when under pressure. If AI use simply reinforces these habits, students become more entrenched in automatic thinking.
But if AI is used to challenge assumptions, surface blind spots and demand reflection, it could help students practise resisting system 1 impulses, and build stronger system 2 responses.
Consider a student writing a policy paper with AI. If they use a chatbot to draft an outline and stop there, they have engaged system 1 – outsourcing the heavy lifting without reflection.
But if they prompt the tool with different framings, ask for counterarguments, spot hallucinations, verify citations and rewrite weak sections, that is system 2 in action. AI becomes less like a shortcut, and more like a cognitive sparring partner.
Breaking the habit of lapsing into system 1, however, does not happen overnight. Universities must do their part in guiding students, and create structures and assessment methods that encourage and reward deeper engagement.
Dr Jean Liu, director at the Centre for Evidence and Implementation, said universities must orient students to the rules of engagement, like helping them understand best practices and ethical considerations in using AI, such as the misuse of intellectual property or the technology’s environmental impact. In practice, she added, institutions can create collaborative forums where students and educators exchange ideas and refine their use of AI together.
Another approach is to offer courses in prompt engineering, which teach students how to craft effective queries for Gen AI. A 2025 journal study found that such training enhances output quality, metacognitive control and transferable skills such as problem decomposition, iterative thinking and evaluative judgment.
Assessment design matters too. The Gen AI Assessment Scale, developed by international academics, outlines six levels of acceptable AI use – from idea generation to editing – helping universities set clear expectations. The University of Sydney’s two-lane system is another example: one lane features AI-free, in-person exams, while the other allows AI use with proper citation and transparency.
Assessments can also evaluate both product and process. Students could submit AI-generated drafts alongside their revisions and a reflection explaining how they identified hallucinations, addressed bias or made edits.
Alternatively, they might compare their work with an AI version, answer reflective prompts like “What did AI miss?” or respond with counterarguments. Such practices support system 2 habits.
The goal is not the answer – it’s deeper thinking
A study by Associate Professor Victor Lim from the National Institute of Education found that students demonstrated deeper thinking when they challenged chatbot responses, refined the points offered and added culturally or personally relevant insights.
Deliberate teaching of these habits turns AI from a shortcut into a tool for growth.
Prof Lim also notes that learning with AI may offer added benefits, such as psychological safety – allowing students to ask “embarrassing” questions and clarify doubts without fear of judgment.
Students should recognise, and be taught, that the real educational value lies not in the answers provided by AI, but in how they analyse the responses.
Just as teachers once guided calculator adoption, they must now model and scaffold thoughtful AI engagement – helping students learn how, when and why to use it, and what good use looks like.
The question is no longer whether AI should be integrated into education – it already is. What matters now is the “how”.
Banning it outright or allowing unchecked use denies students the opportunity to practise what education is meant to cultivate: the ability to think critically, reflectively and well.
System 2 is like a muscle – it atrophies without use and strengthens with challenge. AI, if framed properly, can provide that challenge.
Not because it gives us the right answer, but because it gives us something to push against.
Source: The Straits Times © SPH Media Limited. Permission required for reproduction.