It’s time our exams caught up with the future
Every June in China, nearly 13 million students sit the gaokao, a high-stakes college entrance exam often dubbed the “most competitive exam in the world”. In 2025, however, something unusual happened: Generative artificial intelligence (AI) tools vanished during the exam, and it was on purpose.
Major Chinese tech companies like ByteDance, Alibaba and Tencent all voluntarily disabled or restricted their AI services during the exam period. AI chatbot tools refused to answer questions related to the exam syllabus, image recognition functions were suspended and, in some cases, entire services went offline.
The move was widely interpreted as a coordinated act – less about plugging genuine loopholes in the system than about sending a performative signal: that AI should stay out of this sacred domain of meritocratic assessment.
Students were told, symbolically: “You’re being watched.” Platforms were reminded: “You are not above regulation.”
Why such a strong response? Because examinations in China, and across much of Asia, are more than just administrative procedures or academic formalities. They are deeply symbolic rituals, bound up with ideas of fairness, meritocracy and upward mobility.
For generations, these high-stakes tests have been seen as the great equaliser, where academic strength is earned through effort, not inherited privilege.
The mere presence of AI risks unsettling those assumptions, raising uncomfortable questions about what counts as genuine ability, and who gets to define it.
As global headlines from The Washington Post to The Guardian have noted, this also opens up a deeper, more uncomfortable question: Are we protecting examinations from AI, or shielding examinations from change?
Singaporeans, too, know something of the ritual and rigour of national exams – from PSLE to O and A levels. But as generative AI reshapes how knowledge is accessed, processed and created, we must ask: Are our assessments evolving quickly enough?
Recent incidents at local universities have drawn attention to the challenges of integrating AI into student assessment, underscoring the need for clear guidance, ongoing dialogue and thoughtful adaptation of educational practice.
A few decades ago, open-book exams were often seen as less rigorous and viewed with suspicion by many universities.
The assumption was simple: if you weren’t working entirely from memory, it wasn’t a real test.
Yet, over time, educators realised that assessing recall alone was neither authentic nor productive. In the workplace, no one forbids you from checking your notes or using tools. The challenge is applying knowledge – interpreting, evaluating, innovating.
Today, open-book exams are increasingly common, and many universities favour continuous assessment over single-sitting exams. This isn’t merely a concession to convenience; it reflects an educational shift towards developing higher-order thinking, collaboration and adaptability.
AI takes the test
Now, with AI tools like ChatGPT, Claude and Google Gemini at our fingertips, a new inflection point is upon us.
These models don’t just retrieve information – they generate it. They draft essays, solve equations, create software code, even design graphics.
What’s more, they are becoming embedded into daily workflows – in schools, at work and across disciplines. In such a world, the idea of a student completing a task “without help” feels quaint; unaided intelligence is fast becoming a historical relic. The help is omnipresent, and improving rapidly.
So how should education respond?
One option is to double down on restrictions. Ban AI in classrooms, police usage during assignments, conduct exams under lock and key. But this creates a false dichotomy: AI as the enemy, exams as the last bastion of human purity.
It’s unsustainable – and arguably misguided.
Instead, we must reframe what it means to be competent in the AI era.
Cognitive scientist Edwin Hutchins, a pioneer of the theory of distributed cognition, observed that intelligence was traditionally perceived as the ability to perform a task without external help.
He proposed instead that intelligence lies in the ability to use available tools wisely, and to collaborate effectively.
The “smart” person is not the one who knows everything, but the one who knows how to find, filter and apply the right knowledge at the right time – with the right tools.
That’s a profound redefinition and it challenges the core of how we assess students.
If AI can solve standardised problems with ease, should we persist with standardised problems? If AI can write decent essays, shouldn’t we teach students to evaluate, critique and improve AI output?
If students will live and work in a world where AI is ever-present, shouldn’t exams simulate that world instead of pretending it doesn’t exist?
Singapore has already begun exploring these questions.
The National University of Singapore and Nanyang Technological University allow students to use generative AI in some courses, under clear guidelines, in line with institutional efforts to support responsible AI use.
Some junior college teachers have even begun experimenting with AI-aided assignments, asking students to annotate or critique AI-generated responses.
Understandably, large-scale exams must uphold fairness and objectivity.
Yet fairness must also evolve. In the AI age, it is not merely about giving everyone the same test under uniform conditions; it is about ensuring everyone has equitable access to the cognitive tools that will define the future of work and learning, and the opportunity to learn how to use them.
Teachers are a crucial part of this shift. Yet the slowest part of any educational reform is often professional development.
Many educators still feel unprepared to incorporate AI into their teaching, let alone into assessment. Meanwhile, students – digital natives in every sense – are racing ahead, sometimes guided more by Reddit threads than classrooms.
That is why the most urgent task may not be redesigning exams, but redesigning our mindset. We need to see AI not as a cheat code to be suppressed, but as a catalyst to raise the bar of what students can achieve – with the right guidance.
The recent AI blackout during China’s gaokao is telling. It wasn’t really about stopping cheating.
It was a mirror, showing us how dependent assessment systems still are on closed, individualised testing. In a world of open, collective intelligence, that model is nearing its expiry date.
The question is no longer whether AI belongs in assessment, but how to design assessments that teach and test what truly matters in an AI-augmented world.
Future assessments must distinguish between outsourcing and augmenting. In other words, are students letting AI do the thinking for them – or thinking better because of it?
The goal is not to test what they can do without AI, but what they can do with it – while still exercising judgment, integrity and originality.
Because, in the end, the real risk isn’t that AI is too smart – but that we remain stuck pretending it doesn’t exist.
Professor Looi Chee Kit is Emeritus Professor of Education, National Institute of Education (NIE), Nanyang Technological University (NTU), and research chair professor of the Education University of Hong Kong. Dr Wong Lung Hsiang is senior education research scientist at the Centre for Research in Pedagogy and Practice at NIE, NTU.
Source: The Straits Times © SPH Media Limited. Permission required for reproduction.