AI in the Classroom: How Business Schools Can Teach Students to Think Critically with Generative Tools
Why It Matters
As AI tools like ChatGPT become common in classrooms, business schools must rethink not just what students learn but how they learn. This case study shows how structured prompting and guided reflection can turn AI into a tool for deeper thinking.
Key Takeaways
- A three-step assignment structure using AI prompts helps students build higher-order thinking skills.
- Prompt design, collaboration, and reflection are key to helping students engage critically with GenAI tools.
- The approach can inform AI integration in business education more broadly, from assessment to curriculum planning.
Rethinking How Students Learn in the Age of AI
With generative AI tools such as ChatGPT becoming widely accessible, many educators worry that students may use them to bypass learning altogether. But instead of banning these tools, Nanyang Business School (NBS) at Nanyang Technological University, Singapore, asked a different question: What if we taught students to work with AI in a way that deepens learning?
To explore this, Dr Kumaran Rajaram, Senior Lecturer in Leadership and Management, designed a pilot programme involving around 600 undergraduate and postgraduate students. The goal was to embed generative AI (GenAI) into coursework — not as a shortcut, but as a way to develop critical thinking and problem-solving skills.
The project tested how GenAI could be integrated meaningfully into assignments, focusing not just on content creation but on analysis, discussion and reflection. At its core was the belief that students can learn to use AI responsibly and effectively — if given the right guidance.
A Three-Step Framework for Learning with AI
The pilot introduced a structured, three-part assignment:
- Pre-class: Students worked individually with faculty-designed AI prompts to explore a case study. These prompts were intentionally varied to encourage different outputs.
- In-class: In small groups, students compared their AI-generated results and debated which approach led to stronger insights.
- Post-class: Each student then submitted an individual reflection analysing the strengths and limitations of the GenAI outputs discussed in class.
This method encouraged “guided failure”, where students confronted the limitations of AI — such as hallucinated facts or superficial analysis — and learned to identify them. It moved beyond content consumption to help students question, critique and synthesise information.
Rubrics adapted from Bloom’s Taxonomy were used to assess how well students demonstrated higher-order thinking. These included criteria for evaluating reasoning, comparing perspectives, and forming evidence-based conclusions.
What the Pilot Revealed About AI in Education
The pilot uncovered valuable lessons — not just for students, but for institutions looking to scale AI in education.
One key challenge was designing effective prompts: some initial variations were too dissimilar to allow meaningful comparison, and refining them for clarity and comparability produced better learning outcomes. Another challenge was student mindset: those from technical backgrounds often preferred clear-cut answers and needed more support to navigate open-ended, interpretive tasks.
Technical limitations also surfaced. The pilot relied on the free version of ChatGPT, which mimicked real student use but lacked customisation features. Despite this, it proved useful in identifying where and how GenAI could support learning, and where further investment would be needed.
As Dr Rajaram explains, the focus was never about replacing human judgment or creativity. Instead, it was about reshaping pedagogy so that AI becomes a tool for developing analytical thinking. “It’s not just about giving answers,” he notes, “it’s about challenging students to analyse, evaluate and synthesise information critically.”
Business Implications
As more companies adopt AI tools to support decision-making, business schools have a role to play in preparing graduates to use these technologies thoughtfully. This project provides a practical model for integrating AI into curriculum and assessment design in a way that promotes — not undermines — rigorous thinking.
For organisations, this approach also offers a roadmap for internal training programmes. Using structured prompts, collaborative review, and reflection can help employees engage more effectively with AI tools in areas like strategic planning, market research or policy analysis.
Importantly, the study highlights the need for ethical and critical awareness when working with AI-generated outputs — skills that are increasingly vital in a tech-driven workplace.
Authors & Sources
Author: Dr Kumaran Rajaram (Nanyang Technological University)
Original Case Study: Graduate Management Admission Council