Published on 21 Aug 2025

Navigating the AI tide

Generative AI innovations from NTU could help address complex problems and security concerns.


Advancements in artificial intelligence (AI), particularly generative AI (GenAI), are moving at a breakneck pace. As a global university founded on science and technology, NTU is at the forefront of the AI revolution. In the U.S. News & World Report Best Global Universities rankings published in 2025, NTU placed second in the world for AI. In the same year, the University was ranked fifth globally and first in Asia for Data Science and AI by the QS World University Rankings by Subject.

Leveraging NTU’s strong AI ecosystem, researchers from the University are driving new innovations in GenAI, as well as making it more secure and trustworthy. 

Pushing Frontiers highlights some GenAI innovations developed by researchers from NTU’s College of Computing and Data Science that are heralding a new technological era powered by smart and safe AI.

Making AI more accessible

Since the development of ChatGPT, a multitude of AI chatbots with varying strengths, such as DeepSeek, have emerged. Open-source GenAI models, which include large language models (LLMs) that can handle text, images, videos and other types of information, drive further innovations in AI.

According to Prof An Bo, Head of the Division of Artificial Intelligence at NTU’s College of Computing and Data Science and Director of NTU’s Centre of AI-for-X, these models reduce the cost of deploying high-performance AI chatbots and make the use of GenAI more accessible.

“However, there is a long way to go before the widespread deployment of GenAI. It is still a challenge for AI to effectively integrate different types of information to produce accurate outputs,” says Prof An, who is also President’s Chair in Computer Science and Engineering.

Safeguarding AI from attack

At the same time, there are growing safety concerns around such AI systems. For example, hackers could design adversarial images that closely resemble genuine visual inputs to trick AI models into producing harmful outputs, such as misdiagnosing patients or causing self-driving vehicles to get into accidents.

Training LLMs on adversarial examples improves their robustness against these attacks, but such training is computationally costly, as each update requires crafting fresh adversarial examples, and it is impractical for LLMs optimised for efficiency.
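To make that cost concrete, below is a minimal sketch of classic adversarial training using fast gradient sign method (FGSM) perturbations in PyTorch. It illustrates the general technique only, not Prof Ong’s team’s method; the model, optimiser and data are assumed to be supplied by the caller.

```python
# Minimal sketch of adversarial training (illustrative only, not the
# NTU team's method). Each step crafts an FGSM perturbation of the
# inputs, then trains on the perturbed batch.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Generate adversarial examples with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on adversarially perturbed inputs."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The extra forward and backward pass needed to craft each perturbation roughly doubles the cost of every training step, which is why the approach quickly becomes prohibitive at LLM scale.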

In light of this, President’s Chair in Computer Science Prof Ong Yew Soon and his team have developed new modelling methodologies that enhance the resilience and reliability of LLMs in the face of adversarial attacks.

Their methods outperformed others at enabling LLMs to generate accurate captions for tasks that involve an understanding of visual information, even for images that have been doctored to mislead.

“Open-source AI models like DeepSeek make AI more accessible but they are also vulnerable to attacks. To maintain trust in AI systems, it is essential that we address and resolve these security concerns proactively,” says Dong Junhao, a PhD student under Prof Ong’s supervision who led the research in developing the methods.

Ensuring accuracy in the AI age

Another issue that threatens trust in AI is the propensity for AI chatbots to hallucinate and make up false information. For instance, they have been reported to fabricate fictitious references that seem legitimate but do not exist.

To boost the trustworthiness of GenAI, Asst Prof Wang Wenya has developed techniques that train chatbots to generate relevant citations while ensuring that their responses are correct. She showed that chatbots trained with a framework that rewards individual components of the output, rather than assigning a single reward to the entire result, outperform ChatGPT at generating correct responses supported by accurate citations.
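The sketch below illustrates the fine-grained reward idea under stated assumptions: each sentence of an answer is scored separately for factual correctness and citation support, and the per-sentence scores are combined, instead of assigning one reward to the whole response. The scoring functions are hypothetical placeholders, not Asst Prof Wang’s implementation.

```python
# Hedged illustration of fine-grained rewards for citation-grounded
# answers. The scorers passed in (e.g. NLI-based factuality and
# entailment models) are hypothetical placeholders.
from typing import Callable, List

def fine_grained_reward(
    sentences: List[str],
    citations: List[List[str]],          # passages cited per sentence
    is_correct: Callable[[str], float],  # factuality scorer in [0, 1]
    is_supported: Callable[[str, List[str]], float],  # citation-support scorer
) -> float:
    """Sum per-sentence rewards rather than scoring the response as a whole."""
    total = 0.0
    for sent, cites in zip(sentences, citations):
        total += is_correct(sent)            # reward factual correctness
        total += is_supported(sent, cites)   # reward citations that back the claim
    return total / max(len(sentences), 1)    # normalise by response length
```

Scoring components individually gives the training signal finer resolution: a response with one unsupported sentence is penalised only for that sentence, rather than the whole answer being rewarded or punished as a block.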

Asst Prof Wang’s analysis of the various fact-checking pipelines that chatbots use to identify misinformation also provides insights into reducing hallucinations by AI chatbots.

“With enhanced accuracy, the AI chatbots of tomorrow could function as intelligent assistants, excelling at complex tasks such as interacting with customers, helping in healthcare or education, and even accelerating scientific discoveries,” she says.

AI that understands stories

Ultimately, the potential of AI to transform society and industries rests on its ability to understand the real world. 

Unlike humans, who make sense of the world by understanding causal relationships, most AI systems cannot distinguish between causal and non-causal correlations. As a result, they may behave in ways that lack common sense.

Breaking new ground in this area is Nanyang Assoc Prof Albert Li, who is enhancing AI’s abilities to understand causal relations between everyday events and to use such understanding to comprehend story content.

These enhanced abilities would enable the AI system to explain the cause of observations in the past, plan for desirable outcomes in the future, devise surprising story twists that are believable and differentiate between legitimate and flimsy excuses.

To help AI understand cause and effect, Nanyang Assoc Prof Li and his team extracted causal knowledge from LLMs and applied it to boost AI performance on story-understanding tasks, such as evaluating story quality and matching textual stories with their video depictions.
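One way to extract such knowledge is simply to prompt an LLM. The following is a minimal sketch, assuming a generic `ask_llm` text-completion function, of querying a model about whether one everyday event plausibly causes another and turning the answers into a crude story-coherence score. It is an illustration of the prompting idea, not the team’s published method.

```python
# Minimal sketch of extracting causal knowledge from an LLM via prompting.
# `ask_llm` is an assumed text-completion function (prompt -> reply string).

def causal_score(ask_llm, cause: str, effect: str) -> float:
    """Ask the LLM whether `cause` plausibly leads to `effect`."""
    prompt = (
        f'Event A: "{cause}"\n'
        f'Event B: "{effect}"\n'
        "Does Event A plausibly cause Event B? Answer yes or no."
    )
    answer = ask_llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0

def story_coherence(ask_llm, events: list[str]) -> float:
    """Average causal score over consecutive events as a rough quality signal."""
    scores = [causal_score(ask_llm, a, b) for a, b in zip(events, events[1:])]
    return sum(scores) / max(len(scores), 1)
```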

“As the use of AI becomes more widespread, it becomes ever more important that we understand its strengths and limitations. Eventually, the security of LLMs should be built on top of their ability to understand the real world,” he says.

This article first appeared in NTU’s research & innovation magazine Pushing Frontiers (issue #25, August 2025).