Published on 07 Nov 2025

NBS Knowledge Lab Webinar: Building Trust in Artificial Intelligence (AI): Human, Cultural, and Governance Perspectives

On 21 October 2025, the NBS Knowledge Lab hosted a webinar on a critical question in today’s digital world: how can trust between humans and AI be established? Moderated by Associate Professor Georgios Christopoulos from Nanyang Business School (NBS), the session brought together three experts: Sherie Ng, Executive Director, Singapore, Singlife Philippines; George Seah, Director, SEA Data & AI, Capgemini Invent; and Manprit Singh, Distinguished AI Architect, Engineer, Mentor, Celebal Technologies. Their conversation highlighted that trust in AI goes beyond algorithms and governance. It also requires understanding human psychology, cultural expectations and leadership choices. 

Key Takeaways for Leaders 

  1. Trust grows through understanding AI’s capabilities, limits and impact. Education, transparency and communication are essential to overcoming fear and building confidence in AI. 

  2. Governance must enable innovation. Adaptive frameworks and co-creation encourage accountability while driving experimentation. 

  3. Leadership and culture shape adoption. Culturally intelligent leaders who foster learning and collaboration can bridge the gap between humans and machines. 

Human Factors in AI Adoption 

While much debate on AI centres on technical reliability, the panellists agreed that the most persistent barriers to adoption are human. Ms Ng highlighted psychological and economic fears, particularly concerns about job displacement, as major obstacles to trust. “AI adoption is inevitable,” she said, “but trust grows only when people feel empowered, not replaced.” She emphasised that governments play a pivotal role in supporting this transition through AI education, infrastructure and upskilling initiatives. 

Mr Seah observed that mistrust often stems from misunderstanding what AI can and cannot do. “Once employees see how AI empowers rather than replaces them, trust begins to form,” he noted. His experience with organisations across sectors suggests that practical exposure and experimentation are vital to acceptance. 

Adding a practitioner’s view, Mr Singh described how trust develops through experience. In healthcare, radiologists who were initially sceptical of AI came to appreciate it as a collaborator that handled repetitive diagnostic tasks, allowing them to focus on complex analysis. “When AI helps people do what they do best, trust follows naturally,” he said. 

Cultural and Organisational Dimensions 

Culture shapes how individuals and organisations engage with technology. Drawing on her regional leadership experience, Ms Ng explained that attitudes toward hierarchy and uncertainty differ across societies and influence how quickly people adopt AI. “A workflow introduced in Singapore may face less resistance than in the Philippines, where employees often look to leaders for reassurance,” she said. 

She argued that education systems and organisations must embed AI literacy and human–machine collaboration into their training and curricula. “Curricula should not just teach tools but cultivate judgment, creativity and the art of collaboration,” she advised. Within organisations, she urged leaders to promote experimentation and co-creation, noting that “we are playing a new game, and it requires new rules.” 

The conversation also touched on the cultural sensitivity of AI design. Large language models (LLMs), Ms Ng observed, are global in scope but depend on local data and values for relevance. Designing culturally adaptive systems, she suggested, is key to creating inclusive and trustworthy AI that resonates across societies. 

Governance, Responsibility, and the Ethics of Trust 

Governance was identified as a cornerstone of responsible AI adoption. Mr Seah cautioned that compliance frameworks should not be viewed as restrictive but rather as enablers of innovation. “There is no one-size-fits-all model,” he explained, citing Singapore’s IMDA guidelines as an example of adaptive governance that supports experimentation while ensuring accountability. 

He outlined four guiding principles for effective AI governance: 

  1. Customisation – align frameworks with organisational goals and regulatory environments. 

  2. Co-creation – involve users early to ensure practical safeguards and human oversight. 

  3. Clarity – define boundaries that enable, rather than constrain, innovation. 

  4. Collaboration – integrate governance into daily human–AI teamwork. 

Measuring trust, Mr Seah noted, requires proxy indicators such as accuracy, transparency and reliability, but also the ability to learn from failure. “Governance should make failures visible early so organisations can recover faster and stronger,” he said. 

Mr Singh added that ethical expectations must be proportional to context and risk. “Even human experts rely on intuition,” he remarked. “Expecting AI to explain every decision perfectly is unrealistic. What matters is ensuring fairness and accountability appropriate to the task.” 

The Leadership Imperative and the Path Ahead 

Leadership, the panellists agreed, is pivotal to fostering trust in AI. Ms Ng emphasised that organisations must be redesigned for hybrid workforces, where humans and AI collaborate as co-workers. “This is not just a technological shift,” she said. “It is a cultural one that requires leaders to redefine roles, responsibilities and purpose.” 

Assoc Prof Christopoulos reflected that trust in AI is not a technical endpoint but a behavioural journey. “Experimentation is essential,” he said. “We must learn, adapt and co-create the future of human–AI interaction together.” 

The Future of Trustworthy AI 

Looking ahead, the speakers agreed that societies will continue to adapt to AI, much as they once adapted to electricity. However, they also acknowledged that this is still an unfolding journey, with uncertainties and new challenges emerging as AI becomes more pervasive. The panellists emphasised the importance of maintaining balance so that progress remains ethical, inclusive and centred on people. Mr Seah noted that governance frameworks must evolve as rapidly as AI technologies. Mr Singh highlighted the role of educators and policymakers in preparing the next generation to use AI responsibly. 

As the session drew to a close, Assoc Prof Christopoulos underscored that AI’s purpose extends beyond efficiency. “The goal,” he said, “is not only to increase productivity but to enhance creativity, empathy and well-being.” 

Watch the webinar here: