The 1st ACDC & SEA-LION Workshop: Culturally Aligned AI
Call For Abstracts
ACDC and SEA-LION are delighted to announce that the first workshop on Culturally Aligned AI will be held at NTU on 6th March 2026. With the advancement of large language models (LLMs), there is an increasing need for interdisciplinary collaboration to develop human-centred LLMs that accommodate diverse languages and cultures, and to evaluate their impact on different societies. This workshop aims to bring together researchers from multiple disciplines, colleges, and institutions to contribute insights into one or more of the following themes:
1) Cultural foundations
Focus: How can culture-related concepts be formally represented and operationalised in AI systems without oversimplification? Topics include:
- Defining culture-related concepts that remain unclear and have proven difficult to engineer into AI systems
- Consolidating the vaguely used culture-related terminology in LLM research
- Developing cultural alignment methodologies (algorithms, architectures, frameworks, prompts) that enable AI to understand linguistic and communicative practices in Asian contexts and, more broadly, across the globe
2) Design and evaluation
Focus: How do we build pipelines to curate the right data, evaluate LLMs’ cultural performance, and design interaction processes that work for real people? Topics include:
- Designing, creating, and curating textual, speech, and visual data for the development of multilingual AI models
- Developing evaluation methods for culturally aligned models
3) Human engagement, social impact, and governance
Focus: How do people interact with AI systems in everyday contexts, and what are the broader societal consequences? Topics include:
- Studying the role of humans in using, deploying, and developing AI technologies in everyday life in SEA
- Optimising human-AI interaction based on human factors such as cultural identity, relational needs, and emotional support
- Exploring how to enhance positive societal impact and avoid harms (e.g., cultural homogenisation, language extinction)
- Providing AI governance principles and guidelines for SEA decision-makers
We invite abstracts (max. 500 words, excluding references) that present the researcher’s current and planned research directions. Submissions should clearly indicate their relevance and contribution to one or more of the above topics and highlight interdisciplinary connections where applicable.
Accepted abstracts will be presented as short oral presentations.
Important Dates
Abstract submission: Friday, 6th February 2026
Acceptance notification: Friday, 14th February 2026
Workshop date: Friday, 6th March 2026 (whole day event)
Submission Methods
If you have any enquiries, please contact:
Zoe Xi Chen (SoH): [email protected]
Wei Lu (CCDS): [email protected]