Uncertain and Promising: Reflections on AI, Judgement, and Responsibility

Across the discussions at the IAS Frontiers Conference on AI, held as part of Singapore’s AI Research Week, a clear pattern emerged. Bringing together researchers working across artificial intelligence, learning, ethics, and social impact, the programme consistently returned to questions of limits, responsibility, and judgement.
This was not a day defined by technical spectacle or showmanship. Instead, attention turned to what it means when artificial intelligence moves beyond the lab and into the fabric of society, and to a more difficult set of questions: how decisions are made under constraint, how systems behave over time, and how ethical responsibility must be understood when technologies are no longer experimental but embedded.

From optimisation to sustained engagement
The opening session set the tone. Milind Tambe, Professor of Computer Science at Harvard University and Director of the Center for Research on Computation and Society, presented work on using AI to support better health behaviours in maternal healthcare. What stood out was not only the technical approach, but the emphasis on long-horizon, adaptive intervention.
Here, AI was positioned not as a replacement for human judgement, but as a tool that supports it over time – learning from behaviour, adjusting to context, and operating within real-world constraints. The work highlighted a recurring idea that would resurface throughout the day: meaningful impact rarely comes from one-off predictions or optimisations. It comes from sustained engagement with human systems.
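
To make that pattern concrete, here is a minimal sketch of a budget-constrained outreach loop – an illustration of the general idea, not the system presented in the talk, with all names and numbers invented. A Thompson-sampling learner decides each week which participants to contact and updates its beliefs from the responses it observes.

```python
import random

# Minimal sketch of a long-horizon, budget-constrained intervention loop.
# Illustrative only: participants and probabilities below are invented.
# Each week only a few people can be reached; a Beta-Bernoulli Thompson
# sampler learns over time whom a call is most likely to help.

NUM_PARTICIPANTS = 20
WEEKLY_BUDGET = 3      # calls available per week: the real-world constraint
NUM_WEEKS = 52

# Hidden per-participant probability that a call improves engagement.
true_response = [random.uniform(0.1, 0.9) for _ in range(NUM_PARTICIPANTS)]

# Beta(1, 1) priors: alpha tracks observed successes, beta observed failures.
alpha = [1.0] * NUM_PARTICIPANTS
beta = [1.0] * NUM_PARTICIPANTS

for week in range(NUM_WEEKS):
    # Sample a plausible response rate for everyone, then spend the limited
    # budget on the participants whose sampled rate is highest.
    sampled = [random.betavariate(alpha[i], beta[i])
               for i in range(NUM_PARTICIPANTS)]
    chosen = sorted(range(NUM_PARTICIPANTS),
                    key=lambda i: sampled[i], reverse=True)[:WEEKLY_BUDGET]

    for i in chosen:
        if random.random() < true_response[i]:
            alpha[i] += 1      # the call helped
        else:
            beta[i] += 1       # it did not

# After a year, posterior means reflect what sustained engagement taught us.
estimates = [alpha[i] / (alpha[i] + beta[i]) for i in range(NUM_PARTICIPANTS)]
print([round(e, 2) for e in estimates])
```

The point of the loop is that no single week's decision matters much; what matters is that the allocation keeps adapting as evidence accumulates.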
Incentives, interaction, and where systems settle
That perspective broadened as attention shifted from individual decisions to interacting agents and systems. Michael Wooldridge, Professor of Computer Science at the University of Oxford and a leading authority on multi-agent systems, explored how strategic behaviour unfolds in repeated settings, and how systems tend to stabilise into equilibrium states.
A key insight echoed across the discussion: stability does not necessarily imply desirability. Systems may converge on outcomes that are inefficient or socially suboptimal simply because incentives are misaligned. In such cases, changing outcomes is less about correcting individual behaviour and more about reconsidering the rules and structures that shape interaction.
This framing underscored the importance of institutional and design choices in shaping how AI systems behave once deployed.
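
The point is easy to see in the textbook Prisoner's Dilemma – a standard teaching example, not a system discussed at the conference. Mutual cooperation is jointly best, yet mutual defection is the only outcome no player can improve on alone.

```python
# Stable but not desirable: the classic Prisoner's Dilemma.
# payoff[(row_action, col_action)] = (row player's payoff, column player's)
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(a_row, a_col):
    """True if neither player gains by unilaterally changing strategy."""
    row_pay, col_pay = payoff[(a_row, a_col)]
    best_row = max(payoff[(alt, a_col)][0] for alt in actions)
    best_col = max(payoff[(a_row, alt)][1] for alt in actions)
    return row_pay >= best_row and col_pay >= best_col

for a in actions:
    for b in actions:
        total = sum(payoff[(a, b)])
        print(f"{a}/{b}: joint payoff {total}, equilibrium: {is_nash(a, b)}")

# Only defect/defect (joint payoff 2) is an equilibrium, while
# cooperate/cooperate (joint payoff 6) is unstable.
```

The bad outcome here is not the result of individual error; it is what the incentive structure makes stable – which is exactly why changing the rules matters more than correcting behaviour.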
Bounded rationality and real-world constraint
Questions of constraint were taken up directly in Shlomo Zilberstein’s discussion of bounded rationality and decision-making under limited time and information. A Professor of Computer Science at the University of Massachusetts Amherst, Zilberstein is known for his work on decision-theoretic planning and reasoning under resource constraints.
Rather than pursuing theoretical optimality, his work focuses on acting well under realistic conditions – including uncertainty, incomplete knowledge, and computational limits. Across several examples, the emphasis shifted from being “right” in hindsight to minimising regret given what is known at the time.
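
One standard way to make that shift concrete is minimax regret, sketched below with hypothetical payoffs (an illustrative textbook construction, not drawn from the talk). Each action is scored not by its raw payoff but by how far it falls short of the hindsight-best action in each possible scenario.

```python
# Minimax-regret choice under uncertainty, with invented numbers.
# We must act before knowing which scenario holds; "regret" is how much
# worse an action does than the best action for that scenario.
payoffs = {
    "aggressive":   {"good": 10, "bad": -8},
    "balanced":     {"good": 6,  "bad": 1},
    "conservative": {"good": 2,  "bad": 2},
}
scenarios = ["good", "bad"]

# Best achievable payoff in each scenario: the hindsight benchmark.
best_in = {s: max(payoffs[a][s] for a in payoffs) for s in scenarios}

# Each action's worst-case shortfall from the hindsight-best; a
# regret-minimising agent picks the action with the smallest one.
worst_regret = {
    a: max(best_in[s] - payoffs[a][s] for s in scenarios) for a in payoffs
}
choice = min(worst_regret, key=worst_regret.get)
print(worst_regret)  # {'aggressive': 10, 'balanced': 4, 'conservative': 8}
print("Minimax-regret choice:", choice)  # 'balanced'
```

Note that the regret-minimising choice is not the best action in any single scenario; it is simply the one the agent is least likely to deeply regret, whatever turns out to be true.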
Implicit in this framing was an acknowledgement of what many participants recognised intuitively: there is always a messy layer in real-world deployment – where models encounter ambiguity, context shifts, and human unpredictability. Effective systems, the discussion suggested, are those designed to operate with this messiness, rather than pretending it can be eliminated.
Learning, failure, and persistence
Learning under uncertainty was another recurring theme. Peter Stone, Professor of Computer Science at the University of Texas at Austin and a pioneer in multi-agent learning and robotics, drew on work modelling infant walking to show how learning often depends on failure.
In these simulations, falling was not treated as error to be avoided, but as an essential source of information. The implication extended beyond robotics. Systems that are allowed to explore, err, and recover tend to develop greater robustness than those tightly constrained in the name of safety or efficiency.
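
A toy two-armed example – not the walking simulations themselves – illustrates the trade-off: a policy that never risks a fall locks in modest returns, while a learner allowed to explore, and occasionally fail, discovers the better option.

```python
import random

# Toy two-gait "walking" problem, purely illustrative. The cautious gait
# never falls and always makes modest progress; the bold gait falls 40% of
# the time but is better on average. A learner that never explores sticks
# with the guaranteed 1.0 per step; an epsilon-greedy learner that
# tolerates some falls finds the bold gait's higher average return.

random.seed(0)

def try_gait(gait):
    """Stochastic reward for one step with the chosen gait."""
    if gait == "cautious":
        return 1.0                                   # safe, modest progress
    return 3.0 if random.random() < 0.6 else -1.0    # bold: mean reward 1.4

def epsilon_greedy(epsilon, trials=5000):
    totals = {"cautious": 0.0, "bold": 0.0}
    counts = {"cautious": 1, "bold": 1}   # start at 1 to avoid divide-by-zero
    score = 0.0
    for _ in range(trials):
        if random.random() < epsilon:
            gait = random.choice(["cautious", "bold"])     # explore: may fall
        else:
            gait = max(counts, key=lambda g: totals[g] / counts[g])  # exploit
        reward = try_gait(gait)
        totals[gait] += reward
        counts[gait] += 1
        score += reward
    return score / trials

print("never explores  :", round(epsilon_greedy(0.0), 2))  # stays cautious
print("explores, falls :", round(epsilon_greedy(0.1), 2))  # nears bold's 1.4
```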
In closing, Stone offered a reminder that resonated widely: the hardest problems are sustained not by novelty alone, but by commitment and passion.

Ethics, memory, and integrity
Ethical responsibility came into sharp focus in Sarit Kraus’s presentation, which examined how AI systems engage with memory, testimony, and historically sensitive domains more broadly. Kraus, a Professor of Computer Science at Bar-Ilan University and a leading figure in AI ethics and multi-agent systems, explored domains where questions of optimisation give way to concerns of dignity, care, and representation.
The work highlighted areas in which restraint, context, and human oversight are not optional, but foundational.
As moderator of the final panel, Kraus helped draw together ideas from across the day. Rather than treating ethics as a separate track, she consistently situated it as an organising principle – asking how technical choices translate into social consequences, and how responsibility persists long after deployment.
A question that lingered by the end of the discussion was how the AI community can retain its integrity as these systems become ever more embedded.
What does “good” look like?
That concern was sharpened during the same panel discussion when Adam Gleave, co-founder of FAR.AI, posed a deceptively simple challenge: what does good look like? Given his work on frontier AI security and red-teaming large language models, the question carried particular weight.
Rather than treating evaluation as a matter of benchmarks or performance scores, the discussion reframed it as a question of judgement under pressure – including how systems behave in adversarial settings, who bears risk when things go wrong, and what forms of failure are deemed acceptable.
With broad agreement that LLMs and related systems are here to stay, attention turned toward how they are integrated and governed. The notion of augmented intelligence, rather than wholesale replacement, emerged as a way to keep human judgement visible and accountability intact.
Uncertain – and promising
The discussions at the IAS Frontiers Conference on AI pointed to a clear stance – as AI systems become more embedded in everyday contexts, questions of judgement, responsibility, and evaluation can no longer be deferred.
Intelligence – human or artificial – was consistently framed as bounded, contextual, and social. Progress, in this view, depends less on eliminating uncertainty than on designing systems that can operate responsibly within it.
The implication is straightforward: ethics is not an add-on, integrity is not automatic, and definitions of “good” must be made explicit and revisited over time. Those choices are collective, ongoing, and unavoidable.
The IAS Frontiers Conference on AI 2026 was hosted by the College of Computing and Data Science (CCDS) and co-organised by the Centre of AI-for-X and the Institute of Advanced Studies. It was supported by Singapore’s Infocomm Media Development Authority (IMDA).