SPEAKER PROFILES

Talk Title
Learning World Simulators from Data

Abstract
Modern foundation models have achieved superhuman performance on many logical and mathematical reasoning tasks by learning to think step by step. However, their ability to understand video, and, consequently, to control embodied agents, lags behind. They often misrecognize simple activities and hallucinate when generating videos. This raises a fundamental question: What is the equivalent of thinking step by step for visual recognition and prediction?

In this talk, we argue that step-by-step visual reasoning has much to do with inverting a physics simulator, that is, mapping raw video pixels back to a structured, 3D-like neural representation of the world. This involves inferring 3D neural representations of objects and parts, along with their 3D motion and appearance trajectories, and estimating camera motion, 3D scene structure, and physical properties. We will discuss methods to automatically extract such 3D neural representations from images and videos using generative model priors and end-to-end feed-forward models. We will present methods that inject this knowledge of camera motion and 3D scene structure into modern VLMs and show that it improves their ability to ground language and control robot manipulators.
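To make the notion of simulator inversion concrete, the following is a minimal sketch, in Python, of the structured state such an inverse model might recover from video. All class and field names are illustrative assumptions, not the speaker's actual system; the point is only the input/output contract: pixels in, an editable, physics-ready scene description out.

```python
# Hypothetical sketch of an "inverse simulator" interface: the structured,
# 3D-like state a model might recover from raw video. All names and fields
# are illustrative assumptions, not a description of an actual system.
from dataclasses import dataclass, field

@dataclass
class ObjectTrack:
    name: str
    positions: list[tuple[float, float, float]]            # per-frame 3D centroid
    orientations: list[tuple[float, float, float, float]]  # per-frame quaternion
    mass_estimate: float                                    # inferred physical property

@dataclass
class SceneState:
    objects: list[ObjectTrack] = field(default_factory=list)
    camera_poses: list[list[float]] = field(default_factory=list)  # per-frame extrinsics

def invert_simulator(video_frames: list) -> SceneState:
    """Map raw pixels back to a structured 3D scene state.
    A real system would run 3D perception and generative priors here;
    this stub only documents the input/output contract."""
    return SceneState()  # placeholder: no actual inference
```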

How can we scale up annotations for such simulator inversion? We will discuss methods that use generative models of language and vision to automate the development of 3D simulations in physics engines. Additionally, we will discuss our efforts in developing faster and more general physics engines. Integrating physics engines with generative models aims to automate the replication of real physical environments within the simulator, enabling more accurate and scalable world-simulation data for sim-to-real learning of 3D perception and action. We believe such real-to-sim and sim-to-real learning paradigms hold great promise for developing robots that can see and think accurately, step by step.

Speaker
Katerina Fragkiadaki
JPMorgan Chase Associate Professor, Carnegie Mellon University

Biography
Katerina Fragkiadaki is the JPMorgan Chase Associate Professor in the Machine Learning Department at Carnegie Mellon University. She received her undergraduate degree in Electrical and Computer Engineering from the National Technical University of Athens, and her Ph.D. from the University of Pennsylvania. She subsequently held postdoctoral positions at UC Berkeley and Google Research. Her research focuses on enabling few-shot and continual learning for perception, action, and language grounding. Her work has been recognized by a Best Ph.D. Thesis Award, the NSF CAREER Award, AFOSR and DARPA Young Investigator Awards, as well as faculty research awards from Google, Toyota Research Institute, Amazon, NVIDIA, UPMC, and Sony. She served as a Program Chair for ICLR 2024.

Talk Title
Security at the Frontier: Lessons from Red-Teaming

Abstract
Frontier models have been used by bad actors to launch offensive cyberattacks and to help develop improvised explosive devices, and companies assess their models as increasingly capable of aiding the development of chemical and biological weapons. This talk surveys the current state of safeguards to prevent such misuse, drawing on our experience testing models from major developers including Google, OpenAI, and Anthropic. We will show that model safeguards have improved rapidly: the most robust models now take us weeks rather than hours to jailbreak in the most protected domains. However, safeguard deployment has been inconsistent across developers, and safeguards are often narrowly targeted, leaving significant gaps in model security. We will conclude with a survey of the technical and policy mitigations needed for increasingly capable models to be deployed safely.

Speaker
Adam Gleave
Founder & CEO, FAR.AI

Biography
Adam Gleave is CEO and co-founder of FAR.AI, an AI safety research institute working to ensure advanced AI systems are safe and beneficial. His research focuses on AI security, red-teaming frontier models including GPT-5 and Opus 4, and developing scalable oversight techniques to address deception in AI systems. Adam's work has been cited in congressional testimony and featured in the Financial Times and MIT Technology Review, with papers receiving best paper awards at leading conferences. He serves on the boards of the Safe AI Forum (SAIF), Model Evaluation and Threat Research (METR), and the London Initiative for Safe AI. Adam holds a PhD from UC Berkeley, where he was supervised by Stuart Russell, and previously worked at Google DeepMind.

Talk Title

Abstract

Speaker
Associate Professor Stefano Ermon
Stanford University

Talk Title
Smarter Together? Assisting Humans in a World of Intelligent Agents 

Abstract
The rapid advancement of intelligent systems—ranging from large language models (LLMs) to autonomous drones and robots—has unlocked remarkable capabilities while introducing new challenges for human interaction and coordination. As these systems grow in autonomy and complexity, humans may find it increasingly difficult to collaborate effectively or maintain situational awareness.

Intelligent advising agents can ease cognitive load and enhance decision-making by explaining AI-generated recommendations and helping users judge when to act on them. Our goal is to ensure that AI + Human > max(AI, Human): that the partnership outperforms either alone.

This presentation examines challenges in human–AI collaboration and presents methods for LLM-based agents to align goals, share understanding, and adapt interactions. We propose strategies for designing assisting agents that improve human performance, trust, and satisfaction in human–AI–robot teams.

Speaker
Professor Sarit Kraus
Bar-Ilan University 

Biography
Sarit Kraus is a Professor of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems that integrate machine learning (including LLMs) with optimization, logic, and game theory. She develops agents capable of interacting effectively with people and robots.

She has received many honors, including the IJCAI Computers and Thought and Research Excellence Awards, the ACM SIGART and Athena Lecturer Awards, the EMET Prize, and two IFAAMAS Influential Paper Awards. A Fellow of ACM, AAAI, and EurAI, she also received an ERC Advanced Grant and a commendation from the City of Los Angeles for the ARMOR system.

Kraus has published over 400 papers, co-authored five books, chaired IJCAI-2019, will chair IJCAI-2027, and is a member of the Israel Academy of Sciences.

Talk Title
SOFAI: Thinking Fast and Slow in AI

Abstract
SOFAI (Slow and Fast AI) is a cognitive architecture based on "fast" and "slow" solvers and a metacognitive component that governs the solving processes. I will describe both the general architecture and some of its instances, including experimental results showing that combining the fast and slow solvers significantly improves decision quality while reducing resource consumption and improving efficiency. The results also show that LLMs, combined with a metacognitive feedback loop, can be more performant than large reasoning models (LRMs), thus saving resources at inference time.
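As an illustration of the governance idea, here is a minimal sketch of a metacognitive arbiter that tries a fast solver first and escalates to a slow one. The threshold, budget, and solver interfaces are invented for this sketch and do not reflect the actual SOFAI implementation.

```python
# Minimal sketch of a SOFAI-style arbitration loop: a metacognitive governor
# tries the fast solver first and escalates to the slow solver when confidence
# is low. Thresholds and solver interfaces are invented for illustration only.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Solution:
    answer: Any
    confidence: float  # solver's self-assessed confidence in [0, 1]
    cost: float        # resources consumed (e.g., seconds or tokens)

def metacognitive_governor(
    problem: Any,
    fast: Callable[[Any], Solution],
    slow: Callable[[Any], Solution],
    confidence_threshold: float = 0.8,
    budget: float = 10.0,
) -> Solution:
    """Route a problem: accept the fast answer when it is confident enough,
    otherwise spend extra resources on slow, deliberate solving."""
    s1 = fast(problem)
    if s1.confidence >= confidence_threshold or s1.cost >= budget:
        return s1  # fast answer accepted (or no budget left to deliberate)
    s2 = slow(problem)
    # Prefer whichever solution the governor trusts more.
    return s2 if s2.confidence > s1.confidence else s1
```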

Speaker
Francesca Rossi
IBM

Biography
Francesca Rossi is an IBM Fellow and the IBM Global Leader for Responsible AI and AI Governance. She works at the IBM T.J. Watson Research Center in New York, USA. She is a fellow of both AAAI and EurAI. She has been president of IJCAI and of AAAI. Her current research focus is on neuro-symbolic AI and cognitive architectures.

She is a founding member of the board of the Partnership on AI, and she co-chairs the OECD Expert Groups on Trustworthy AI Investments and AI Agents. She also co-chairs the IBM Responsible Tech board, which provides governance for all AI ethics activities at IBM.

Talk Title

Abstract

Speaker
Professor Peter Stone
University of Texas at Austin, USA

Biography
Peter Stone's main research interest in AI is understanding how we can best create complete intelligent agents. He considers adaptation, interaction, and embodiment to be essential capabilities of such agents; thus, his research focuses mainly on machine learning, multiagent systems, and robotics. To him, the most exciting research topics are those inspired by challenging real-world problems, and he believes that complete, successful research includes both precise, novel algorithms and fully implemented, rigorously evaluated applications. His application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, and human-interactive agents.

Talk Title
Generative AI for Social Good: Towards Scaling Impact by Solving the Deployment Bottleneck

Abstract
For nearly two decades, my team’s work on AI for Social Good (AI4SG) has focused on optimizing limited resources in critical areas like public health, conservation, and public safety. We highlight field results from India, where deployed bandit algorithms significantly improved the world's two largest mobile health programs. We also present ongoing work on network-based HIV prevention in South Africa. These and other projects across Africa expose a critical deployment bottleneck to AI4SG scaling, defined by three core challenges: the Human-AI Alignment Gap (integrating dynamic stakeholder preferences), the Observational Scarcity Gap (missing critical input data), and the Policy Synthesis Gap (generating feasible combinatorial policies).
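For intuition about the underlying resource-allocation problem, here is a toy sketch of limited-resource bandit allocation: each round, select the k arms (e.g., beneficiaries to call) with the highest upper confidence bounds. This simple UCB stand-in is for illustration only; the deployed systems referenced above use restless bandits and are considerably more sophisticated.

```python
# Toy illustration of limited-resource bandit allocation: each round, select
# the k arms (e.g., beneficiaries to call) with the highest upper confidence
# bounds. An illustrative stand-in, not the deployed restless-bandit system.
import math
import random

def ucb_topk(successes, pulls, t, k):
    """Return indices of the k arms with the highest UCB1 scores at round t."""
    scores = []
    for i in range(len(pulls)):
        if pulls[i] == 0:
            scores.append((float("inf"), i))  # try every arm at least once
        else:
            mean = successes[i] / pulls[i]
            bonus = math.sqrt(2 * math.log(t) / pulls[i])
            scores.append((mean + bonus, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Simulate 5 arms with hidden success rates; we can intervene on k=2 per round.
rates = [0.2, 0.5, 0.3, 0.7, 0.4]
successes, pulls = [0] * 5, [0] * 5
for t in range(1, 201):
    for i in ucb_topk(successes, pulls, t, k=2):
        pulls[i] += 1
        successes[i] += random.random() < rates[i]
print("pulls per arm:", pulls)  # concentrates on the highest-rate arms
```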

This talk investigates the potential for Generative AI to accelerate the AI4SG deployment cycle. We explore how LLM Agents address the Alignment Gap by integrating expert guidance into algorithmic planning, resulting in resource allocation strategies that reflect real-world priorities while maintaining optimization guarantees. Furthermore, we examine how Diffusion Models may offer solutions for the Observational Scarcity and Policy Synthesis Gaps by generating synthetic social networks and efficiently synthesizing complex policies. We conclude by discussing this path toward making optimization solutions truly scalable and human-aligned.

Speaker
Professor Milind Tambe
Harvard University

Biography
Milind Tambe is the Gordon McKay Professor of Computer Science at Harvard University; concurrently, he is also Principal Scientist and Director for "AI for Social Good" at Google Research. He is the recipient of the AAAI Award for Artificial Intelligence for the Benefit of Humanity, the AAAI Feigenbaum Prize, the IJCAI John McCarthy Award, the AAAI Robert S. Engelmore Memorial Lecture Award, the ACM/SIGAI Autonomous Agents Research Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Military Operations Research Society Rist Prize, the Columbus Fellowship Foundation Homeland Security Award, and commendations and certificates of appreciation from the US Coast Guard, the Federal Air Marshals Service, and the airport police at the City of Los Angeles. He is a fellow of AAAI and ACM.

Talk Title
Rethinking Multiagent Systems in the Era of LLMs 

Abstract
The original metaphor for the field of multi-agent systems was that of a team of experts, each with distinct expertise, cooperating to solve a problem beyond the capabilities of any individual expert. "Cooperative distributed problem solving", as it was originally called, eventually broadened to consider all issues that arise when multiple AI systems interact. The emergence and dramatic success of Large Language Models (LLMs) has given new life to the old dream, and "agentic AI" is currently one of the most hyped areas in the most hyped technology of the century to date. A raft of LLM-powered agent frameworks have become available, and standards for LLM agents such as MCP and A2A are rapidly gaining traction. A range of promising applications of multi-agent LLMs have been reported, such as DeepMind's co-scientist, where a complex problem-solving system is structured in exactly the way that was envisaged decades ago. So, what lessons can we take from three decades of research into multi-agent systems in the new era of LLM agents? In this talk, we'll survey the main approaches, opportunities, and challenges for multi-agent systems in the new world of LLM-based AI.

Speaker
Professor Michael Wooldridge 
University of Oxford 

Biography
Michael Wooldridge is the Ashall Professor of the Foundations of AI at Oxford University. He is a founder of the field of multi-agent systems and has published more than 500 articles in the area, which have attracted more than 95,000 citations. He has won multiple awards for his work, including the Royal Society Faraday Prize, the BCS Lovelace Medal, the ACM Autonomous Agents Research Award, and the AAAI Patrick Henry Winston Outstanding Educator Award. He is president-elect of AAAI.

Talk Title
The Journey Toward Long-Term Autonomy: Lessons Learned and the Next Frontier of Autonomy

Abstract
Long-term autonomy refers to the ability of intelligent agents to operate independently for extended periods, adapting to changing environments, managing limited resources, and maintaining reliable performance with minimal human oversight. This talk presents a series of research contributions from my lab that address the fundamental challenges of long-term autonomy. These include metareasoning mechanisms for managing deliberation before action; concurrent reasoning and execution for responsiveness in real time; planning and acting under uncertainty and partial observability; operating in complex open-world settings with an unbounded number of other agents; learning self-competence models to guide action selection and determine when to seek assistance; and techniques for aligning agent behavior with human values and objectives through rich, interpretable feedback such as demonstrations and explanations. The ultimate goal of long-term autonomy is not to eliminate human involvement, but to enable scalable and cost-effective collaboration between humans and intelligent agents.
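As a concrete instance of the metareasoning theme (deciding when to stop deliberating and act), here is a minimal sketch of a stopping rule for an anytime algorithm: continue while the expected marginal gain in solution quality exceeds the cost of the time spent. The quality profile and cost rate below are invented for illustration and are not drawn from the speaker's systems.

```python
# Minimal sketch of metareasoning over an anytime algorithm: keep deliberating
# while the marginal gain in solution quality exceeds the cost of time.
# The quality profile and time-cost rate below are invented for illustration.

def expected_quality(t: float) -> float:
    """Illustrative diminishing-returns quality profile of an anytime solver."""
    return 1.0 - 0.9 ** t

def stop_time(time_cost_rate: float, horizon: int = 100, step: float = 1.0) -> float:
    """Stop deliberating once the marginal quality gain per step
    falls below the cost of spending that step deliberating."""
    t = 0.0
    for _ in range(horizon):
        gain = expected_quality(t + step) - expected_quality(t)
        if gain < time_cost_rate * step:
            break  # acting now beats further deliberation
        t += step
    return t

print(stop_time(time_cost_rate=0.01))  # deliberates longer when time is cheap
print(stop_time(time_cost_rate=0.05))  # acts sooner when time is expensive
```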

Speaker
Professor Shlomo Zilberstein
University of Massachusetts Amherst 

Biography
Shlomo Zilberstein is Professor of Computer Science and former Associate Dean for Research at the University of Massachusetts Amherst. He received his Ph.D. from the University of California, Berkeley. A fellow of ACM, AAAI, and AAIA, he centers his research on the foundations and applications of resource-bounded reasoning, enabling complex AI systems to act responsibly while navigating uncertainty, incomplete information, and limited computational resources. Zilberstein is the recipient of the ACM/SIGAI Autonomous Agents Research Award, the UMass Chancellor's Medal, an IFAAMAS Influential Paper Award, the AAAI Distinguished Service Award, and an NSF CAREER Award. He has served as editor-in-chief of JAIR, president of ICAPS, and councilor of AAAI. He is a trustee of IJCAI and the chairman of the AI Access Foundation.