Visiting Researcher Talk: Prof Lukasz Szpruch | 27 MAY 2025
Talk Title: From AI Assurance to AI Insurance
Speaker: Prof Lukasz Szpruch
About the Speaker: Prof Lukasz Szpruch is a Professor at the School of Mathematics, University of Edinburgh, and serves as Programme Director for Finance and Economics at The Alan Turing Institute. At Turing, he provides academic leadership for partnerships with the Office for National Statistics, Accenture, the Bill & Melinda Gates Foundation, and HSBC. He is the Principal Investigator of the FAIR research programme on responsible AI adoption in financial services, a Co-Investigator at the UK Centre for Greening Finance & Investment (CGFI), and an affiliated member of the Oxford-Man Institute of Quantitative Finance. Before joining Edinburgh, he was a Nomura Junior Research Fellow at the Institute of Mathematics, University of Oxford.
Description: The rapid integration of Artificial Intelligence (AI) systems, especially those with autonomous, goal-directed capabilities, introduces novel liability and financial risks that are not addressed by conventional insurance policies. We define algorithmic insurance as a new class of financial coverage for harms caused by AI systems operating within their declared scope. This form of insurance explicitly covers algorithmic errors, biases, and failures—risks typically excluded from existing contracts. Unlike cyber insurance, which addresses threats like data breaches and denial-of-service attacks, or traditional product liability, which covers defects missed by quality control, algorithmic insurance focuses on failures that arise when AI systems operate in unpredictable or unintended ways. As autonomous software and robotic systems become more widespread, this coverage gap is poised to grow, creating a critical need for sharing risk equitably while aligning incentives for safe AI deployment at scale.

We outline the unique underwriting challenges posed by AI, including the absence of historical claims data, opaque and evolving failure modes, and significant information asymmetries. To address these, we propose adapted insurance mechanisms such as performance-based underwriting, usage-based pricing, parametric triggers, and bonus-malus schemes. We also highlight emerging technical tools—like AI model evaluation, robustness certification, and adversarial stress testing—that can act as effective enablers for risk assessment.

Finally, we discuss how algorithmic insurance intertwines with AI assurance practices, highlighting that auditability, transparency, and uncertainty bounds are prerequisites for insurability. This creates an opportunity for insurance to act as an effective mechanism to reinforce public safety and regulatory goals by rewarding verified best practices and creating market-driven incentives for robust, well-governed AI systems.
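Two of the pricing mechanisms named in the abstract, parametric triggers and bonus-malus schemes, can be sketched in a few lines. The sketch below is illustrative only and is not from the talk; the function names, thresholds, payout amounts, and adjustment factors are all hypothetical assumptions.

```python
def parametric_payout(observed_error_rate: float,
                      trigger_threshold: float,
                      fixed_payout: float) -> float:
    """Parametric trigger (illustrative): pay a pre-agreed fixed amount
    when an observable, monitored metric (here, a model error rate)
    exceeds the contractual threshold; otherwise pay nothing.
    No case-by-case loss adjustment is needed, only the metric."""
    return fixed_payout if observed_error_rate > trigger_threshold else 0.0


def bonus_malus_premium(base_premium: float,
                        claims_last_period: int,
                        bonus_factor: float = 0.9,
                        malus_factor: float = 1.25) -> float:
    """Bonus-malus scheme (illustrative): discount the premium after a
    claim-free period (bonus); apply a per-claim surcharge otherwise
    (malus), capped here at twice the base premium."""
    if claims_last_period == 0:
        return base_premium * bonus_factor
    loaded = base_premium * (malus_factor ** claims_last_period)
    return min(loaded, 2.0 * base_premium)
```

Under this toy scheme, a deployer whose monitored error rate stays below the trigger pays only premiums, while a well-behaved, claim-free system earns a discount over time, which is the market-driven incentive for safe deployment the abstract describes.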

