Who’s Liable When Machines Make Mistakes?
Tort Law in the Age of AI, Robots and Digital Risks
Why It Matters
From robot surgeons to self-driving cars, technology is moving faster than the law. Courts are now grappling with questions of who should pay when machines malfunction – and the answers could shape innovation, safety, and trust in new technologies.
Key Takeaways
- Emerging technologies like AI, telemedicine and autonomous vehicles are testing traditional legal frameworks.
- Courts face the challenge of protecting people and society without stifling innovation.
- Businesses must prepare for greater liability exposure as technology becomes deeply embedded in daily life and business processes.
Technology Is Outpacing the Law
Tort law – the body of rules that allows people to claim compensation when harmed by others – has always adapted to social and industrial change. When mass manufacturing caused consumer harm in the early 20th century, courts created new duties of care.
But today’s “Fifth Industrial Revolution”, where humans and AI systems collaborate seamlessly, raises far trickier questions. Who is responsible if a medical robot gives the wrong diagnosis, or if an autonomous car kills a pedestrian? Should liability rest with the programmer, the manufacturer, or even the AI system itself?
Courts in Singapore and beyond are experimenting with answers. They have shown willingness to recognise new harms, such as “loss of genetic affinity” in IVF negligence cases, while legislatures are stepping in to define the limits of responsibility in areas like harassment and data protection. The balance between innovation and accountability is becoming the defining legal challenge of our time.
When Medicine Goes Digital
Healthcare is at the forefront of technological change. Artificial intelligence now outperforms doctors in some diagnostic tasks, while telemedicine has made remote consultations routine. Patients benefit – but when things go wrong, liability becomes less clear.
One landmark Singapore case, ACB v Thomson Medical, involved a mix-up during IVF treatment where sperm from the wrong donor was used. The court rejected claims for the full cost of raising the child but created a new category of damages: compensation for “loss of genetic affinity”. This illustrates how judges are innovating within tort law to address new realities. As genetic technologies like CRISPR develop, courts may face similar claims when treatments fail or yield unexpected results.
AI in diagnostics poses another challenge. Studies show AI systems can outperform human doctors at interpreting scans. If such systems become the accepted standard of care, could it be negligent for doctors not to use them? Conversely, who is responsible when an AI system gets it wrong – the doctor, the hospital, or the software developer? Such dilemmas may even force a rethink of the “reasonable person” test in negligence law.
Who’s at Fault in a Driverless Crash?
Autonomous vehicles (AVs) are perhaps the clearest test case for tort law in the AI era. The technology combines hardware (sensors, radar, cameras) with complex software, often powered by machine learning. Accidents are already happening: in 2018, an Uber test vehicle struck and killed a pedestrian in Arizona.
Legally, proving negligence is difficult. Developers often refuse to reveal proprietary code, and machine-learning systems are “black boxes” that even their creators struggle to explain. Training data may be inadequate or flawed, making it nearly impossible for accident victims to prove a breach of duty.
Traditional negligence law may simply not work here. Regulators may need to impose strict liability on manufacturers or create new product liability regimes. Otherwise, victims will face an uphill battle to secure compensation, and public trust in AVs could collapse.
Business Implications
For companies, the message is clear: emerging technologies carry emerging liabilities. Healthcare providers, tech developers, logistics firms and platform businesses must anticipate tort risks early. Compliance cannot stop at technical safety standards; it must extend to privacy, data security, algorithmic transparency and consumer trust.
Firms that get ahead of regulation by adopting robust accountability practices will build public confidence and reduce litigation risks. Conversely, those that don’t may find themselves unprepared when courts or regulators reshape the rules.
Authors and Sources
Authors: Kumaralingam Amirthalingam (National University of Singapore), Gary Chan Kok Yew (Singapore Management University) and Hannah Yee-Fen Lim (Nanyang Technological University)
Original Chapter: This summary is based on Chapter 11, Tort Law, from Law and Technology in Singapore (2nd Edition).