Published on 13 July 2023

Self-Driving Car and AI-assisted Medicine



Professor Joseph Sung
Dean, Lee Kong Chian School of Medicine

I am reading a book entitled Machines Behaving Badly: The Morality of AI by Toby Walsh, one of Australia's leading researchers in Artificial Intelligence. Walsh writes, “I begin with what is perhaps the only property that makes AI special. This is autonomy. We’re giving machines the ability to act in our world somewhat independently. And these actions can have an impact on humans. Not surprising, autonomy raises a whole host of interesting ethical challenges.”

Think of AI tools for the diagnosis and management of patients as you would a self-driving car. A self-driving, or autonomous, car is in fact an AI-empowered robot. It might not look like a robot, but it is one: a robot is a machine that can sense, reason and act. A self-driving car senses the situation on the road, the other road users and the traffic conditions, and acts on its environment based on what its algorithms make of those “senses”. Today, the technology has advanced to the point where you can sit in a self-driving car, say “take me home”, and it will. Almost all road accidents are caused by human error, not mechanical failure, and statistics tell us that a self-driving car is actually safer (it causes fewer accidents) than a human-driven car. Self-driving cars don’t get tired, and they don’t drink and drive. It is predicted that by 2050 most of us won’t drive; in fact, most of us won’t be able to drive. Young people won’t bother to learn, because they can go anywhere in a self-driving Uber. Driving in 2050 will be much like horse-riding today: horses used to be a major form of transportation, and now riding is an expensive leisure sport. Driving a car manually will be more or less the same.

Some of us may remember that in 2016 a self-driving car killed a person in a road accident for the first time. Joshua Brown, a 40-year-old man, was travelling down a highway in Florida in his Tesla Model S, which was driving autonomously. An 18-wheel truck turned across Joshua’s path. The Tesla might have mistaken the tall vehicle for an overhead sign; its camera likely confused the white trailer with the sky. As a result, the car did not see the truck, did not brake, and drove into the 53-foot-long truck, killing Joshua. It was reported that Joshua’s hands were on the wheel for only 25 seconds of the 37 minutes the car was on the road, and he was suspected of watching a Harry Potter movie at the time of the crash. Full autonomy was given to the Tesla Model S, and disaster followed.

Yet this is not the most discussed case of a self-driving car accident. Imagine a self-driving car with two people on board. The car turns a corner, and crossing the road a short distance ahead is an old lady. There is no time for the car to stop, so the robot has to make a decision: run over the old lady, or run off the road into a brick wall and kill both passengers. This hypothetical accident is a version of the moral dilemma known as the “trolley problem”, posed by the English philosopher Philippa Foot in 1967. The trolley problem is not new; in fact, every day on the road we might face such a dilemma and have to make a decision. So how much autonomy should we give to motor vehicles that determine not just our transportation but, in some cases, our lives? And how much autonomy should we give to AI tools or robots in Medicine, which shape the management of patients and may have life-and-death consequences?

Last month in Singapore, a group of AI developers (engineers), AI users (medical practitioners, including physicians and surgeons) and AI regulators (ethicists and administrators) formed a Working Group to present a series of Position Statements on the use of AI in Medicine. The objective of this endeavor was to arouse public and professional interest through dialogue, and to promote ethical considerations in the implementation of AI technology. The Statements lay down principles of ethics and suggest that policymakers and health authorities take these principles into account when approving and regulating the use of AI tools.

Among the 14 statements, there is an important one about human and machine autonomy:


“AI tools should only assist but not determine decision making of doctors and patients in clinical medicine and healthcare service.”

The use of AI tools could lead to the decision-making process being transferred to machines, eroding the autonomy of doctors and their patients. Take the example of managing a complicated case of inflammatory bowel disease, a chronic remitting-and-relapsing condition that can affect every aspect of a patient’s life. The choice between dietary therapy, immunosuppressive medication and radical surgery is often difficult. In managing a complex condition like this, pathological and psychological elements should be considered together. The decision should be made jointly by the doctor and the patient, after the doctor has explained the risks and benefits of each therapeutic option and offered reasonable alternatives. In this process, humans should retain control of the final decision. AI tools should play an assistive role, helping doctors make informed decisions; they should not dictate the choice of action. Healthcare strategies should ensure that the clinician can override decisions made by AI tools. Machine autonomy should be restricted and reversible.

Along with the debate on machine and human autonomy, there should also be a serious discussion of liability. In a case of misdiagnosis or mistreatment involving the use of AI, the sharing of liability among the actors varies depending on issues such as the scope of the duty of care, causation and remoteness of damage, and the degree of autonomy or control the user has over the machine. Depending on the specific facts and circumstances of each case, the degree of control exercised by each party over the AI system and the decision-making process can be an important factor in determining the allocation of responsibility. The Working Group’s position is that “Liability should be apportioned to the AI developer, healthcare provider and health-care system according to their level of control in the decision-making process.”

AI-assisted Medicine is coming in a big way. Are we ready for it?