Published on 18 April 2023
ChatGPT and future doctors
Professor Joseph Sung
Dean, Lee Kong Chian School of Medicine
In the past few months, the world has been shocked by the unimagined capabilities of ChatGPT-4.
Bill Gates said that this development of AI is “as revolutionary as mobile phones and the Internet”. “It will change the way people work, learn, travel, get health care and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it”. In my opinion, he has pointed out only the tip of the iceberg, without mentioning the potential pitfalls and chaos GPT-4 can bring.
A study from Microsoft and OpenAI indicated that GPT-4, taking the United States Medical Licensing Examination (USMLE), exceeded the passing score by over 20 points and significantly outperformed the earlier general-purpose model GPT-3.5. Can machines master medical knowledge better than doctors, and hence make better clinical decisions? If so, how should we train and assess future doctors? Schools and universities are struggling with how to revamp their curricula in almost every subject (including art and literature) and repair what is widely seen as a broken assessment system. The shock is so severe that even people who funded AI development, such as Elon Musk, have called for a temporary halt to the release of the technology, to give humanity time to sort out the controversies and potential disasters it might create.
Large language models (LLMs) have demonstrated a remarkable ability to interpret and generate sequences across a wide range of domains, including natural language, computer code and bioinformatics. Researchers have investigated benchmarks that provide insights into how LLMs encode clinical knowledge that might be used in medical practice. GPT-4 can interpret not only text, but also images and video. Incorporating knowledge that exists in the public domain, GPT-4 and its successors are becoming the most knowledgeable professional experts, ones who never tire of answering questions about clinical problems, providing diagnoses and suggesting treatment options. I can imagine that one day, in the not-too-distant future, our patients will be sitting in front of a ROBODOC, describing their symptoms and presenting their test results. In a matter of seconds, their blood chemistry will be analyzed, CT scans read and reported, histological findings interpreted and treatment options laid on the table, all by a machine that can speak to a human patient.
Any new technology that is so disruptive is bound to make people uneasy, and that is certainly true with AI. It raises hard questions about job losses, privacy, human rights, legal proceedings and liability, and ethical issues such as equity, autonomy and bias, among others. Although at this stage AI makes factual mistakes and experiences “hallucinations”, its ability is expanding every minute and every hour of the day. What will be the roles of future doctors and allied health workers? To be honest, I don’t have an answer to that.
I can only suggest three things that our medical profession and medical schools should consider:
1. To understand what AI is, and what it can and cannot do (at every stage of its development).
2. To study and experiment with how humans and machines can interact with each other and work as a team in the management of patients.
3. To define the roles of human and machine in patient care, and to draw a line that AI, no matter how powerful it is, cannot cross in clinical decisions.

The medical curriculum should include an important subject, “Artificial Intelligence and Medicine”. Students should be taught the basic principles and mechanisms of machine learning, neural networks, data security, data bias, medical ethics and legal issues. Early engagement of doctors, nurses, pharmacists and other allied health workers with AI technology in clinical practice is a crucial part of their training. In an ideal future of medical practice, a clinical diagnosis would be based on clinical data interpreted by the machine, with a human doctor taking into consideration the feelings and worries of patients and their families, and a recommendation would then be given after a clear explanation to the patient, with a human touch. Without proper training and early engagement of healthcare workers, TRUST between human and machine is difficult to build. Without trust, doctors and nurses will be reluctant to use AI in their practice, and patients will find it difficult to choose between powerful new technologies and the doctors they have always trusted and relied on. Finally, we will have to decide when and where we are going to stop AI from making decisions for us in clinical practice, and maintain the autonomy of care providers as well as care receivers. The right to accept or refuse the use of AI, or AGI (artificial general intelligence), in healthcare decisions should be preserved for our patients and their families.
Engagement of healthcare personnel, as well as patients and their families, plays a key role in the implementation of AI in Medicine. We should educate, train and listen to feedback from both the providers and the receivers of healthcare. Besides technological and scientific evidence, we need to look into strategies for implementing AI in Medicine. Our medical school will launch the first International Conference on AI in Medicine this summer, 5 – 7 August 2023, at the Lee Kong Chian School of Medicine Novena campus. We extend this opportunity to medical students in LKCMedicine and beyond, to clinicians, and to AI engineers from all clusters, industries and universities. Get in touch with us for further information. I hope to see you there, and may we start this brave new journey together.