Developing regulatory compliance for artificial intelligence


Every day, numerous artificial intelligence (AI) algorithms make crucial decisions with minimal human oversight in areas such as autonomous driving, medicine, and financial trading. Because intelligent algorithms can, in effect, be self-programming, their behaviour can evolve in unforeseen ways.

Some organisations are becoming increasingly concerned that their algorithms could cause reputational or financial damage, and ‘algorithm audits’ are being introduced in response to regulations and legislation. AI is emerging as a significant legal specialisation, and it is becoming ever more important for lawyers to fully understand how the technology works.

Professor Hannah Yee-Fen Lim at Nanyang Technological University, Singapore, has contributed a chapter on the regulatory compliance of AI, the first ever published on the topic in the English-speaking world, to the recently published book ‘Artificial Intelligence: Law and Regulation’. In the chapter, she analyses how numerous principles and frameworks for regulating AI and ‘Trustworthy AI’ have been constructed at the international level, and finds that many of them are too vague or betray a lack of practical understanding of how AI and machine learning work. These deficiencies appear in frameworks formulated by organisations ranging from the European Union to the Organisation for Economic Co-operation and Development (OECD). Lim analyses and critiques these and other international efforts. She argues that, besides being vague, these frameworks and principles do not give AI developers and deployers a straightforward description of what is expected of them, nor do they help governments ensure regulatory compliance.


The nature of AI

Lim begins by examining the nature of AI, establishing what AI is in a technical sense and why it needs to be regulated. On the technology, she contrasts traditional hard-coded software with AI algorithms and machine learning, where computers can perform functions they have not been explicitly programmed to perform. AI algorithms do not learn in the way humans do; instead, they are trained on vast datasets. Even if an algorithm is mathematically sound, the size and quality of its training data influence how it performs and whether it meets the required standards. Machine learning can take the form of supervised, unsupervised, or reinforcement learning, each with its own drawbacks.
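
To make this contrast concrete, consider a minimal, hypothetical Python sketch (not drawn from the book): the first function’s behaviour is fixed in its source code, while the second is shaped entirely by its training data. The loan-approval scenario, the threshold, and the use of scikit-learn’s LogisticRegression are all illustrative assumptions.

```python
# Illustrative sketch only (not from the book): contrasts a hard-coded rule
# with a model whose behaviour is induced from training data. The loan
# scenario, threshold, scores, and labels are hypothetical assumptions.
from sklearn.linear_model import LogisticRegression

# Traditional software: the behaviour is fixed by an explicit, auditable rule.
def approve_loan_hardcoded(credit_score: int) -> bool:
    return credit_score >= 650  # the rule is written down and never changes

# Supervised machine learning: the behaviour is learned from past examples,
# so the size and quality of this dataset determine how the model performs.
train_scores = [[480], [520], [600], [640], [660], [700], [750], [800]]
train_labels = [0, 0, 0, 0, 1, 1, 1, 1]  # past outcomes: 1 = repaid, 0 = defaulted

model = LogisticRegression()
model.fit(train_scores, train_labels)  # no approval rule is ever written explicitly

def approve_loan_learned(credit_score: int) -> bool:
    return bool(model.predict([[credit_score]])[0])

# Both functions return a decision, but only the first can be read directly
# from the source code; the second depends on what the training data contained.
print(approve_loan_hardcoded(655), approve_loan_learned(655))
```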

Regulators face the challenge of encouraging innovation while protecting people from harm. Autonomous vehicles, for instance, can be deadly, both for passengers and for the public, and other AI systems may cause financial or physical harm to individuals.


International AI ethics

Lim’s research examines the ethics codes and principles that governmental and intergovernmental organisations currently have in place. After reviewing the ethics principles and frameworks of Australia, the United States, the European Union, and the OECD, Lim observes that these principles are voluntary and vague, so they offer little help either to developers or to those who deploy AI systems and want to ensure compliance.

While some early work focused on developing ethics principles, it is time to move on to creating solid legal frameworks and regulations. Around the world, governments are grappling with how to create AI laws and regulations. Many take an incremental approach, adopting broad-stroke policies that aim for ‘damage control’ and minimise the fallout from using AI. Such policies invoke concepts such as transparency, but then fail to explain what transparency would actually involve; one possible concrete reading is sketched below.
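
As a hypothetical illustration of the gap Lim identifies, here is a minimal Python sketch of one concrete interpretation that a regulation could spell out but typically does not: every automated decision is logged with its inputs, output, and model version so that it can later be audited. All names and fields here are illustrative assumptions, not requirements drawn from any actual framework.

```python
# Hypothetical sketch of one concrete interpretation of 'transparency':
# every automated decision is recorded in an auditable log. All names and
# fields are illustrative assumptions, not requirements from any framework.
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, decision: str,
                 path: str = "decisions.log") -> None:
    """Append an auditable record of a single automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record one decision so an auditor can later reconstruct it.
log_decision("credit-model-1.3", {"credit_score": 655}, "approved")
```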


AI’s impact on general laws

The European Union has started to systematically review its substantive laws to determine which areas should be updated in light of AI. These include criminal law and consumer protection law, where people can fall victim to illegal, deceptive practices enabled by AI applications. Data protection law is also affected because of AI’s reliance on data, particularly big data. The United Kingdom’s Information Commissioner’s Office has issued practical guidance on how organisations can use AI in compliance with data protection law. In the area of civil liability, however, legislators and regulators lacking a thorough understanding of AI technology have adopted poorly thought-out ways of apportioning liability.

Some industries, including banking and finance, medicine and healthcare, and transportation, are already highly regulated for reasons of safety, risk, and the protection of vulnerable interests. The World Health Organization (WHO) has released the ‘living’ WHO Guidance on Ethics & Governance of AI for Health, for which Lim was an appointed External Expert Reviewer. It is hoped that this will lead the way for AI in the medical sector; being a ‘living’ document, it will continue to evolve. In the transport industry, by contrast, there is still no international consensus on the trialling and regulation of autonomous vehicles, nor are there international standards for the AI technology they rely on.


Regulatory compliance using AI

The chapter concludes with a description of how AI itself can assist in complying with regulations. Legislators and regulators cannot assume they understand AI technology: descriptions of the technology are often interpreted differently by those trained in different disciplines. To deploy ethical and trustworthy AI, they will need assistance from those trained in both computer science and law.

The development of this emerging area of regulatory compliance is likely to continue for some time. A substantial body of literature has been produced over the past five years, but substantive legal rules are only now starting to take shape. Moving forward, rules and regulations will likely be created for individual industries, particularly high-risk ones, and these can then be generalised to cover other uses of AI.

Note: The book was published by Edward Elgar Publishing on 17 March 2022.

Hannah Yee-Fen Lim is uniquely qualified, holding double degrees in computer science and law. She is an internationally recognised legal expert on all areas of technology law, including AI, data, blockchain, cryptoassets, fintech, and cybersecurity. She serves the WHO, UNCITRAL, UNIDROIT, and the UK Law Commission as a legal expert, advising on areas including AI and cryptoassets.

This article is based on the book chapter written by Hannah Yee-Fen Lim:
‘Regulatory Compliance’, Chapter 6 in Artificial Intelligence: Law and Regulation.

This article was first published by Research Features under the title ‘Developing regulatory compliance for artificial intelligence’.