Asst Prof Althaf Marsoof
College of Business (Nanyang Business School)
Division of Business Law
Assoc Prof Andres Carlos Luco
Associate Chair (Academic)
School of Humanities
Artificial Intelligence (AI) has transformed, and will continue to transform, how humans interact and transact, especially in the online space. Major social media and e-commerce platforms, and even governments across the world, increasingly rely on AI to track users and regulate content on the internet. While AI brings significant advantages, such as automation and speed, there are also concerns about how the use of AI-embedded technologies could interfere with and suppress the rights, including human rights, and interests of individuals and communities.
As humans, we share a set of core values, some of which are regarded as so fundamental and inalienable that they have been written down and set out in international instruments and treaties. These rights include the right to hold opinions, freedom of speech and information, freedom of association, the right to privacy, as well as social and cultural rights, including the right to education. Aside from these fundamental rights, we also enjoy certain personal or proprietary rights, such as Intellectual Property (IP) rights, that are no less important in fulfilling our socio-economic aspirations. But when we seek to enforce our rights, they often conflict with the rights and interests of others, making a balancing act necessary. These tensions between competing rights and interests are magnified in the digital world we live in. For instance, in the context of IP rights, the rights of copyright and trademark owners are subject to limitations so that the interests of third parties in making fair use of copyrighted material or trademarks in their creative or commercial expressions are safeguarded. Similarly, in the context of news and media, including social media, although governments have a legitimate interest in preventing misinformation or "fake news", that interest cannot be exercised so as to displace the individual's right to engage in legitimate speech. These limitations on the exercise of rights and on government regulation, which operate, in effect, as safeguards against abuse, are crucial in ensuring just and proportionate outcomes.
It is against this backdrop that online platforms, such as Facebook, eBay and YouTube, provide us with a space to interact and transact. In providing these services, however, they assume the risk, and consequent liability, of facilitating the dissemination of content whose legality may be called into question. Content can be unlawful at many levels: some content may violate proprietary or personal rights (such as IP rights or privacy), while other content, such as fake news, may be contrary to public policy. In fact, in some jurisdictions the law already requires online platforms to take positive steps to implement technology to trace and remove unlawful content, e.g. Art. 17 of the new EU Copyright Directive. In view of these pressures, online platforms are being pushed hard to use AI-embedded technology to help them avoid liability, and where the legality of content is grey, they are likely to err on the side of caution to avoid liability at all costs.
Despite the trust we may place in AI, we must be open to the possibility that AI will not always reach accurate conclusions, or conclusions that fully capture the nuances of human judgement. Indeed, it is already well established that AI is subject to the various biases of those who develop and train it. This is especially problematic where decisions must be made in circumstances where competing rights or interests come into play. For that reason, two safeguards are necessary. First, technology that embeds AI must, as far as practically possible, be developed and trained taking into account our rights and interests, especially those enshrined in our understanding of human rights. What must be reflected in the technology is our shared understanding of these rights and interests, free from the biases of those who develop and train the AI. Maintaining neutrality is crucial to ensuring that AI is not dominated by approaches that may lead to discrimination between individuals, in both private and public contexts. Secondly, when things do go wrong (and this is by no means impossible), viable corrective mechanisms must be put in place so that our rights and interests are not endangered by the use of technology.
In light of the above, this research aims to (1) understand how AI is being used by online platforms to regulate content; (2) determine whether, and if so to what extent, the use of AI interferes with personal, proprietary or fundamental human rights; (3) understand the key concerns of regulators; and (4) propose a viable regulatory framework, having legal effect, that could help ensure the development and use of AI in a responsible manner, that is, in a manner that does not undermine our rights and interests.
This research combines expertise in ethics and human rights, intellectual property, and media law to provide a holistic perspective on the challenges posed by the use of AI to regulate online content. It is deliberately framed in broad terms and addresses challenges across multiple areas, such as the enforcement of IP rights, fake news and privacy. This broad formulation is purposeful: it allows us to explore these areas, understand the key issues and devise unique solutions. While we hope to develop a policy paper addressed to private regulators of content (e.g. online platforms) and to governments on the challenges in the design and use of AI for content regulation, we are confident that our research can lead to collaborations (e.g. with the Human Rights, Big Data and Technology Project in the UK) and allow us to engage industry and government bodies through consultative roles.
At present, we are conducting a comprehensive literature and technology review. The technology review is primarily to:
- understand how AI is currently being used by online platforms and government bodies to identify, track and remove unlawful content on the internet; and
- understand the limitations of the technology that could affect its accuracy.
The literature review is primarily to determine the key legal, policy and ethical concerns in the use of AI to identify, track and remove unlawful content on the internet.
We are also converging on a common framework, embracing technology, ethics, and law and policy, that we can use in drafting our report.
We have recruited four Research Assistants (RAs) until the end of July 2020:
| Research Assistant | Role |
| --- | --- |
| Ang Wan Qi | A student in NTU's Renaissance Engineering Programme, focusing on technology research under the supervision of Asst Prof Shafiq Rayhan (School of Computing). |
| Peng Yuqi | A PhD student at the School of Humanities, focusing on the ethical aspects under the supervision of Assoc Prof Andres Luco (School of Humanities). |
| Chua Yong Jian, Kenneth | An undergraduate student at the NBS, focusing on legal and policy aspects under the supervision of Assoc Prof Harry Tan and Asst Prof Althaf Marsoof. |
| Lynn Htet Aung | A Nanyang Scholar and double-degree student of the NBS and the School of Computing, volunteering as a research assistant. He is currently focusing on fact-finding and will continue to support the team on the technological, legal and ethical aspects of the project. |
We ran a "Knowledge Sharing Session" in November 2020 with several experts in the field to gather their perspectives. We are now writing the final report.