3rd ACE Call Awards

Towards a Data-Driven, Explainable Framework for AI Start-ups’ Due Diligence

PI: Wen Yonggang (SCSE)
Co-PI: Boh Wai Fong (NBS); S. Viswanathan (NBS); Kwan Min Lee (WKWSCI)


The rapid development of AI in the past few years has spurred the proliferation of AI start-ups globally and within the shores of Singapore. Today, simply being labelled an AI company can help a start-up attract 15% to 50% more in its funding rounds than other technology start-ups (Olson, 2019). While such hype has channelled funds into AI development, it has also raised concerns of an unsustainable AI bubble. Specifically, current approaches to determining the valuations of AI start-ups are largely a “black box”, typically based on established, generalised methods. These have frequently proved inadequate for evaluating the key technical requirements specific to a deep tech such as AI.

Here we propose to fill this gap by developing an AI-based, data-driven framework for performing due diligence on AI start-ups. The backbone of this framework is the five building blocks of PRADA (an acronym for Platform, Researchers, Algorithms, Data and Applications), which we have distilled as key indicators of the success of any AI business. The proposed work cuts across the boundaries of any single discipline and requires concerted collaboration between experts from different domains, including computer science, innovation, entrepreneurship and social science; the ACE programme is thus the ideal platform to fund this work.

From https to httpQ: envisioning ethics, security and trust in a world with quantum computers. (Q is for quantum)

PI: Koh Teck Seng (SPMS)
Co-PI: Christina Chuang (SOH)
Collaborator: Chew Lock Yue (SPMS)


Today, quantum cryptography, in the form of quantum key distribution, is already in routine use. It promises unconditional security and privacy in communication, grounded in the foundations of quantum mechanics. Companies like IBM and Google have democratised access to first-generation quantum processors through the internet. In the future, fault-tolerant quantum computers promise to solve complex problems that classical computers cannot, such as simulating complex molecules for better drugs and optimising medical therapies. Yet unconditional communication security may undermine the value of trust between stakeholders in society, by heightening the tension between individual privacy and national security. Quantum computers would render public-key cryptography obsolete and may lead to a loss of public trust. Governments risk losing control should bad actors gain access to quantum technologies. When trust as a mechanism for cooperation in society declines, the alternatives may be stronger regulation and surveillance.

The project aims to understand the ethical and social implications arising from quantum technology, using the tools of both science and the humanities. Through critical assessment of quantum technology, we design rudimentary ethical scenarios focusing on trust and democracy as key values. By applying the conceptual analyses of philosophy to these scenarios, we aim to develop and articulate key ethics concepts, their layers and their interactions. Building upon this, we construct agent-based models, using simple rules of interaction between stakeholders in society (agents) derived from the prior analyses, to simulate the complex social dynamics of trust and to identify emergent norms and structures. The focus of these virtual experiments is to gain insights, with help from theories of society, and to distil lessons that feed back into the design of quantum technology, thus closing the loop. The project is timely because it considers ethical issues at the inception of the technology. It boldly studies an emerging form of socio-technological interaction on which little has been written, and requires truly interdisciplinary knowledge and understanding.
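To illustrate the agent-based modelling step, the sketch below is a minimal toy example of how trust dynamics can emerge from simple pairwise interaction rules. All agents, update rules and parameters here are our own illustrative assumptions, not the project's actual model:

```python
import random

class Agent:
    """A stakeholder with a trust level that doubles as its propensity to cooperate."""
    def __init__(self, trust):
        self.trust = trust  # in [0, 1]

    def cooperates(self, rng):
        return rng.random() < self.trust

def interact(a, b, rng, delta=0.05):
    """Pairwise interaction: being cooperated with raises trust, betrayal lowers it."""
    ca, cb = a.cooperates(rng), b.cooperates(rng)
    for agent, other_cooperated in ((a, cb), (b, ca)):
        if other_cooperated:
            agent.trust = min(1.0, agent.trust + delta)
        else:
            agent.trust = max(0.0, agent.trust - delta)

def simulate(n_agents=50, steps=1000, seed=42):
    """Run random pairwise interactions and return the mean trust in the population."""
    rng = random.Random(seed)
    agents = [Agent(rng.random()) for _ in range(n_agents)]
    for _ in range(steps):
        a, b = rng.sample(agents, 2)
        interact(a, b, rng)
    return sum(ag.trust for ag in agents) / n_agents

mean_trust = simulate()
```

Even this toy model exhibits the kind of feedback the project studies: small rule changes (e.g. the penalty for betrayal, here `delta`) shift the population between high-trust and low-trust regimes.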

A Study of Adversarial Examples for Proactively Protecting Images against DeepFake and DeepNude

PI: Adams Kong Wai-Kin (SCSE)
Co-PI: Guo Jian (SPMS); Xu Hong (SSS)


AI has a wide range of applications, but some of its malicious applications have raised great concerns. The development of advanced deep networks and the availability of large amounts of data have made forged images and videos almost indistinguishable to humans. In this project, we investigate how to use adversarial examples to proactively defend against image and video forgery by DeepFake and DeepNude. DeepFake is designed to replace one person’s face in a source video with a target person’s face, while DeepNude transforms normal images into non-consensual pornographic images. The adversarial examples, which are in fact original images carrying adversarial signals, are designed to interfere with the operation of DeepFake and DeepNude so that they cannot generate realistic images. Currently, researchers only study methods for detecting forged images and videos, which is a passive approach: it can stop neither the spread of fake news based on DeepFake images and videos nor the harassment enabled by DeepNude. The proposed solution is an active approach that prevents DeepFake and DeepNude from generating high-quality fake images in the first place. Users can still share their images and videos on social media, and this approach provides a certain level of privacy protection.

Knowledge from different domains will be used in this project. The adversarial examples and the original images should be indistinguishable to the naked eye, yet under the influence of the adversarial signals, the outputs of DeepFake and DeepNude should be extremely low-quality images without identifiable faces. To achieve this, visual-perception and face-perception knowledge from psychology will be used to design the algorithms and objective functions that compute the adversarial examples. Knowledge from cryptanalysis will be used to study whether DeepFake and DeepNude users can remove the adversarial signals and thereby continue to generate fake images.
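The core idea of computing such adversarial signals can be sketched with a standard signed-gradient (FGSM-style) perturbation step. The toy linear “generator”, the degradation loss and the budget `eps` below are our own illustrative assumptions, not the project’s actual algorithms; a real attack would differentiate through the DeepFake/DeepNude networks themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # stand-in weights for a forgery "generator"

def generator(x):
    # Toy differentiable "generator"; in practice this is a deep network.
    return W @ x

def degradation_loss(x):
    # Higher loss = worse generator output; the defender wants to maximise it.
    return 0.5 * float(np.sum(generator(x) ** 2))

def loss_grad(x):
    # Analytic gradient of 0.5 * ||W x||^2 with respect to the input x.
    return W.T @ (W @ x)

def fgsm_perturb(x, eps=0.03):
    """One signed-gradient step, bounded by an imperceptible L_inf budget eps."""
    return x + eps * np.sign(loss_grad(x))

x = rng.standard_normal(4)   # stand-in for an original image
x_adv = fgsm_perturb(x)      # original image plus the adversarial signal
```

The perturbation stays within the small `eps` budget (standing in for “no difference to the naked eye”), while the generator’s degradation loss on `x_adv` exceeds that on `x`, i.e. the forgery output gets worse.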