Bank vaults are generally a safe bet when it comes to securely storing valuables, such as money or jewellery. With thick walls and heavily locked doors, unauthorised users are precluded from entry. However, some of the most valuable commodities today are entirely digital – from personal details to cryptocurrencies – and need more than an iron box to be kept safe.
For example, in the 2019 Capital One incident, a hacker breached the bank’s cyber security defences and gained access to the private and sensitive information of over 100 million customers, which resulted in a hefty fine by United States regulators.
Even as the world increasingly goes digital, NTU researchers are at the forefront of efforts to combat the rising sophistication of cyber attacks.
Protecting sensitive data
One of these research areas is privacy-preserving machine learning – a technique for training artificial intelligence (AI) models while simultaneously protecting user data.
An issue with analysing the reams of data collected from people in the digital economy is that the data often contains sensitive personal information.
Researchers led by Prof Lam Kwok Yan from NTU’s School of Computer Science and Engineering (SCSE) have developed new tools that both encrypt data and preserve user information.
“Even though it is technically and theoretically possible, analysing encrypted or protected data is slow and challenging. But through our research, we have developed a method to analyse or compute encrypted data more efficiently, to the extent that it becomes practical to use,” says Prof Lam, who is Associate Vice President (Strategy and Partnerships) at NTU.
Another advantage of his team's method is accuracy. Previous approaches secure data by distorting it, changing its values, which makes any later analysis of the data less accurate.
The method by Prof Lam’s team does not distort the data but instead encrypts it to keep it secure from prying eyes. Since no values in the data are changed, analysis of the encrypted data will still be accurate.
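The source does not disclose the team's actual scheme, but the general idea of computing on protected data without distorting it can be illustrated with additive secret sharing, a standard privacy-preserving technique. In this sketch (all values invented), each sensitive number is split into random shares held by two non-colluding servers; neither server sees the raw data, yet the combined result is exact:

```python
import random

PRIME = 2**61 - 1  # work in a finite field so each share alone reveals nothing

def share(value, n_parties=2):
    """Split a value into additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each user's sensitive value is split between two non-colluding servers.
salaries = [52_000, 61_000, 47_500]
server_a, server_b = zip(*(share(s) for s in salaries))

# Each server sums only the shares it holds; raw values are never revealed.
sum_a = sum(server_a) % PRIME
sum_b = sum(server_b) % PRIME

# Combining the two partial sums gives the exact total: no accuracy is lost,
# unlike noise-based protection that distorts the underlying values.
total = reconstruct([sum_a, sum_b])
print(total)  # 160500
```

Because no noise is ever added, the computed statistic matches what analysis of the raw data would give, mirroring the accuracy advantage described above.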
With devices increasingly storing sensitive data, there are also more targets for data theft. Asst Prof Zhang Tianwei's team at SCSE and the Cyber Security Research Centre @ NTU (CYSREN) was involved in developing a new platform called Hercules, which boosts the performance of privacy-preserving machine learning with multiple users.
With a high-precision training framework, Hercules can withstand coordinated cyber attacks by multiple individuals intending to exploit vulnerabilities in a security system.
Here, almost all training operations are performed locally, eliminating the need for extra servers to store user data.
“Hercules provides better security as it enables secure machine learning even in the presence of coordinated attacks between the server and multiple users,” explains Asst Prof Zhang. “And compared with existing state-of-the-art methods, we attained up to a 4% increase in model accuracy and as much as a 60% reduction in the computation and communication cost.”
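Hercules itself is a sophisticated multi-user framework; purely as a loose illustration of the local-training idea described above, here is a minimal federated-averaging sketch (model, data and round count all invented). Each user fits a simple model on data that never leaves their device, and only model weights are shared and averaged:

```python
def local_step(w, data, lr=0.1):
    """One local gradient step for a 1-D linear model y = w * x (squared loss)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# Each user's data stays on their own device; only weights are exchanged.
user_datasets = [
    [(1.0, 2.1), (2.0, 3.9)],   # user 1
    [(1.5, 3.2), (3.0, 5.8)],   # user 2
    [(0.5, 1.1), (2.5, 5.2)],   # user 3
]

w_global = 0.0
for _ in range(50):
    # Every user trains locally from the current global weight...
    local_weights = [local_step(w_global, data) for data in user_datasets]
    # ...and only the averaged weight leaves the devices.
    w_global = sum(local_weights) / len(local_weights)

print(round(w_global, 2))  # learned slope, close to 2.0
```

The raw (x, y) pairs are never pooled on a central server, which is the property L13 highlights; the real Hercules adds cryptographic protection on top so that even the exchanged model updates stay private.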
Beefing up IoT security
Beyond theft of stored data, another concern is that more devices, such as temperature sensors and smart lamp posts, are connecting to the Internet to transfer data. Hackers can introduce fake gadgets that trick other devices into sharing data with them.
This means such Internet of Things (IoT) devices need to be secured. To this end, Prof Chang Chip Hong at NTU’s School of Electrical and Electronic Engineering is developing methods that use electronic circuits, called physically unclonable functions (PUFs), in a hardware device to confirm the identity of the data requestor before data transfer can happen.
These circuits are akin to digital fingerprints that are unique to each electronic device due to unpredictable variations in the chip manufacturing process.
When a PUF-secured device is connected to the Internet, a powerful computer server typically verifies the identity of the device by sending a code and checking if the device can give a unique response tied to its PUF. Information on the expected response of potential IoT devices is securely stored in the server.
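The challenge-response flow described above can be sketched in a few lines. A real PUF derives its response from physical chip variations; in this simulation, a device-unique random key stands in for those variations (this is an illustration of the generic protocol, not Prof Chang's design):

```python
import hashlib
import hmac
import os

class SimulatedPUF:
    """Stand-in for a physically unclonable function: a device-unique
    secret derived here from randomness rather than chip physics."""
    def __init__(self):
        self._intrinsic = os.urandom(16)  # models manufacturing variation

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._intrinsic, challenge, hashlib.sha256).digest()

# Enrolment: the server records challenge-response pairs (CRPs) for the device.
device = SimulatedPUF()
crp_table = {}
for _ in range(3):
    c = os.urandom(8)
    crp_table[c] = device.respond(c)

# Authentication: the server sends a stored challenge, checks the response,
# and discards the used pair so it cannot be replayed.
challenge = next(iter(crp_table))
expected = crp_table.pop(challenge)
assert hmac.compare_digest(device.respond(challenge), expected)
print("device authenticated")
```

Note that this conventional setup stores expected responses on a powerful server, which is exactly the dependency the next paragraphs explain Prof Chang's team set out to remove.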
But if the hardware doing the verifications is also an IoT device, such checks are difficult since IoT devices lack the capabilities of powerful servers. Storing information linked to the checked device within the verifier IoT device also makes the latter a target for crooks.
Prof Chang’s team has found a way to address this.
“Our proposed protocol achieves direct mutual authentication with an automatic exchange of one-time, randomly generated secret codes for secure communication between resource-constrained IoT devices,” says Prof Chang. “This gets rid of a server’s involvement during the authentication process. More importantly, it does not need to store any secrets in both the verifier device and the hardware being checked.”
Driving into threats
Beyond devices like phones and laptops, many modern vehicles are equipped with hundreds of microprocessors and millions of lines of software code, making them susceptible to cyber attacks. Moreover, they are interconnected with the surrounding environment through Wi-Fi, Bluetooth and 5G, making them giant computing network systems on wheels.
As autonomous driving becomes more integrated into society, there will be a greater need to secure autonomous vehicles. Assoc Prof Anupam Chattopadhyay at SCSE and CYSREN works on developing countermeasures for preventing cyber attacks on these vehicles.
“Adoption of intelligent and autonomous components, such as a machine learning based advanced driver assistance system, is a perfect stepping stone towards the realisation of fully autonomous vehicles,” Assoc Prof Chattopadhyay explains, adding that security and privacy concerns are often paramount, and have prevented the deployment of such vehicles in real-world scenarios.
Through their research, Assoc Prof Chattopadhyay’s group hopes to provide autonomous vehicle developers with accurate and quantifiable risk characterisation and mitigate these security concerns.
Coming to terms with the limitless possibilities of AI also means confronting cyber security challenges to ensure its safe and reliable deployment.
SCSE’s Prof Liu Yang is among the researchers addressing this challenge by launching a landmark standard for AI security and establishing a set of principles and best practices for secure AI development.
The standard explains the threats that AI systems may face, measures for evaluating the security of an AI algorithm and the strategies that AI practitioners can adopt to address these attacks.
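One class of threat such a standard covers is adversarial examples, where a tiny, deliberate perturbation of the input flips a model's prediction. This toy sketch (weights and inputs invented, not drawn from the standard) shows the effect on a simple linear classifier:

```python
# Fixed "model" weights and a clean input, purely for illustration.
w = [2.0, -1.0, 0.5]
x = [1.0, 1.5, -0.2]

def predict(features):
    score = sum(wi * xi for wi, xi in zip(w, features))
    return 1 if score > 0 else 0

# Adversarial step: nudge each feature by a small eps in the direction
# that most decreases the score (against the sign of each weight).
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(predict(x))      # 1: clean input classified positive
print(predict(x_adv))  # 0: barely-changed input flips the decision
```

Evaluating how small an eps suffices to flip predictions is one simple way to measure an AI algorithm's robustness, the kind of assessment the standard asks practitioners to perform.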
“By providing advice on the necessary defences and assessments to make AI applications more secure, we aim to create trust in AI as AI practitioners tune and optimise AI solutions to meet the needs of society. This is especially important and timely for recent AI generated content and large language models,” says Prof Liu.