Published on 25 Nov 2025

Yawn, another cyber-security test. Time to rethink them?

It's Friday and an employee is rushing through a mandatory 45-minute cyber-security video, about to miss the deadline set by the company. The tips are predictable: "Don't share your password", "Beware of suspicious links". They let it run in the background, clicking through it as they scroll Instagram -- and pass.

Everyone passes. The company's cyber-security team boasts: "100 per cent compliance". On paper, the company looks secure. In reality, it isn't: employees may have the knowledge, but such training takes no account of human behaviour and its frailties.

Consider an employee bombarded with phishing simulation e-mails. Irritated, they hit "report phishing" without even reading the messages. Weeks later, when a real attack lands, fatigue takes over. Rushing, they click the wrong link. Training has taught them to tick boxes, not to stay alert.

In both cases, company time and money are outlaid, but behaviours remain unchanged.

There are real consequences. A report by the World Economic Forum suggested that 95 per cent of general cyber-security incidents occur due to human error.

In 2025, the average cost of a data breach is US$4.4 million (S$5.7 million), according to IBM. Phishing is the most frequent type of attack. The emergence of artificial intelligence (AI) is also a double-edged sword. While it can help detect breaches faster, it can also enable more sophisticated phishing and social-engineering attacks.

Unless cyber-security training is designed for those moments of human weakness, it will fail when it is needed most.

Why traditional training fails

We all know we should eat healthily, drivers know they must check their blind spots, and nurses know they should wash their hands before every patient. But under stress, fatigue or distraction, what we know often fails to guide what we do.

Psychologists call this the "knowledge-behaviour gap" -- people know cyber threats exist, but knowledge alone doesn't change their behaviour.

Yet current cyber-security training relies heavily on box-ticking and passive workshops, assuming that information will translate into behaviour.

A 2021 study found that security awareness training programmes were designed primarily to meet compliance requirements, rather than addressing the needs of employees and developing an understanding of how to influence their behaviour.

A related issue is a one-size-fits-all approach -- where all departments in a company go through the same training programme without taking into account the challenges and needs of different roles.

Workers are well aware of the limitations of such training. In a 2023 study by University of Adelaide researchers, participants who attended typical cyber-security training described it as "generic and unusable". Even more concerning, some participants came away with the impression that management either had an outdated understanding of cyber security or considered it unimportant.

As a result, cyber fatigue emerges. Repetitive drills and warnings dull attention. People click "yes" automatically to clear the screen or rush through modules half-heartedly.

Because cyber threats are rare events, unsafe shortcuts rarely bring immediate consequences -- and so bad habits stick.

According to a cyber-security report by digital technology giant Cisco, such cyber fatigue affects over 40 per cent of security professionals.

All this comes at great expense. Developing training materials and phishing simulations is costly, with estimates ranging from 45 US cents to US$6 per employee per month.

But there is also the hidden cost of employee time. Three hours of training for 10,000 employees can equal millions in lost productivity -- with scant evidence of safer behaviour in return. In addition, overstretched security teams frequently face escalating workloads and alert fatigue, which reduce their capacity to respond effectively to genuine threats.

Security measures can also backfire through risk compensation. When anti-lock brakes were introduced in Germany, accident rates rose. One study found this was because drivers took greater risks, such as speeding: the perceived safety of the system gave them a false sense of security.

In the same way, organisations that boast of a "cyber-trained" workforce may relax their guard, trusting training rather than vigilance. It may also justify them cutting back on smarter defences like adaptive threat monitoring or behavioural testing.

The irony is that such cyber-security training methods could make organisations less safe.

Some alternative cyber-security methods have emerged in recent years, such as cyber nudges -- gentle prompts or design features built into systems to remind users to address security vulnerabilities -- and warning prompts.

But too many workplaces still cling to knowledge-based training. And it takes just one tired employee, one careless click, to expose a company to bad actors.

Meanwhile, cyber criminals grow more sophisticated. They exploit loopholes in human behaviour such as fatigue, distraction and habitual shortcuts -- for example, when you ignore sender details in an e-mail.

Phishing scammers may time their messages strategically or even create artificial workload to reduce vigilance, such as by sending a phishing e-mail immediately after a legitimate one so that users pay less attention to deception cues.

Social engineering needs only one success, and with AI, phishing scams are becoming uncannily personalised. In the US, healthcare provider Elara Caring was subjected to a phishing attack that targeted only two employees -- but gained access to the personal data of more than 100,000 patients.

Higher management can also be victims -- such as the cofounder of Australian hedge fund Levitas Capital, who received an e-mail containing a fake Zoom link. Clicking on the link triggered the creation of fraudulent invoices. The incident led to the collapse of the fund -- then a highly successful firm -- because of the severe reputational damage it suffered.

A leaf from aviation and healthcare

Solutions exist, but they require a change in philosophy. Other high-risk industries faced these same challenges long ago and solved them.

In healthcare, nurses are trained to spot and prevent errors related to patient safety -- such as medication errors, healthcare-associated infections or patient falls -- even while exhausted. In the aviation sector, pilots rehearse rare emergency scenarios repeatedly until their reactions are automatic.

These sectors don't rely on knowledge alone. They build habits, reflexes and automated responses that sustain situational awareness. Where these fields demand rigorous testing, cyber security often stops at building knowledge.

These industries deliberately evaluate systems under fatigue and distraction. Cyber security must do the same -- that is, test the effectiveness of training precisely when people are least attentive.

Modern behavioural research can detect -- even simulate -- these states of fatigue and inattention to evaluate how well training or warnings hold up under pressure. For example, a robust phishing exercise would send test e-mails not randomly, but strategically -- when users are tired, overloaded, or overconfident -- to see which interventions truly build resilience.

Designing such applications requires technical and behavioural expertise. But they can be both powerful and cost-effective. For example, if we want to nudge people to create stronger passwords, we can use a visual bar that fills up as the password improves -- a simple form of positive reinforcement.
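A nudge like this can be sketched in a few lines. The Python snippet below is an illustrative sketch only: the scoring heuristics and the five-point scale are assumptions chosen for the example, not a vetted password policy.

```python
# Illustrative sketch of a password-strength nudge: score a candidate
# password on simple heuristics and render a bar that fills as it improves.
# The scoring rules are assumptions for illustration, not a real policy.
import string

def password_score(pw: str) -> int:
    """Return a score from 0 to 5 based on simple strength heuristics."""
    score = 0
    if len(pw) >= 12:
        score += 2          # length matters most
    elif len(pw) >= 8:
        score += 1
    if any(c.islower() for c in pw) and any(c.isupper() for c in pw):
        score += 1          # mixed case
    if any(c.isdigit() for c in pw):
        score += 1          # digits
    if any(c in string.punctuation for c in pw):
        score += 1          # symbols
    return score

def strength_bar(pw: str, width: int = 10) -> str:
    """Render the score as a text bar -- the 'visual bar that fills up'."""
    filled = password_score(pw) * width // 5
    return "[" + "#" * filled + "-" * (width - filled) + "]"

print(strength_bar("password"))                # → [##--------]
print(strength_bar("C0rrect-Horse-Battery!"))  # → [##########]
```

In a real interface the bar would update as the user types, turning each keystroke into immediate positive feedback rather than a rejection message after submission.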

Cyber-security messages may also need to be culturally tailored. For example, "Protect your account" may resonate more in individualistic societies, while "Help protect your colleagues and our company" might be more persuasive in collectivistic cultures.

This also means resisting gimmicks. Gamified training -- like having people "spot a threat" in an e-mail -- may look fun, but may be a waste of time for busy staff who prefer content that quickly gets to its point.

In addition, cyber security currently focuses too much on a deterrent approach -- punishing or threatening to punish employees who are not security-compliant. This treats employees as criminals and not as victims.

Organisations could not just discourage risky behaviour but also encourage good security practice -- such as rewarding employees who detect a suspicious e-mail.

Singapore can lead

Singapore wants to be the region's hub for cyber, AI and data. But companies will only bring their data if they trust it is secure. And while cyber security is often treated as a technical problem, the real weakness is people.

Behavioural cyber security -- where we address fatigue, habits and human vulnerabilities -- is essential to this larger vision.

Singapore can lead not just in technical security, but also in behavioural cyber security.

That means setting new standards, building simulation labs for cyber-war scenarios and providing empirical evidence on the effectiveness of both training and preventive methods, such as cyber alerts.

Through this, we can better understand when and why people fail under fatigue, prioritise behavioural approaches to cyber-security training -- instead of focusing on knowledge only -- and set rigorous but flexible standards for such training.

Singapore can turn a challenge into an opportunity: becoming the global benchmark for behavioural cyber security.

Georgios Christopoulos is an associate professor at Nanyang Business School, where he is provost's chair in Organisational Neuroscience.

---

Published in The Straits Times