Singapore’s proposed new law to tackle harmful web content may persuade social media titans like Facebook and Twitter to step up internal efforts to deal with the problem, but extensive government regulation in the online world may prove to be a double-edged sword, observers say.
“Regulating online content will always be a gargantuan task. This is due to the grey areas that are involved,” said Carol Soon, a senior research fellow at the Institute of Policy Studies.
The Ministry of Communications and Information, when introducing the bill on Monday, noted that fighting harmful online material was a global issue and there was widespread acceptance that the companies involved have a responsibility to keep users safe.
“While some online services have made efforts to address harmful content, the prevalence of harmful online content remains a concern, given the high level of digital penetration and pervasive usage of online services among Singapore users, including children,” the ministry said.
The Online Safety (Miscellaneous Amendments) Bill would allow the government to disable access to “egregious” online content for users in Singapore; block particular users or accounts from posting content; and in other cases, prevent Singapore-based users from accessing an online platform entirely.
The ministry said this would apply to material advocating suicide or self-harm, physical or sexual violence, and terrorism, as well as material depicting child sexual exploitation. It would also pertain to content posing a “public health risk” or likely to cause racial and religious disharmony.
Corinne Tan, an assistant professor at Nanyang Technological University (NTU), called the bill’s introduction timely as there was “warranted concern” over the exposure of harmful online material to users on social media platforms. “This exposure can be detrimental to individuals and communities, in particular younger users or users who are less informed and hence vulnerable,” she said.
Tan, who has researched and written about social media regulation, noted that there was existing legislation in Singapore that dealt with such content, but it was limited in scope and adaptability.
Jeremy Sng, an NTU lecturer whose research focuses on the psychological and behavioural effects of media use, saw social media content as a double-edged sword: it can serve as an educational tool but also carries the risk of affirming negative behaviour.
For example, when young people see content related to self-harm, they may feel encouraged to try that behaviour. Sng also said it is easy on social media to share harmful content with a wide range of people quickly, “making it a concern compared to other forms of media.”
For the Singapore government, deciding what was “appropriate” or “harmful” could be tricky as different people can have very different ideas about the same content, Sng added.
For example, a forum on body image issues could be a safe space for youths to discuss and share their anxieties, he said, but it could also be seen as a place that reinforces those problems. If the bill were applied in this context and such forums were banned, the government could be removing support and recovery tools for some people.
Meanwhile, the authorities could also require online service providers deemed to have a “significant reach or impact” to follow a separate set of elaborate rules.
Those in this category would have to establish systems and processes to prevent Singapore users from accessing content presenting a “material risk of significant harm”.
Firms would also be required to undergo audits to prove their compliance and work with government researchers to allow the authorities to better understand the nature and level of risks, among other measures.
Online service providers are defined as firms that “allow users to access or communicate content via the internet or deliver content to end users”. Failure to comply with the rules is an offence that would result in a fine, the ministry said.
“In light of the fast-evolving nature of harmful online content, the Bill [and the regulations] are important steps towards creating a safer online space for Singapore users, particularly children,” it said.
Jeff Paine, managing director of the Asia Internet Coalition, said the group would work with the government to make the internet safer for people in Singapore. The industry association is made up of leading internet and tech firms.
Sng, the NTU lecturer, said that while the bill might not deter users from looking for harmful content, the onus would be on social media platforms to regulate and police content. Currently, there was a greater reliance on self-governance.
He said this pressure would show these “developers of such technologies that are pervasive in our lives, they have some responsibility to society to ensure their platforms are not used for spreading harmful content”.
Soon, whose research focus includes false information and media regulation, had a similar assessment, noting that there would be a wide range of responses on what constitutes harm. Some, she said, would find the definition too broad while others would find it too narrow.
A possible yardstick would be to assess if content would have an adverse physical and psychological impact on a child or adult of ordinary sensibilities, an approach proposed in Britain, she suggested.
“What is certain is this – online harms will evolve as technology advances,” she said. “A sustained consultative and iterative approach is required to determine suitable benchmarks for the local context during the prevailing climate.”
The governance of online content would be an ongoing process that would require the input of different stakeholders including the government, policymakers, social media firms, users, academia and activist groups, said Tan from NTU.
Natalie Pang, senior lecturer and deputy head at the National University of Singapore’s communications and new media division, said while the recent development was a step in the right direction, regulation alone was not enough to create a safe online environment.
She said: “It’s important to recognise the issue of online harm in an ecological context, that is to say online harm can grow and evolve across multiple platforms, and this often involves instant messaging platforms.”
What remains to be seen is how the law will be used by the government and whether removed content will be considered to be infringing on freedom of expression, Sng said.
Some critics have argued that recent laws in the city state, including one governing online falsehoods, are loosely defined and could be misused to stem dissent.
But Soon said regulating the online space and protecting freedom of expression was not a zero-sum game. “One could argue that with the prevalence of hate speech, misinformation, harassment and exploitative content, one’s freedom to exploit the full potential of the online space safely has been curtailed,” she said.
“However, for regulation to weed out toxicity and improve enforceability, it should answer two Ws – whom are we protecting and what are we protecting them from? This will lower the possibility of regulation being overreaching and vague.”
Source: South China Morning Post