An Investigation of the Spreading Patterns of Misinformation on Social Media Surrounding the COVID-19 Outbreak in China


Principal Investigator

Dr Zheng Yan Yan
Research Fellow
School of Computer Science and Engineering

Dr Lu Jiahui
Research Associate
Wee Kim Wee School of Communication and Information


Aims and hypotheses

This seed project aims to investigate the spreading patterns of misinformation on social media surrounding the current COVID-19 outbreak in China. We will collect and chronicle misinformation circulated on social media during the outbreak, paying particular attention to misinformation that could elicit public panic and irrational collective responses. On this basis, we will leverage advanced Artificial Intelligence (AI) techniques to analyze how misinformation spreads on social networks. The goal of this project is to facilitate the future development of AI solutions for combating misinformation during public health crises.

Background and significance

The current COVID-19 crisis has triggered a surge of misinformation on social media within China. The spread of crisis-related misinformation can cause considerable damage to public health because it intensifies public panic and misleads the public into adopting irrational or even harmful practices (Swire & Ecker, 2018; Poland & Spier, 2010). Countering misinformation in crisis contexts is therefore critical. Although AI techniques have been applied to misinformation on social media, existing studies focus predominantly on the automatic detection and filtering of falsehoods, neglecting the spreading patterns and contexts of the misinformation (Zubiaga et al., 2018; Vosoughi et al., 2018; Guo & Ding, 2019).

In this seed project, we plan to examine the spreading patterns of misinformation surrounding COVID-19. This initial examination will involve identifying and profiling users who are vulnerable to misinformation, as well as discovering the internal influences (e.g., peer-to-peer communications) and external influences (e.g., crisis-related events, government policies) that affect the spread of misinformation during the outbreak. These investigations will facilitate our future development of AI solutions for tackling crisis-related misinformation based on existing misinformation correction models. In particular, we plan to adopt the inoculation theory of misinformation correction, which analogizes misinformation to an infectious disease (Cook et al., 2017). The theory posits that people can be cognitively inoculated against misinformation by receiving pre-emptive messages, such as a warning about misleading tactics or a weakened version of the misinformation (Cook et al., 2017; Niederdeppe et al., 2015). More importantly, these messages can themselves spread through social networks, helping to build societal immunity against misinformation (Linden et al., 2017). However, the best way to deliver a pre-emptive message for a given type of misinformation in a particular social network remains unknown. This seed project therefore examines the spreading patterns of misinformation before we apply inoculation theory in AI solutions.
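To make the inoculation analogy concrete, the following is a minimal sketch of how pre-emptively "inoculated" users can shrink a rumor cascade under the independent cascade model. The network, spreading probability, and choice of inoculated nodes below are purely illustrative assumptions, not project data or the project's actual model.

```python
import random

def independent_cascade(graph, seeds, immune, p, rng):
    """One simulated rumor cascade under the independent cascade model.

    graph  : dict mapping each node to the neighbours it can influence
    seeds  : nodes that start out spreading the rumor
    immune : 'inoculated' nodes that never accept the rumor
    p      : probability that a spreader convinces a given neighbour
    """
    active = set(seeds) - set(immune)
    frontier = list(active)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and v not in immune and rng.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def expected_reach(graph, seeds, immune, p, runs=2000, rng_seed=0):
    """Monte-Carlo estimate of the average cascade size."""
    rng = random.Random(rng_seed)
    return sum(independent_cascade(graph, seeds, immune, p, rng)
               for _ in range(runs)) / runs

# A toy follower network (illustrative only; not project data).
graph = {
    0: [1, 2, 3], 1: [0, 4], 2: [0, 4, 5], 3: [0, 5],
    4: [1, 2, 6], 5: [2, 3, 6], 6: [4, 5],
}
baseline = expected_reach(graph, seeds=[0], immune=[], p=0.4)
inoculated = expected_reach(graph, seeds=[0], immune=[2, 5], p=0.4)
# Inoculating the well-connected nodes 2 and 5 shrinks the expected reach.
```

The open question noted above then becomes: given a real network and a given type of misinformation, which nodes should receive the pre-emptive message, and through which channels.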

This seed project will lay the groundwork for our end project, which offers several contributions. First, it will extend academic knowledge and provide a new perspective on AI applications for countering misinformation by adopting the inoculation theory of misinformation correction. Second, the overall project will ultimately provide advanced AI solutions for combating misinformation during public health emergencies such as disease outbreaks, which can be translated into real-world applications. Third, findings from our research can inform government plans for tackling misinformation at the policy level.


Research plan

This seed project will be conducted in several steps. First, we will access and chronicle misinformation pertaining to the COVID-19 outbreak within China on Sina Weibo. Government press releases and fact-checking platforms will be used to identify major misinformation. In particular, we plan to focus on misinformation that could cause public panic and irrational collective responses (e.g., panic buying), as this type of misinformation spreads fast and can undermine the authorities’ crisis management plans. Next, we will collect data on the spreading processes (e.g., tags, retweets, comments) of misinformation. Publicly available Weibo data will be de-identified before being downloaded, so the project involves no human subject identifiers. Users who tend to trust misinformation and spread it further will be identified and analysed. In addition, the internal and external factors that affect users’ decisions to spread misinformation will be investigated. On this basis, we plan to design an algorithm that helps vulnerable users recognize misinformation, leveraging advanced AI techniques (e.g., deep reinforcement learning and influence maximization models).
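As one concrete building block for the algorithm-design step, a standard greedy approach to influence maximization under the independent cascade model can be sketched as follows: repeatedly add the node with the largest marginal gain in expected spread. This is a generic textbook sketch, not the project's algorithm; the toy graph and parameters are illustrative assumptions.

```python
import random

def expected_spread(graph, seeds, p, runs=500, rng_seed=1):
    """Monte-Carlo estimate of the expected number of nodes an
    independent-cascade process reaches from the given seed set."""
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            newly_active = []
            for u in frontier:
                for v in graph[u]:
                    if v not in active and rng.random() < p:
                        active.add(v)
                        newly_active.append(v)
            frontier = newly_active
        total += len(active)
    return total / runs

def greedy_seeds(graph, k, p=0.3):
    """Greedily pick k seed nodes, each round adding the node with the
    largest marginal gain in estimated expected spread."""
    chosen = []
    for _ in range(k):
        gains = {v: expected_spread(graph, chosen + [v], p)
                 for v in graph if v not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

# Toy directed graph (illustrative): node -> accounts it can influence.
graph = {
    "a": ["b", "c"], "b": ["d"], "c": ["d", "e"],
    "d": ["f"], "e": ["f"], "f": [],
}
seeds = greedy_seeds(graph, k=2)
```

In the eventual application, the same machinery could be pointed at corrective (inoculating) messages rather than at the rumor itself: the seed set then answers "which users should receive the pre-emptive message first".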

Future plan

The ultimate goal is to utilize AI techniques to combat misinformation during public health crises, a complex social problem arising with the use of social media. After completing this seed project, we will examine optimal AI solutions for countering the spread and influence of misinformation in given social networks, drawing on the inoculation theory of misinformation correction. We will also evaluate the AI algorithms through simulation studies and on data from other public health emergencies. Finally, we aim to develop an application that can be deployed in real-world crisis situations.
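For the simulation-study component, one simple starting point is a mean-field rumor model in the Daley-Kendall tradition, in which "ignorant" users become "spreaders" on contact, and spreaders who meet already-informed users become "stiflers". The discrete-time sketch below is a generic illustration; the rates and horizon are assumed values, not estimates from the project's data.

```python
def rumor_dynamics(s0=0.01, lam=0.8, alpha=0.6, steps=200):
    """Discrete-time mean-field rumor model (Daley-Kendall style).

    i    : fraction of ignorant users (have not heard the rumor)
    s    : fraction of spreaders
    r    : fraction of stiflers (heard it, no longer spread it)
    lam  : spreading rate; alpha : stifling rate
    """
    i, s, r = 1.0 - s0, s0, 0.0
    history = [(i, s, r)]
    for _ in range(steps):
        new_spreaders = lam * i * s          # ignorant users hear the rumor
        new_stiflers = alpha * s * (s + r)   # spreaders meet informed users, stop
        i -= new_spreaders
        s += new_spreaders - new_stiflers
        r += new_stiflers
        history.append((i, s, r))
    return history

history = rumor_dynamics()
final_i, final_s, final_r = history[-1]
# The rumor dies out (spreaders -> 0) while some users never hear it at all.
```

Simulations of this kind give a cheap baseline against which network-aware AI interventions (e.g., inoculating selected nodes) can be compared before testing on real crisis data.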


Current progress

At present, we are in the data collection and analysis stage. We have collected Weibo data for one specific misinformation scenario. A sample of the data has been human-coded and will be used to train machine learning algorithms. Meanwhile, we have developed a visualization technique and an algorithm for demonstrating the spreading pattern of a related rumor; the technique captures the connections between people during rumor spreading. We are also collecting data for other misinformation scenarios, after which we will build a new algorithm to identify the users in the social network who are most vulnerable to rumors.
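As a minimal illustration of the vulnerability-identification step, users can be ranked by the share of their reposts that were later fact-checked as rumors. The heuristic, thresholds, and de-identified log below are purely hypothetical; the project's actual model will be learned from the human-coded data.

```python
from collections import defaultdict

def vulnerability_scores(interactions, min_posts=2):
    """Score each user by the fraction of their reposts that were
    fact-checked as rumors (illustrative heuristic only).

    interactions : iterable of (user_id, reposted_post_was_rumor) pairs
    min_posts    : ignore users with fewer reposts than this
    """
    total = defaultdict(int)
    rumors = defaultdict(int)
    for user, is_rumor in interactions:
        total[user] += 1
        rumors[user] += int(is_rumor)
    return {u: rumors[u] / total[u]
            for u in total if total[u] >= min_posts}

# Hypothetical de-identified repost log.
log = [("u1", True), ("u1", True), ("u1", False),
       ("u2", False), ("u2", False),
       ("u3", True)]
scores = vulnerability_scores(log)
ranked = sorted(scores, key=scores.get, reverse=True)  # most vulnerable first
```

A ranking like this could serve as a baseline for the learned model and as a way to prioritize which users receive pre-emptive corrective messages.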

We are also preparing a manuscript for a top-tier journal, which we aim to submit in October.