

Korean J Fam Med > Epub ahead of print
Andrew: Understanding the “Infodemic” Threat: A Case Study of the COVID-19 Pandemic


The coronavirus disease 2019 (COVID-19) pandemic is notable among infectious diseases for its distinctive societal impact: strict social distancing rules and lockdowns disrupted the livelihoods of millions. Consequently, millions of individuals have turned to online sources, particularly social media, to remain informed about the virus. The transition to digital sources has resulted in an abundance of information, both accurate and misleading or false, being shared and consumed on online platforms, contributing to what is commonly referred to as an “infodemic.” Although these platforms have been valuable tools for healthcare professionals and public health authorities in disseminating crucial public health messages, they have also aided in the spread of misleading and false information. The widespread dissemination of false information has propagated harmful beliefs and behaviors, including vaccine hesitancy, discriminatory attitudes, and false beliefs about the efficacy of certain therapeutic products for treating COVID-19. False information has undoubtedly become a challenge and burden for governments, health professionals, and the general population. This review has three main objectives: (1) to assess the scope of the “infodemic” issue, including investigating the factors contributing to the spread of false information online; (2) to examine the multifaceted consequences resulting from false information; and (3) to argue that an interdisciplinary, multi-layered approach, encompassing a focus on prevention, deterrence, and education, should be adopted to prevent the creation and dissemination of false information in this modern digital age.


Over the last 50 years, how we access information has changed dramatically. Digital platforms have played a pivotal role in reshaping information consumption by making it more accessible and widely available. For many of us, social media platforms like Facebook and Twitter, among others, have become the primary source of information. However, the ease of access to vast amounts of information has allowed anyone to proclaim themselves as an expert in virtually any field, even on highly specialized topics like the coronavirus disease 2019 (COVID-19). During the COVID-19 pandemic, widespread lockdowns across different regions coupled with restrictions on human gatherings have forced individuals to depend on Internet-based services and platforms for communication, interaction, and access to information to stay informed about the virus [1]. Unfortunately, a handful of individuals have chosen to deliberately create and disseminate false and misleading information regarding the coronavirus, often driven by motives aimed at causing harm or advancing their political, personal, or financial interests [2]. Thus, the rapid influx of information on social media platforms and websites, with varying levels of reliability and accuracy, has presented a significant challenge for individuals to determine what information is trustworthy amidst the plethora of content available online. Undoubtedly, the rapid dissemination of false information during the pandemic has eroded trust in science, heightened public anxiety, and evoked negative emotional and behavioral responses toward public health efforts aimed at containing the virus [3]. This review investigates the scope of the “infodemic” threat, including examining the factors influencing the dissemination of health misinformation and disinformation and the harmful consequences false information can impose on individuals and society. 
It concludes that improving public health strategies to effectively combat the circulation of mis(dis)information and to address future “infodemic” threats necessitates an interdisciplinary, multilayered framework centered on three areas: prevention, deterrence, and education. This review also provides examples of potential strategies for each area.


A literature search of PubMed, MEDLINE, and Google was conducted in October 2023. The aim was to identify literature describing the coronavirus public health “infodemic,” including the motives behind the dissemination of mis(dis)information and the role and impact of false information. Literature discussing strategies to mitigate false information was also included. Articles were screened using keyword searches of each paper’s title or abstract, with terms such as COVID-19, coronavirus, misinformation, disinformation, infodemic, anti-vaccination, and social media. The literature most relevant to the objectives of this review was included.


The COVID-19 pandemic has often been labeled a “digital pandemic” because of the widespread dissemination of information facilitated by modern technology [4,5]. These digital tools have eased the creation, consumption, and dissemination of false information, making it more challenging for people to differentiate between accurate and false content online [4,5]. Social media platforms have played a pivotal role in disseminating a plethora of information since the beginning of the pandemic. As the number of COVID-19 cases increased, searches for COVID-19-related information on social media rose exponentially [6]. Throughout the pandemic, governments and health authorities worldwide used social media platforms to deliver essential health information to their populations, including updates on the coronavirus, public health messages aimed at curbing the virus’s transmission, and content debunking misinformation about mask-wearing and vaccines. However, the overabundance of information disseminated through multiple communication channels has made it difficult to distinguish authentic, evidence-based scientific information provided by public health officials and governments from false anecdotal information produced by social media users. The term “infodemic” was coined soon afterward to describe this phenomenon [7,8]. The spread of false information is not new; its roots can be traced back to the Roman Empire [9]. Nor is COVID-19 an isolated instance of widespread information dissemination on social media; previous outbreaks such as influenza and Zika provoked a similar online response [10,11]. However, the coronavirus has been the first of its kind to captivate the world with constant media coverage [12].
Certainly, the presence of easily accessible conflicting information has undermined global efforts to control the pandemic. The “infodemic” threat is driven by the spread of false information, which can be categorized into two types: misinformation and disinformation.
Both misinformation and disinformation involve the dissemination of false or inaccurate information, with the difference being the intent behind the dissemination. Misinformation involves the spread of false information without any intention to cause harm, while disinformation specifically refers to information that is created with the malicious intent to mislead and cause harm [13]. Conspiracy theories, rumors, testimonials from politicians, and urban legends are the primary sources of false information [14]. Reasons for the rapid and extensive dissemination of false information in modern society are plentiful. One key factor contributing to the spread is that individuals who share false information online are typically influenced by their originators through heuristics or other contextual cues [2]. When individuals share information based on heuristics and peripheral cues, they do so rapidly and spontaneously, without careful and proper contemplation [2].
Buchanan outlines three crucial variables (consistency, consensus, and authority) that motivate individuals to share false information [2]. However, the author believes that a fourth variable, audience, should also be recognized as a significant factor influencing the dissemination of false information. This review now examines each of these variables and their impact on the dynamics of information sharing.

1. Consensus

Consensus assesses the extent to which individuals perceive their actions to align with the behavior prevailing among the majority [2]. In this context, social media algorithms are strategically designed to deliver personalized content that aligns with users’ preferences and interests [15]. This customized content delivery, coupled with the prevalence of widely circulated messages, serves as a catalyst, motivating individuals to increase the spread of information. Consequently, information is often shared without careful scrutiny in such environments.

2. Authority

The second variable is authority. Authority is concerned with the degree to which communication received by an individual appears to come from a legitimate, trustworthy source [2]. The propagation of false information amid the COVID-19 pandemic can be attributed, in part, to the engagement of politicians and rogue physicians who have endorsed pseudoscience and anti-vaccine organizations [16]. Thus, individuals who perceive their information as originating from a trustworthy and credible source are more inclined to endorse and share these messages with others.

3. Audience

Those who spread false information employ a range of tactics to create deceptive content crafted to manipulate their intended audience. Tailored misinformation and disinformation, deliberately designed to deceive the target audience, can exert a significant influence, especially when directed at vulnerable social groups, such as ethnic and gender minorities. These social groups frequently encounter barriers to accessing employment and education and experience heightened xenophobia [17,18]. Owing to these challenges, the author believes that these social groups are more susceptible to manipulation, displaying a greater inclination to believe and share false information online compared to other social groups. This vulnerability stems primarily from difficulties in assessing the credibility of such information. Moreover, given the political and social divisions faced by these vulnerable individuals, perpetrators further exploit their vulnerability by creating networks of fake personas to bolster the credibility of their messages within their target audience.
4. Consistency

Consistency refers to the extent to which sharing information aligns reliably with an individual’s past practices or beliefs [2]. Research indicates that individuals tend to find headlines more credible when they align with their existing beliefs, even if the information is explicitly labeled as false [19]. Another study noted that when false information is tied to topics individuals are personally invested in, they are more inclined to believe and share it, possibly because it responds to their psychological needs, emotions, and fears during times of crisis [16]. Consequently, consistently presenting a narrative that matches an individual’s preconceptions fosters the perception of a widely supported message, enhancing its credibility. This strategy creates the illusion of widespread backing while obscuring the true origin of the message.
In summary, the dissemination of false information can be driven by four key variables: consensus, authority, audience, and consistency. Strategically manipulating these factors in the dissemination of false information has the potential to significantly magnify their impact.


The propagation of false information can have profound consequences, such as fostering stigma toward specific nations and ethnic groups and undermining the credibility of scientific evidence. As the COVID-19 pandemic unfolded, empathy eroded noticeably, accompanied by increasing mutual blame and prejudice targeted at certain countries, such as China, and at individuals perceived to resemble their populations, for their perceived role in the virus’s spread [20]. An infodemiology study revealed that people entered stigmatizing phrases such as “Wuhan Coronavirus” or “Chinese Coronavirus” rather than official names when searching for coronavirus-related information on Google [21]. Moreover, the association between COVID-19 infection and occupational social class has resulted in the stigmatization and marginalization of migrants, laborers, and healthcare professionals in India, who are often perceived as actively involved in transmitting the virus [22].
Besides the social stigma, medical misinformation and disinformation can have significant adverse health effects. Individuals who prefer straightforward explanations may be more susceptible to accepting simplistic answers as explanations of their medical issues. Unfortunately, this inclination leads to a reduced willingness to respond to public health messages and medical interventions, as well as wasteful hoarding of medications and protective equipment. For instance, some countries, such as Panama, started to hoard antimalarial drugs, such as hydroxychloroquine (HCQ), despite it being an ineffective treatment against COVID-19 [23]. The potential shortage of HCQ for its intended purposes has raised concerns about its impact on patients with rheumatic diseases [24]. Additionally, individuals who self-administer HCQ to treat COVID-19 would face an increased risk of experiencing cardiotoxicity, highlighting the potential dangers of unsupervised use [25]. The controversial social media posts discussing the use of HCQ for COVID-19, along with endorsements from politicians and doctors advocating for antimalarial drugs as effective treatments, have undeniably portrayed these medications as crucial remedies against the virus. This portrayal has overshadowed the strict guidelines for their use and the absence of sufficient scientific evidence supporting their widespread use as a treatment for COVID-19 [26].
Susceptibility to health mis(dis)information has been associated with a reduced willingness to comply with public health orders and an increase in vaccine hesitancy [27]. The prevalence of conspiracy theories surrounding the COVID-19 vaccine, including concerns about its safety and the development process, has played an important role in fostering vaccine hesitancy. Despite the well-established effectiveness of the COVID-19 vaccine in preventing severe illnesses and deaths, the dissemination of false information has undermined the credibility of modern medical interventions. This ultimately poses a serious threat to public health, as it increases the risk of preventable disease outbreaks, further worsening existing health disparities and outcomes.


Mitigating the spread of misinformation and disinformation is an ongoing effort that necessitates collaboration among governments, health practitioners, lay activists, and social media platforms. Past research on vaccine attitudes indicates that convincing vaccination doubters that vaccines are safe and effective is often challenging, with the effectiveness of interventions varying based on existing parental attitudes toward vaccines [28]. Recognizing these difficulties, this article suggests an alternative strategy for combating false information: a comprehensive framework centered on three key areas, namely prevention, deterrence, and education. This approach aims to bring about shifts in attitudes and behaviors by focusing on proactive measures rather than persuading skeptics or relying on rebuttals.

1. Prevention

To disrupt the circulation of false information successfully, we must first identify and understand its origins. Social media is perhaps one of the most effective mediums for disseminating online misinformation. Despite the efforts of various social media giants to address false information and raise public awareness, including Facebook’s commitment to combat coronavirus misinformation and curb the dissemination of questionable health claims, these initiatives have proven unsuccessful [29]. Notably, Facebook’s network of third-party expert fact-checkers failed to detect the vast majority (84%) of the health misinformation sampled by Avaaz [29]. This underscores the need to leverage novel technologies, such as artificial intelligence (AI), to enhance the effectiveness of misinformation detection and prevention on social media platforms.

AI holds significant potential for combating the spread of false information on social media platforms. AI-powered automated systems can analyze extensive information datasets in real time. Specifically, machine learning (ML) algorithms can identify patterns and inconsistencies in online content by focusing on elements such as names and dates. By training ML models on known instances of false information, the algorithms can learn to distinguish between credible and unreliable sources and discern “standout” attributes of false information, ultimately allowing digital platforms to detect and flag potentially misleading or false text-based content, along with visually manipulated content such as deepfakes, for subsequent human review. Moreover, incorporating natural language processing techniques enables the models to comprehend and interpret human language. This enhanced understanding helps them detect subtle nuances and the context in which text-based content is presented, reducing the likelihood of false positives when identifying mis(dis)information.
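As a purely illustrative sketch of the classification idea described above, the toy example below trains a naive Bayes text classifier on a handful of hand-labeled snippets. The snippets, labels, and vocabulary are invented for this illustration; a production system would train on large labeled corpora with far richer linguistic features.

```python
# Purely illustrative sketch: a toy naive Bayes classifier for flagging
# suspicious text. Training snippets and labels below are invented for
# illustration; real systems would train on large labeled corpora.
import math
from collections import Counter

class ToyNaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents

    def train(self, examples):
        for text, label in examples:
            self.doc_counts[label] += 1
            self.word_counts.setdefault(label, Counter()).update(text.lower().split())

    def predict(self, text):
        vocab = len({w for c in self.word_counts.values() for w in c})
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            total = sum(counts.values())
            score = math.log(self.doc_counts[label] / total_docs)  # class prior
            for w in text.lower().split():
                # Laplace smoothing so unseen words do not zero the score
                score += math.log((counts[w] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

clf = ToyNaiveBayes()
clf.train([
    ("vaccine trial data published in peer reviewed journal", "reliable"),
    ("health ministry reports case numbers and official guidance", "reliable"),
    ("miracle cure doctors hate this secret remedy", "unreliable"),
    ("secret cure suppressed by big pharma miracle", "unreliable"),
])
print(clf.predict("miracle remedy they are hiding"))  # "unreliable": flag for human review
```

In the workflow the text describes, content scored as unreliable would not be removed outright by such a model but queued for human review.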
When a group of individuals starts to push a particular post or trend to the top of search results, an AI-powered system can immediately track the sudden surge of the topic and validate the authenticity of the post by comparing its numbers, names, dates, and other details with content containing similar information from other sources. If false information is identified, the system can temporarily remove the content from other users’ feeds and restrict its influence until a human validates the authenticity of the post.
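The surge-tracking step described above can be sketched as a simple spike detector over a topic's hourly mention counts. The window size, threshold, and counts here are arbitrary assumptions chosen for the example; a production system would use more robust anomaly detection over streaming data.

```python
# Illustrative sketch of surge tracking: flag a topic when its latest hourly
# mention count jumps well above its recent baseline. Window, threshold, and
# the sample counts are arbitrary assumptions for this example.
from statistics import mean, stdev

def surging(counts, window=6, threshold=3.0):
    """True if the newest count sits more than `threshold` standard
    deviations above the mean of the trailing `window` counts."""
    if len(counts) < window + 1:
        return False  # not enough history to judge
    baseline = counts[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return counts[-1] > mu
    return (counts[-1] - mu) / sigma > threshold

hourly_mentions = [12, 15, 11, 14, 13, 12, 95]  # sudden spike in the last hour
if surging(hourly_mentions):
    print("surge detected: hold content for human review")
```

A flagged surge would then trigger the cross-source validation and temporary feed restriction that the paragraph above describes.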
Leveraging AI in the fight against false information thus enables a more proactive and efficient approach to preventing users from sharing and engaging with false or misleading content.

2. Deterrence

Governments can adopt diverse strategies to mitigate the spread of false information.
First, governments should work closely with digital media platforms to formulate policies explicitly designed to address the dissemination of misinformation. Most importantly, a strong and effective process for removing false information from these online platforms must be established. Additionally, governments need to establish protocols that routinely audit the effectiveness and capabilities of the fact-checking mechanisms employed by digital platforms in identifying and mitigating false information. Instituting penalties for platforms that fail to uphold these standards is vital to creating a regulatory framework that incentivizes continuous improvement and accountability in the fight against mis(dis)information.
Second, governments should enact legislation that criminalizes the spreading of false malicious information. For instance, in response to the COVID-19 pandemic, some countries, such as South Africa, have implemented laws that criminalize the dissemination of false or distorted COVID-19 information [30]. Breach of such laws incurs hefty penalties, including fines and imprisonment [30]. Enacting laws that criminalize the creation and dissemination of false information, especially when it has the potential to undermine public health measures and social stability, can discourage individuals from spreading fabricated and misleading content. However, it is essential to exercise caution over the restrictive nature of such legislation, ensuring that it does not unduly suppress freedom of speech.

3. Education

The final set of measures focuses on facilitating digital learning. Low digital health literacy among the population is frequently identified as a key factor in the dissemination of false information online [31]. Consequently, enhancing the overall health literacy of the population is essential for empowering individuals to distinguish between accurate and inaccurate information. Countering false information must go beyond identifying and addressing gaps in knowledge; it requires a comprehensive and collaborative patient-clinician relationship involving multiple disciplines and levels of engagement [32]. Collaboration among social media platforms, public health experts, and researchers is crucial to effectively raising awareness and enhancing digital media literacy skills, especially in vulnerable communities. Social media platforms can use their existing technologies to enable discussions between physicians and the public about treatments and medical interventions, helping to uncover health-related misinformation. Such initiatives, including live group discussions, webinars, and video conferences, have helped raise awareness about medical misinformation, educate the public on identifying false information and assessing the credibility of sources, and give concerned individuals an opportunity to have their questions addressed. Research on parents’ attitudes toward childhood vaccination found that researchers could successfully alter parents’ anti-vaccination views by emphasizing the repercussions of not vaccinating their child [33]. Conversely, interventions designed to refute anti-vaccination beliefs led vaccine skeptics to develop even stronger negative sentiment toward vaccination [33].
Therefore, healthcare professionals should avoid directly challenging or debunking the mistaken or false health beliefs held by individuals when providing explanations and should instead focus on introducing fresh and pertinent factual information that assists in shifting their perspectives, while also ensuring that such messages are delivered using inclusive language.


Addressing and mitigating the spread of misinformation and disinformation is complex and challenging. False information reached new heights during the COVID-19 pandemic, endangering public health and tearing at society’s social fabric. The overabundance of information on the Internet has made it increasingly difficult for public health authorities, the medical community, and its advocates to identify and correct inaccurate and misleading information. Novel digital tools, applications, and strategies are required to address the spread of false information. Collaboration between social media platforms, governments, and public health authorities is the cornerstone of halting the spread of false information, as these platforms are the gatekeepers of the endless information available on the Internet.



No potential conflict of interest relevant to this article was reported.


1. De’ R, Pandey N, Pal A. Impact of digital surge during COVID-19 pandemic: a viewpoint on research and practice. Int J Inf Manage 2020;55:102171.
2. Buchanan T. Why do people spread false information online?: the effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS One 2020;15:e0239666.
3. Lee SK, Sun J, Jang S, Connelly S. Misinformation of COVID-19 vaccines and vaccine hesitancy. Sci Rep 2022;12:13681.
4. Davis S, Matsoso P. COVID-19 as a digital pandemic [Internet]. Seattle (WA): Institute for Health Metrics and Evaluation, Think Global Health; 2020 [cited 2023 Oct 25]. Available from: https://www.thinkglobalhealth.org/article/covid-19-digital-pandemic
5. Gao J, Raza SH, Yousaf M, Shah AA, Hussain I, Malik A. How does digital media search for COVID-19 influence vaccine hesitancy?: exploring the trade-off between Google Trends, infodemics, conspiracy beliefs and religious fatalism. Vaccines (Basel) 2023;11:114.
6. Zhao Y, Cheng S, Yu X, Xu H. Chinese public’s attention to the COVID-19 epidemic on social media: observational descriptive study. J Med Internet Res 2020;22:e18825.
7. World Health Organization. Managing the COVID-19 infodemic: promoting healthy behaviours and mitigating the harm from misinformation and disinformation [Internet]. Geneva: World Health Organization; 2020 [cited 2023 Oct 26]. Available from: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation
8. Gabarron E, Oyeyemi SO, Wynn R. COVID-19-related misinformation on social media: a systematic review. Bull World Health Organ 2021;99:455-63.
9. Strauss B. The long history of disinformation during war. The Washington Post [Internet]. 2022 Apr 28 [cited 2023 Oct 28]. Available from: https://www.washingtonpost.com/outlook/2022/04/28/long-historymisinformation-during-war/
10. Gandhi CK, Patel J, Zhan X. Trend of influenza vaccine Facebook posts in last 4 years: a content analysis. Am J Infect Control 2020;48:361-7.
11. Klofstad CA, Uscinski JE, Connolly JM, West JP. What drives people to believe in Zika conspiracy theories? Palgrave Commun 2019;5:36.
12. Nelson T, Kagan N, Critchlow C, Hillard A, Hsu A. The danger of misinformation in the COVID-19 crisis. Mo Med 2020;117:510-2.
13. Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med 2019;240:112552.
14. Lewandowsky S, Ecker UK, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: continued influence and successful debiasing. Psychol Sci Public Interest 2012;13:106-31.
15. Eg R, Tønnesen OD, Tennfjord MK. A scoping review of personalized user experiences on social media: the interplay between algorithms and human factors. Comput Hum Behav Rep 2023;9:100253.
16. Wang Y, Bye J, Bales K, Gurdasani D, Mehta A, Abba-Aji M, et al. Understanding and neutralising COVID-19 misinformation and disinformation. BMJ 2022;379:e070331.
17. Pager D, Shepherd H. The sociology of discrimination: racial discrimination in employment, housing, credit, and consumer markets. Annu Rev Sociol 2008;34:181-209.
18. O’Malley J, Holzinger A. The Sustainable Development Goals: sexual and gender minorities [Internet]. New York (NY): United Nations Development Programme; 2018 [cited 2023 Oct 28]. Available from: https://www.undp.org/content/undp/en/home/librarypage/hivaids/sexual-and-gender-minorities.html
19. Moravec P, Dennis AR. Fake news on social media: people believe what they want to believe when it makes no sense at all [Internet]. Bloomington (IN): Kelley School of Business at Indiana University; 2018 [cited 2023 Oct 28]. Available from: https://ssrn.com/abstract=3269541
20. Levenson T. Conservatives try to rebrand the coronavirus. The Atlantic [Internet]. 2020 Mar 11 [cited 2023 Oct 30]. Available from: https://www.theatlantic.com/ideas/archive/2020/03/stop-trying-make-wuhanvirus-happen/607786/
21. Hu Z, Yang Z, Li Q, Zhang A. The COVID-19 infodemic: infodemiology study analyzing stigmatizing search terms. J Med Internet Res 2020;22:e22639.
22. Bhanot D, Singh T, Verma SK, Sharad S. Stigma and discrimination during COVID-19 pandemic. Front Public Health 2021;8:577018.
23. Torres-Atencio I, Goodridge A, Vega S. COVID-19: Panama stockpiles unproven drugs. Nature 2020;587:548.
24. Mendel A, Bernatsky S, Thorne JC, Lacaille D, Johnson SR, Vinet E. Hydroxychloroquine shortages during the COVID-19 pandemic. Ann Rheum Dis 2021;80:e31.
25. Seydi E, Hassani MK, Naderpour S, Arjmand A, Pourahmad J. Cardiotoxicity of chloroquine and hydroxychloroquine through mitochondrial pathway. BMC Pharmacol Toxicol 2023;24:26.
26. Ferner RE, Aronson JK. Chloroquine and hydroxychloroquine in covid-19. BMJ 2020;369:m1432.
27. Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. R Soc Open Sci 2020;7:201199.
28. Nyhan B, Reifler J, Richey S, Freed GL. Effective messages in vaccine promotion: a randomized trial. Pediatrics 2014;133:e835-42.
29. Avaaz. Facebook’s algorithm: a major threat to public health [Internet]. New York (NY): Avaaz; 2020 [cited 2023 Oct 29]. Available from: https://secure.avaaz.org/campaign/en/facebook_threat_health/
30. Committee to Protect Journalists. South Africa enacts regulations criminalizing ‘disinformation’ on coronavirus outbreak [Internet]. New York (NY): Committee to Protect Journalists; 2020 [cited 2023 Oct 28]. Available from: https://cpj.org/2020/03/south-africa-enacts-regulationscriminalizing-disi/
31. Bin Naeem S, Kamel Boulos MN. COVID-19 misinformation online and health literacy: a brief overview. Int J Environ Res Public Health 2021;18:8091.
32. Schulz PJ, Nakamoto K. The perils of misinformation: when health literacy goes awry. Nat Rev Nephrol 2022;18:135-6.
33. Horne Z, Powell D, Hummel JE, Holyoak KJ. Countering antivaccination attitudes. Proc Natl Acad Sci U S A 2015;112:10321-4.
Copyright © 2024 by Korean Academy of Family Medicine.
