Government's Role in Managing Disinformation
By: Anna Recicka
We are living in a post-truth era. The term was first used by Steve Tesich more than thirty years ago, but humanity is still not immune to the tsunami of information that floods our everyday reality. Cybersecurity strategies have improved over time, but so have online fraud and deliberate disinformation. Cybercrime in 2020 caused losses of over $4 billion, and misinformation posed a serious threat to people's wellbeing. The introduction of AI has made fake news easier to spread through deepfakes and realistic fabricated evidence, and the borders between online spaces and reality have become even more blurred. Social media has turned into a battlefield for public support, not only in politics but also in ideologies, lifestyles, and values. Psychologists argue that most people are susceptible to bias and rarely do the thorough research the content they consume requires. They also tend to question information that supports their views less, especially when it appeals to their emotions. People may share false information "to signal their political affiliation, disparage perceived opponents, or accrue social rewards."
The question of whether the government should limit social media disinformation stems from the question of freedom of speech. Obviously, freedom of speech does not mean freedom to lie. Fraud is a serious crime, whether committed in person or through communication devices. Fake promises and impersonation cause harm, as do misleading health advice and fake news about a particular group of people intended to provoke hatred. Nevertheless, government restrictions are not a perfect solution. All people are biased to some extent, especially in a country as politically divided as the US, and bans and punishment policies can become selective. A better approach for the government is to encourage critical thinking and media literacy.
Examples from other countries show why simply banning misinformation can lead to unexpected results. In Russia, for instance, "spreading fakes on social media" is punishable by up to 15 years in prison. There is even a state-sponsored "Safe Internet League" that claims to protect children from harmful content by simply deleting it. In reality, hundreds of people there have been fined or arrested for telling the truth about the government and the war. The government has its own interests and biases, and a war against misinformation can end with the truth being called "fake" and vice versa.
Another obstacle to blocking untrue content is technical. There are 5.24 billion social media users worldwide, with a significant share belonging to the English-speaking segment. Reviewing and deleting every false post and video is impossible for even the largest department, especially since new content is created at an enormous rate. Delegating this job to artificial intelligence or other automated technology may also be ineffective. First, its concept of "truth" would be based on what people consider to be true, which again depends on the beliefs and interests of the project's programmers and sponsors. Second, news can be true and still harmful: the so-called cherry-picking strategy, a common manipulation tool, publishes facts selectively, without the whole background, and can also create prejudice. Moreover, not every lie is harmful, and some claims cannot be checked through research at all. Social media accounts spreading religion, for example, cannot be categorized as "true" or "fake." Restricting access to such content would be a direct violation of freedom of speech.
Instead of controlling departments, an independent, fully objective and unbiased source of truth would be a good solution. Based on research and critical thinking, it would provide people with verified data. However, there are two problems. First, most people would not spend the time and energy to check every headline they read; with access to vast amounts of information, internet users may just glance over a page and click to the next link. Second, 100% crystal-clear truth is a utopia in a post-truth, biased world. In fact, such resources already exist: FactCheck.org, for example, is a non-profit university project. It highlights misinformation in the speeches of politicians, but influential voices extend far beyond this group. Also, despite its relative popularity (800,000 followers on Facebook), it is still too small to reach most Americans. Trust in such resources is also not as high as trust in popular media, influencers, or politicians.
Even without government interference, many online platforms have their own restriction policies. Most social media platforms block violent, erotic, or hateful content, making mistakes at times but overall proving effective at protecting users. Instagram, for example, labels fake content and reduces its distribution. Delegating these responsibilities to the government would mean turning private platforms into state-controlled entities.
However, these policies do not seem effective enough, as disinformation remains an issue in the US. Although victims of online fraud cannot be blamed for being fooled, those who trust and repost misinformation become part of the problem, along with those who spread it intentionally and profit from it. All social media users are responsible for their online activity, and checking sources and evaluating their reliability before reposting would make online platforms safer. The inspiring example of Finland, with its classes on recognizing fake news, could guide the world toward a new approach to social media; notably, Finland ranks at the top of the Media Literacy Index. Even though some topics require specific knowledge (it is hard to reason about vaccines with no medical education), it is important to raise awareness of reliable and valid data, source checking, propaganda tricks, and the capabilities of AI. When the state tries to control and block everything, its citizens struggle to develop "immunity" to false information and the skills to deal with it. Instead of trying to limit disinformation, which has existed throughout the entire history of communication, the government can focus its effort on decreasing its spread by encouraging critical thinking. Perhaps, with these policies, younger generations will be more resistant to myths in the future.
Bibliography
American Psychological Association. "How and Why Misinformation Spreads." Last updated March 1, 2024. Accessed April 5, 2025. https://www.apa.org/topics/journalism-facts/how-why-misinformation-spreads.
Vereykina, Elizaveta. "273 Criminal Cases on 'Fakes' About the Army Have Been Opened in Russia Since 2022, Investigative Committee." The Barents Observer, January 15, 2024. https://www.thebarentsobserver.com/news/273-criminal-cases-on-fakes-about-the-army-have-been-opened-in-russia-since-2022-investigative-committee/109918.
FactCheck.org. "Our Mission." Accessed April 5, 2025. https://www.factcheck.org/about/our-mission/.
Instagram. "Combatting Misinformation on Instagram." About Instagram, December 16, 2019. Accessed April 5, 2025. https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram.
Kaufman, Marc. "How Finland Is Combating Fake News." CNN, May 28, 2019. Accessed April 5, 2025. https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/.
Lessenski, Marin. "The Media Literacy Index 2023: 'Bye, Bye, Birdie': Meeting the Challenges of Disinformation." Open Society Institute – Sofia, June 2023. Accessed April 5, 2025. https://osis.bg/wp-content/uploads/2023/06/MLI-report-in-English-22.06.pdf.
Mitchell, Amy, Jeffrey Gottfried, Galen Stocking, Mason Walker, and Sophia Fedeli. "Many Americans Say Made-Up News Is a Critical Problem That Needs To Be Fixed." Pew Research Center, June 5, 2019. Accessed April 5, 2025. https://www.pewresearch.org/journalism/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/.
Wikipedia. "Safe Internet League." Last modified March 31, 2024. https://en.wikipedia.org/wiki/Safe_Internet_League.
Statista. "Worldwide Digital Population as of January 2024." Accessed April 5, 2025. https://www.statista.com/statistics/617136/digital-population-worldwide/.
Tesich, Steve. "A Government of Lies." The Nation, December 21, 1992.
U.S. Department of State. "Cybercrime." U.S. Department of State. Accessed April 2, 2025. https://www.state.gov/cybercrime.