Between Free Speech and Public Harm: Navigating Misinformation Under the First Amendment
By: Zixuan Wang
Misinformation disseminated through social media has raised serious concerns about its impact on elections, public health, and national security. While several nations have adopted legal mechanisms to combat these threats, the United States continues to grapple with how to curb harmful falsehoods without abandoning its longstanding commitment to free speech. Any legislative response to misinformation must therefore account carefully for fundamental constitutional safeguards, ensuring that measures aimed at protecting the public do not undermine the nation’s core value of open discourse.
The core legal principle guiding this examination originates in the First Amendment to the United States Constitution, which declares that “Congress shall make no law... abridging the freedom of speech.” The Supreme Court has consistently underscored the primacy of free expression, particularly political speech, as central to democratic governance. The Court has equally recognized, however, that this right is not absolute. Certain categories of speech—including incitement, defamation, true threats, obscenity, and fraud—fall outside full constitutional protection precisely because their harm to society outweighs their expressive value.
In addressing the specific question of misinformation, the precedent established in United States v. Alvarez provides a foundational interpretive lens. In Alvarez, the Court held that speech is not categorically stripped of First Amendment protection merely because it is false. The plurality opinion, authored by Justice Kennedy, emphasized that the remedy for false speech is ordinarily “speech that is true,” underscoring an enduring commitment to the marketplace of ideas. This principle does not, however, preclude regulatory intervention where false speech causes demonstrable and significant harm. Indeed, Justice Breyer, concurring in the judgment, explained that the constitutional protection of false speech depends on weighing the governmental interest against the burden imposed on free expression.
Thus, the crucial question becomes whether misinformation disseminated through social media platforms meets a threshold that justifies governmental intervention under the First Amendment’s doctrinal standards. For content-based regulation, the Court applies strict scrutiny: the regulation must (1) serve a compelling state interest and (2) be narrowly tailored to achieve that interest without restricting substantially more protected expression than necessary.
As the Court has repeatedly recognized, preserving the integrity of democratic elections, safeguarding public health, and protecting national security are governmental interests of the highest order. Misinformation deliberately propagated to disrupt elections, undermine public health efforts, or endanger national security may inflict harm comparable to that of fraud, defamation, or incitement—categories of speech the Court has explicitly recognized as constitutionally regulable. For instance, misinformation that deceives citizens about election logistics (e.g., false claims about voting locations, eligibility, or timing) impedes democratic participation much as voter intimidation or electoral fraud does. Similarly, false medical claims disseminated widely during a public health crisis can directly endanger human life, analogous to the fraudulent speech the First Amendment has historically left unprotected. In these narrow circumstances, governmental action finds robust constitutional justification.
Even a compelling interest, however, does not license broad censorship. Under established doctrine, regulations must be specifically drawn to target only the problematic elements of speech. The critical constitutional inquiry thus becomes how the government may regulate misinformation without excessively intruding upon protected speech.
Supreme Court precedent strongly suggests the constitutional validity of transparency and accountability regulations rather than direct prohibitions on content. In Citizens United v. Federal Election Commission, the Court upheld disclosure requirements, reasoning that informed citizens are the most effective antidote to harmful speech. Regulations aimed at revealing the sources, methods of amplification (such as algorithmic or automated propagation), and ownership of accounts spreading misinformation would therefore face fewer constitutional objections, because they target practices that undermine the marketplace of ideas rather than censoring content per se.
Moreover, a growing body of empirical research demonstrates that misinformation can spread more rapidly and widely than verified information, propelled by social media algorithms that magnify its virality and reinforce echo-chamber effects. In light of these findings, narrowly tailored amendments to existing statutory frameworks such as Section 230 of the Communications Decency Act, which currently grants platforms broad immunity for third-party content, may provide constitutionally permissible mechanisms for accountability. Rather than imposing broad liability for content per se, statutory reforms could condition immunity on platforms meeting minimal standards of due diligence and transparency with respect to demonstrably harmful misinformation. Such an approach parallels the Court’s historical tolerance for indirect regulation of harmful expression through procedural or transparency requirements that neither criminalize nor remove protected speech.
It remains crucial that governmental regulations avoid vague or subjective definitions of misinformation. The Court has been explicit that any regulation affecting speech must provide clear standards to prevent arbitrary enforcement. Laws that are overbroad or imprecise in delineating what constitutes “misinformation” risk unconstitutional overreach and the chilling of protected speech. Any permissible regulatory scheme must therefore establish rigorous standards that distinguish clearly and objectively between protected falsehoods—inevitable in public discourse—and deliberately deceptive misinformation that demonstrably threatens core democratic values or public safety.
Finally, any sustainable regulatory framework must be accompanied by strong governmental efforts to promote digital literacy. Consistent with Justice Brandeis’s admonition in Whitney v. California that “the remedy to be applied is more speech, not enforced silence,” initiatives that strengthen citizens’ ability to assess online content critically are not merely permissible—they are endorsed by First Amendment principles. By equipping the public to evaluate and challenge dubious claims, such educational programs both bolster the free exchange of ideas and mitigate the corrosive effects of misinformation, thereby preserving the foundational role of open discourse in American democracy.
Therefore, while broad content-based restrictions on misinformation would be constitutionally impermissible, narrowly tailored regulatory measures addressing demonstrably harmful misinformation—particularly regulations focusing on transparency, accountability, and procedural safeguards—align squarely with established First Amendment jurisprudence. Such an approach would effectively balance robust protection of free expression with the equally vital interest in preserving democratic institutions, public health, and national security, satisfying constitutional scrutiny while respecting the Supreme Court’s enduring commitment to free speech.
Bibliography
47 U.S.C. § 230 (2018).
Allcott, Hunt, and Matthew Gentzkow. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31, no. 2 (2017): 211–36.
Ashcroft v. ACLU, 542 U.S. 656 (2004).
Bail, Christopher A., et al. “Exposure to Opposing Views on Social Media Can Increase Political Polarization.” Proceedings of the National Academy of Sciences 115, no. 37 (2018): 9216–21.
Bontcheva, Kalina, and Julie Posetti. Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression: Broadband Commission Research Report on “Freedom of Expression and Addressing Disinformation on the Internet.” Geneva: UNESCO and ITU, 2020.
Bovet, Alexandre, and Hernán A. Makse. “Influence of Fake News in Twitter during the 2016 US Presidential Election.” Nature Communications 10, no. 1 (2019): 7.
Brady, William J., Molly J. Crockett, and Jay J. Van Bavel. “The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online.” Perspectives on Psychological Science 15, no. 4 (2020): 978–1010.
Broadrick v. Oklahoma, 413 U.S. 601 (1973).
Buckley v. Valeo, 424 U.S. 1 (1976).
Chaplinsky v. New Hampshire, 315 U.S. 568 (1942).
Citizens United v. Fed. Election Comm’n, 558 U.S. 310 (2010).
Colomina, Carme, Héctor Sánchez Margalef, Richard Youngs, and Kate Jones. The Impact of Disinformation on Democratic Processes and Human Rights in the World. Brussels: European Parliament, 2021.
de Cock Buning, Madeleine. A Multi-dimensional Approach to Disinformation: Report of the Independent High Level Group on Fake News and Online Disinformation. Luxembourg: Publications Office of the European Union, 2018.
Grayned v. City of Rockford, 408 U.S. 104 (1972).
Holder v. Humanitarian Law Project, 561 U.S. 1 (2010).
Jacobson v. Massachusetts, 197 U.S. 11 (1905).
Joseph, Andrew M., et al. “COVID-19 Misinformation on Social Media: A Scoping Review.” Cureus 14, no. 4 (2022).
Kim, Young Mie. “Voter Suppression Has Gone Digital.” Brennan Center for Justice, November 20, 2018. https://www.brennancenter.org/our-work/analysis-opinion/voter-suppression-has-gone-digital.
New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
Red Lion Broad. Co. v. FCC, 395 U.S. 367 (1969).
Reno v. ACLU, 521 U.S. 844 (1997).
Sadiq, Muhammed T., and Saji K. Mathew. “The Disaster of Misinformation: A Review of Research in Social Media.” International Journal of Data Science and Analytics 13, no. 4 (2022): 271–85.
Simon & Schuster, Inc. v. Members of the New York State Crime Victims Board, 502 U.S. 105 (1991).
U.S. Const. amend. I.
United States v. Alvarez, 567 U.S. 709 (2012).
United States v. Playboy Entertainment Group, Inc., 529 U.S. 803 (2000).
Vandewalker, Ian. Digital Disinformation and Vote Suppression. Brennan Center for Justice, September 2, 2020. https://www.brennancenter.org/our-work/research-reports/digital-disinformation-and-vote-suppression.
Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, 425 U.S. 748 (1976).
Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (2018): 1146–51.
Ward v. Rock Against Racism, 491 U.S. 781 (1989).
Whitaker v. Thompson, 353 F.3d 947 (D.C. Cir. 2004).
Whitney v. California, 274 U.S. 357 (1927) (Brandeis, J., concurring).