The Past, Present, and Future of Section 230

By: Gillian Ho
Edited by: Clark Mahoney and Lauren Levinson

Section 230 of the Communications Decency Act has been in effect since 1996. The law protects online platforms from liability for content posted by their users, shielding companies such as Facebook and Reddit from being treated as publishers of user-generated content. [1] Section 230 also allows platforms to moderate and remove content “in good faith” without being held liable for those actions. While the statute was originally enacted to foster free speech and protect online service providers, its interpretation and application are now the subject of sustained debate. Courts, for example, have used the statute to end lawsuits at an early stage that would otherwise hold providers and users liable for third-party content. As digital communication has grown more complex, the meaning and practical impact of Section 230 have been continually tested. Policymakers and courts now debate how to balance free expression, innovation, and platform accountability, raising questions about whether the law should be reinterpreted, narrowed, or reformed for the modern internet.

Officially, Section 230 is built around two subsections, 230(c)(1) and 230(c)(2), which together define both the scope of platform immunity and the boundaries of content moderation. The text of 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In practice, this clause prevents courts from holding online platforms legally responsible for most content created by third parties. Congress enacted this language to encourage the growth of online communication by ensuring that platforms would not be forced to assume the traditional legal responsibilities of publishers, such as liability for defamation or other harms arising from user speech. A key aspect of the subsection is its distinction between a publisher, who can be held liable for the content they vet or endorse, and an interactive computer service, which simply hosts user content. Section 230 clarifies that content moderation does not legally transform a computer service into a publisher. This distinction allows platforms to host and facilitate online posts without being required to assume traditional editorial liability.

The second major subsection, 230(c)(2), known as the “Good Samaritan” clause, protects platforms that restrict or remove content in good faith. The law explicitly states that providers cannot be held liable for taking action to moderate material they consider obscene, violent, harassing, or otherwise objectionable, even if that material is constitutionally protected. This clause was designed to encourage responsible moderation by preventing platforms from facing free-speech-related lawsuits for attempting to create safer online environments. 

Together, these two subsections form the foundation of Section 230’s original intent: shielding platforms from liability for user posts while giving them broad discretion to moderate content without legal penalty. Nonetheless, Section 230 was enacted for the online communication world of 1996. Today, companies like Facebook, YouTube, TikTok, and X serve billions of users and operate as primary venues for public communication, news consumption, and political discussion. This shift has sparked debate about whether a policy designed for an earlier internet remains adequate for governing platforms that now shape global information spheres.

The Supreme Court’s consideration of Gonzalez v. Google LLC, 598 U.S. 617 (2023), marked a defining moment in the modern debate over online platform liability. The case arose from a lawsuit filed by the family of Nohemi Gonzalez, an American student killed in the 2015 ISIS attacks in Paris. [2] Her relatives alleged that YouTube, owned by Google, had contributed to ISIS’s recruitment and radicalization efforts by recommending extremist videos through its algorithms. The plaintiffs argued that the platform’s recommendation systems played an active role in promoting dangerous material. The question, therefore, was whether these algorithmic suggestions constituted actions taken “as a publisher,” which Section 230 shields, or whether they amounted to separate platform conduct not covered by the statute. This distinction challenged the foundational assumptions of Section 230. When the law was enacted in 1996, “publishing” largely meant hosting or distributing user content. Recommendation algorithms were far from the minds of the statute’s authors, yet they are now foundational to the content architecture of social media platforms. The plaintiffs argued that algorithmic promotion is fundamentally different from passive hosting because it reflects platform-designed choices that influence which content users see and how it spreads. [3]

Ultimately, the Court avoided the central Section 230 question and instead sent Gonzalez v. Google back to the lower courts. [4] The Court reasoned that the claims in Gonzalez v. Google were insufficient even without considering Section 230, and therefore declined to interpret the statute at all. Most notably, the Court did not determine whether algorithmic recommendations are protected under Section 230. As a result, the distinction between neutral tools, such as chronological feeds, and algorithmic curation remains legally ambiguous. Lower courts have generally treated recommendations as part of the publisher function, but their reasoning varies, and several judges have expressed unease about extending immunity to increasingly sophisticated, personalized, and profit-driven recommendation engines. [5] More broadly, Gonzalez v. Google signals that the future of Section 230 will likely depend on how the law adapts to platform scale and technological sophistication.

In a companion case decided the same day, Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023), the Supreme Court addressed whether platforms can be held secondarily liable under the Anti-Terrorism Act (ATA) for failing to prevent terrorist organizations from using their services. Twitter v. Taamneh originated from a lawsuit filed by the family of a victim of the 2017 ISIS attack at the Reina nightclub in Istanbul. [6] The plaintiffs argued that Twitter, Google, and Facebook had “aided and abetted” ISIS by allowing the group to maintain accounts, disseminate propaganda, and reach global audiences through their platforms. Unlike in Gonzalez, the plaintiffs did not rely on Section 230; instead, they invoked the Justice Against Sponsors of Terrorism Act, a 2016 amendment to the ATA that allows civil suits against entities that “knowingly provide substantial assistance” to terrorist acts. [7] In effect, the plaintiffs argued that the platforms provided an enabling environment for terrorism simply by failing to fully eliminate extremist activity. The core question before the Court was therefore whether the platforms’ failure to eliminate ISIS-related content amounted to “knowing and substantial assistance” in a specific terrorist attack. The Supreme Court unanimously rejected this theory. Writing for the Court, Justice Clarence Thomas held that the ATA requires a concrete and direct connection between the defendant’s actions and the particular attack at issue. [8] Mere awareness that bad actors use a widely available service does not amount to “knowing assistance,” nor does a failure to detect and remove harmful content transform a neutral tool into substantial aid. The Court emphasized that imposing liability for general platform use would dramatically expand the concept of aiding and abetting, potentially exposing countless technology providers to sweeping legal risk.

Although Twitter v. Taamneh did not involve Section 230 directly, its implications for the statute’s future are substantial. The decision reinforces the Court’s reluctance to impose broad liability on platforms for user behavior in the absence of clear legislative direction. It signals that, even outside the protections of Section 230, courts will require highly specific allegations of intentional misconduct before allowing claims against platforms to proceed. This raises the bar for future plaintiffs seeking to hold platforms liable for harms stemming from user content, whether related to terrorism, extremist activity, or other forms of online harm.

The unresolved questions surrounding Section 230 have prompted a wave of reform efforts at both the federal and state levels. Although lawmakers across the political spectrum disagree on the causes and consequences of platform harms, there is bipartisan consensus that Section 230, written in 1996—long before the rise of algorithmic curation and global social media—requires modernization. The central challenge for policymakers is determining how to update the statute without inadvertently undermining the open, participatory structure of the modern internet.

Among the most prominent federal proposals, the EARN IT Act, introduced repeatedly with bipartisan support, seeks to condition Section 230 protections on compliance with best practices for detecting and preventing child sexual exploitation online. [9] Similarly, the bipartisan SAFE TECH Act aims to narrow immunity by excluding paid content, targeted advertising, and certain algorithmic recommendations from Section 230 protection. [10]

Whatever form modernization takes, any successful reform must balance innovation with responsibility and preserve free expression while ensuring user safety. Achieving this balance will determine whether Section 230 continues to support an open, dynamic digital environment or becomes a constraint on the next era of internet development.

Notes:

  1. Congress.gov. “Section 230: An Overview,” 2025. https://www.congress.gov/crs-product/R46751.

  2. “Gonzalez v. Google LLC.” Oyez, 2022. https://www.oyez.org/cases/2022/21-1333.

  3. Hasan, Zayn. “Supreme Court Report: Gonzalez v. Google LLC, 21-1333.” National Association of Attorneys General, October 17, 2022. https://www.naag.org/attorney-general-journal/supreme-court-report-gonzalez-v-google-llc.

  4. Hamm, Andrew. “Gonzalez v. Google LLC.” SCOTUSblog, April 12, 2022. https://www.scotusblog.com/cases/case-files/gonzalez-v-google-llc/.

  5. Congress.gov. “Liability for Algorithmic Recommendations,” 2025. https://www.congress.gov/crs-product/R47753.

  6. “Twitter, Inc. v. Taamneh.” Oyez, 2022. https://www.oyez.org/cases/2022/21-1496.

  7. LII / Legal Information Institute. “Twitter, Inc. v. Taamneh,” 2023. https://www.law.cornell.edu/supct/cert/21-1496.

  8. Golde, Kalvis. “Twitter, Inc. v. Taamneh.” SCOTUSblog, June 8, 2022. https://www.scotusblog.com/cases/case-files/twitter-inc-v-taamneh/.

  9. Graham, Lindsey. “S.1207 - 118th Congress (2023-2024): EARN IT Act of 2023.” Congress.gov, 2023. https://www.congress.gov/bill/118th-congress/senate-bill/1207.

  10. Warner, Mark. “S.560 - 118th Congress (2023-2024): SAFE TECH Act.” Congress.gov, 2023. https://www.congress.gov/bill/118th-congress/senate-bill/560.

Bibliography:

Congress.gov. “Liability for Algorithmic Recommendations,” 2025. https://www.congress.gov/crs-product/R47753.

Congress.gov. “Section 230: An Overview,” 2025. https://www.congress.gov/crs-product/R46751.

Golde, Kalvis. “Twitter, Inc. v. Taamneh.” SCOTUSblog, June 8, 2022. https://www.scotusblog.com/cases/case-files/twitter-inc-v-taamneh/.

“Gonzalez v. Google LLC.” Oyez, 2022. https://www.oyez.org/cases/2022/21-1333.

Graham, Lindsey. “S.1207 - 118th Congress (2023-2024): EARN IT Act of 2023.” Congress.gov, 2023. https://www.congress.gov/bill/118th-congress/senate-bill/1207.

Hamm, Andrew. “Gonzalez v. Google LLC.” SCOTUSblog, April 12, 2022. https://www.scotusblog.com/cases/case-files/gonzalez-v-google-llc/.

Hasan, Zayn. “Supreme Court Report: Gonzalez v. Google LLC, 21-1333.” National Association of Attorneys General, October 17, 2022. https://www.naag.org/attorney-general-journal/supreme-court-report-gonzalez-v-google-llc.

LII / Legal Information Institute. “Twitter, Inc. v. Taamneh,” 2023. https://www.law.cornell.edu/supct/cert/21-1496.

“Twitter, Inc. v. Taamneh.” Oyez, 2022. https://www.oyez.org/cases/2022/21-1496.

Warner, Mark. “S.560 - 118th Congress (2023-2024): SAFE TECH Act.” Congress.gov, 2023. https://www.congress.gov/bill/118th-congress/senate-bill/560.
