Section 230 and Other Content Moderation Laws Across the Globe


As the Internet has grown, and companies that host (and moderate) user content have sprung up and spread across the globe, it’s become harder to track the various frameworks and policies that these companies apply to content moderation, which often differ from company to company and from country to country. 
In the United States, the speech policy choices of intermediaries—that is, social media platforms and other sites that host user-generated content (UGC)—are shaped in part by three laws: the First Amendment to the U.S. Constitution; 47 U.S.C. §230 (commonly referred to as Section 230); and the Digital Millennium Copyright Act (DMCA).

The First Amendment and Section 230

As a general matter, the First Amendment protects the right of platforms to choose to host speech, or not, without interference from the government, with a few narrow exceptions for some kinds of illegal speech such as child pornography. In other words, the government cannot interfere with a platform’s curation policies.  

That said, in the early days of the internet, intermediaries faced serious threats from private parties looking to hold them responsible for content that might violate other laws, such as defamation law. In this environment, litigation costs alone could stifle many new services before they ever got going. And, ironically perhaps, services that actively moderated user-generated content might have been more at risk than those that did not moderate at all. 

Section 230, passed into law in 1996, substantially mitigates that risk. Simply put, it largely shields an intermediary from liability for user-generated content, and provides additional protection to ensure that intermediary moderation doesn’t invite new liability. 

Section 230 contains two important provisions related to content moderation and censorship. The first, §230(c)(1), states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," which effectively means that although you are legally responsible for what you say online, if you host or republish other peoples' speech, only those people are legally responsible for what they say. Under Section 230, the only party responsible for unlawful speech online is the person who said it, not the website where they posted it, the app they used to share it, or any other third party. This is true whether the web host is a large platform such as Facebook, or your own personal website or blog. The second provision, §230(c)(2), separately shields providers from liability for good-faith decisions to remove or restrict access to content they consider objectionable.

There are some limitations—Section 230 does not shield intermediaries from liability under federal criminal law, nor does it protect intermediaries from federal intellectual property claims.  In a nutshell, Section 230 makes it possible for sites and services that host user-generated speech and content to exist, and allows users to share their ideas—without having to create their own individual sites or services that would likely have much smaller reach. Since its passage in 1996, this law has been credited with helping to create the internet ecosystem that we have today, giving many more people access to the content that others create than they would ever have otherwise. It’s part of why we have flourishing online communities where users can comment and interact with one another without waiting hours, or days, for a moderator, or an algorithm, to review every post.

Section 230 also helps prevent over-censorship of controversial or potentially problematic or “harmful” content—a category that itself often changes over time, can be subject to political whim, and depends considerably on viewpoint. Platform censorship is a serious problem, but without Section 230, platforms might feel pressured to shut down even more controversial conversations for fear of indirect liability—which would inevitably harm vulnerable groups, whose voices are already often marginalized, more than others. Censorship has been shown to amplify existing imbalances in society—sometimes intentionally and sometimes not—with the result that platforms are more likely to censor the voices of disempowered individuals and communities. 

In short, because of Section 230, if you break the law online, you are the one held responsible, not the website, app, or forum where you said something unlawful. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 in the event that material is found unlawful. Remember—this sharing of content and ideas is one of the major functions of the Internet, from the Bulletin Board Systems of the 80s, to the Internet Relay Chat networks of the 90s, to the forums of the 2000s, to the social media platforms of today. Section 230 protects all of these different types of intermediary services (and many more). While Section 230 didn’t exist until 1996, it was created, in part, to protect the services that already existed—and the many that have come after.


The Digital Millennium Copyright Act

The vast majority of speech takedowns are due to allegations of copyright infringement, using procedures set out in the Digital Millennium Copyright Act (DMCA). 

The DMCA contains two main sections. The "anti-circumvention" provisions (sections 1201 et seq. of the Copyright Act) bar circumvention of access controls and technical protection measures. The "safe harbor" provisions (section 512) protect service providers who meet certain conditions from monetary damages for the infringing activities of their users and other third parties on the net. 

To receive these protections, service providers must comply with the conditions set forth in Section 512, including “notice and takedown” procedures that give copyright holders a quick and easy way to disable access to allegedly infringing content. Section 512 also contains provisions allowing users to challenge improper takedowns. Without these protections, the risk of potential copyright liability would prevent many online intermediaries from providing services such as hosting and transmitting user-generated content. Thus the safe harbors, while imperfect, have been essential to the growth of the internet as an engine for innovation and free expression. That said, the DMCA is often abused to take down lawful content.

Laws in Other Countries

Given the number of large platforms based in the U.S., understanding the First Amendment and Section 230 is essential to understanding platform moderation policies and practices. But we cannot ignore how content moderation is influenced by laws in other countries. For example, of Facebook’s 2.79 billion global users (as of February 2021), only 228 million are in the U.S. 

The vast majority of global internet users are subject to moderation frameworks based on more stringent local laws, even when the companies they use are based in the U.S. In other countries, the government often plays a considerably larger, and more dangerous, role in determining content moderation policies. 

For example, Germany’s flawed NetzDG law, passed in 2017, requires large social media platforms to quickly remove “illegal content” as defined by more than 20 provisions of the German Criminal Code. The broad law forces moderators with potentially limited understanding of context to make decisions quickly, or the companies face steep fines. The law also requires social media platforms with more than two million registered users to name a local representative authorized to act as a focal point for law enforcement and to receive content takedown requests from public authorities. It further mandates that these companies remove or disable content that appears to be “manifestly illegal” within 24 hours of being alerted to it. Human Rights Watch has criticized the law as violating Germany’s obligation to respect free speech. 

Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have followed the German example, passing or proposing similar laws of their own. A proposed law in Indonesia would coerce social media platforms, apps, and other online service providers into accepting local jurisdiction over their content and users’ data policies and practices. Ethiopia’s stringent Computer Crime Proclamation and its Hate Speech and Disinformation Prevention and Suppression Proclamation of 2020 require platforms to police content, giving them 24 hours to take down disinformation or hate speech. More recently, Mauritius proposed a law that would allow the government to intercept traffic to social media platforms. 

The list, unfortunately, goes on: Singapore, inspired by Germany’s NetzDG, passed the Protection from Online Falsehoods and Manipulation Act in May 2019, empowering the government to order platforms to correct or disable content, backed by significant fines if the platform fails to comply. A recently passed law in Turkey goes well beyond NetzDG: its scope covers not only social media platforms but also news sites. In combination with its exorbitant fines and the threat to block access to websites, the law enables the Turkish government to erase any dissent, criticism, or resistance. 

This is only a brief overview of the legal landscape of content moderation. For more information, visit the “Further Readings” section of the site.