Read the original article: Section 230 is Good, Actually
Even though it’s only 26 words long, Section 230 doesn’t say what many think it does.
So we’ve decided to take up a few kilobytes of the Internet to explain what, exactly, people are getting wrong about the primary law that defends the Internet.
Section 230 (47 U.S.C. § 230) is one of the most important laws protecting free speech online. While its wording is fairly clear—it states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”—it is still widely misunderstood. Put simply, the law means that although you are legally responsible for what you say online, if you host or republish other people’s speech, only those people are legally responsible for what they say.
But there are many misconceptions, as well as misinformation from Congress and elsewhere, about Section 230: who it affects, what it protects, and what the results of a repeal would be. To help explain what’s actually at stake when we talk about Section 230, we’ve put together responses to several common misunderstandings of the law.
Section 230 should seem like common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party.
Let’s start with a breakdown of the law, and the protections it creates for you.
How Section 230 protects free speech:
Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions.
One of the Internet’s most important functions is that it allows people everywhere to connect and share ideas—whether that’s on blogs, social media platforms, or educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 says that any site that hosts the content of other “speakers”—from writing, to videos, to pictures, to code that others write or upload—is not liable for that content, with important exceptions for violations of federal criminal law and for intellectual property claims.
Section 230 makes only the speaker themselves liable for their speech, rather than the intermediaries through which that speech reaches its audiences. This makes it possible for sites and services that host user-generated speech and content to exist, and allows users to share their ideas—without having to create their own individual sites or services that would likely have much smaller reach. This gives many more people access to the content that others create than they would ever have otherwise, and it’s why we have flourishing online communities where users can comment and interact with one another without waiting hours, or days, for a moderator, or an algorithm, to review every post.
And Section 230 doesn’t only allow sites that host speech, including controversial views, to exist. It allows them to exist without putting their thumbs on the scale by censoring controversial or potentially problematic content. And because what is considered controversial is often shifting, and context- and viewpoint-dependent, it’s important that these views are able to be shared. “Defund the police” may be considered controversial speech today, but that doesn’t mean it should be censored. “Drain the Swamp,” “Black Lives Matter,” or even “All Lives Matter” may all be controversial views, but censoring them would not be beneficial.
Online platforms’ censorship has been shown to amplify existing imbalances in society—sometimes intentionally and sometimes not. The result is that platforms are more likely to censor the voices of disempowered individuals and communities. Without Section 230, any online service that did continue to exist would more than likely opt to censor more content—and that would inevitably harm marginalized groups more than others.
No, platforms are not legally liable for other people’s speech, nor would that be good for users.
Basically, Section 230 means that if you break the law online, you should be the only one held responsible, not the website, app, or forum where you said the unlawful thing. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 in the event that that material is found unlawful. Remember—this sharing of content and ideas is one of the major functions of the Internet, from the Bulletin Board Systems of the 80s, to the Internet Relay Chat networks of the 90s, to the forums of the 2000s, to the social media platforms of today. Section 230 protects all of these different types of intermediary services (and many more). While Section 230 didn’t exist until 1996, it was created, in part, to protect those services that already existed—and the many that have come after.
What’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230.
If you consider that one of the Internet’s primary functions is as a way for people to connect with one another, Section 230 should seem like common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party. This makes particular sense when you consider the staggering quantity of content that online services host. A newspaper publisher, by comparison, usually has 24 hours to vet the content it publishes in a single issue. Compare this with YouTube, whose users upload at least 400 hours of video every minute, an impossible volume to meaningfully vet in advance of publishing online. Without Section 230, the legal risk associated with operating such a service would deter any entrepreneur from starting one.
We’ve put together an infographic about how Section 230 works that you can also view to get a quick rundown of how the law protects Internet speech, and a detailed explanation of how Section 230 works for bloggers and comments on blogs, if you’d like to see how this scenario plays out in more detail.
No, Section 230 is not a “hand-out to Big Tech,” a big tech “immunity,” or a “gift” to companies. Section 230 protects you and the forums you care about, not just “Big Tech.”
Section 230 protects Internet intermediaries—individuals, companies, and organizations that provide a platform for others to share speech and content over the Internet. Yes, this includes social networks like Facebook, video platforms like YouTube, news sites, blogs, and other websites that allow comments. It also protects educational and cultural platforms like Wikipedia and the Internet Archive.
But it also protects some sites and activities you might not expect—for example, everyone who sends an email, as well as any cybersecurity firm that uses user-generated content for its threat assessments, patches, and advisories. The organizations that signed onto a letter about the importance of Section 230 include Automattic (maker of WordPress), Kickstarter, Medium, GitHub, Cloudflare, Meetup, Patreon, and Reddit. But just as important as currently existing services and platforms are those that don’t exist yet—because without Section 230, it would be cost-prohibitive to start a new service that allows user-generated speech.
No, the First Amendment is not at odds with Section 230.
Online platforms are within their First Amendment rights to moderate the content they host however they like, and they’re additionally shielded by Section 230 from many types of liability for their users’ speech. It’s not one or the other. It’s both.
Some history on Section 230 is instructive here. Section 230 originated as an amendment to the Communications Decency Act (CDA), which was introduced in an attempt to regulate sexual material online. The CDA amended telecommunications law by making it illegal to knowingly send to or show minors obscene or indecent content online. The House passed the Section 230 amendment with a sweeping majority, 420-4.
The online community was outraged by the passage of the CDA. EFF and many other groups pushed back on its overly broad language and launched a Blue Ribbon Campaign, urging sites to “wear” a blue ribbon and link back to EFF’s site to raise awareness. Several sites chose to black out their webpages in protest.
The ACLU filed a lawsuit, joined by EFF and other civil liberties and industry groups, that reached the Supreme Court. On June 26, 1997, in a 9-0 decision, the Supreme Court struck down the anti-indecency sections of the CDA as unconstitutional under the First Amendment. Section 230, the amendment that promoted free speech, was not affected by that ruling. As it stands now, Section 230 is pretty much the only part of the CDA left. But it took several different lawsuits to get there.
But Section 230 only shields an intermediary from liability that already exists. If speech is protected by the First Amendment, there can be no liability either for publishing it or republishing it, regardless of Section 230. As the Supreme Court recognized in the Reno v. ACLU case, the First Amendment’s robust speech protections fully apply to online speech. Section 230 was included in the CDA to ensure that online services could decide what types of content they wanted to host. Without Section 230, sites that removed sexual content could be held legally responsible for that action, a result that would have made services leery of moderating their users’ content, even if they wanted to create online spaces free of sexual content. The point of 230 was to encourage active moderation to remove sexual content, allowing services to compete with one another based on the types of user content they wanted to host.
Moreover, the First Amendment also protects the right of online platforms to curate the speech on their sites—to decide what user speech will and will not appear on their sites. So Section 230’s immunity for removing user speech is perfectly consistent with the First Amendment. This is apparent given that prior to the Internet, the First Amendment gave non-digital media, such as newspapers, the right to decide what stories and opinions they would publish.
No, online platforms are not “neutral public forums.”
Nor should they be. Section 230 does not say anything like this. And trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would violate the First Amendment. The Supreme Court has confirmed the fundamental right of publishers to have editorial viewpoints.
It’s also foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. One of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users’ posts. In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe; because Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users’ posts. Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was “surpassingly stupid.” That realization led Cox and then-Rep. Ron Wyden to introduce the Internet Freedom and Family Empowerment Act, which would later become Section 230.
In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: under Section 230, platforms devoted to niche interests and minority views can thrive.
Print publishers and online services are very different and are treated differently under the law, as they should be.
It’s true that online services do not have the same liability for their content that print media does. Unlike print publications such as newspapers, which are legally responsible for the content they print, online publications are relieved of this liability by Section 230. The major distinction the law creates is between online and offline publication, a recognition of the inherent differences in scale between the two modes of publication. (Despite claims otherwise, there is no legal significance to labeling an online service a “platform” as opposed to a “publisher.”)
But an additional purpose of Section 230 was to eliminate any distinction between those who actively select, curate, and edit the speech before distributing it and those who are merely passive conduits for it. Before Section 230, courts effectively disincentivized platforms from engaging in any speech moderation. Section 230 provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and those to transmit it without moderation.
“User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website or posts to message boards or newsgroups.