Trump Executive Order Misreads Key Law Promoting Free Expression Online and Violates the First Amendment

President Trump’s Executive Order targeting social media companies is an assault on free expression online and a transparent attempt to retaliate against Twitter for its decision to curate (well, really just to fact-check) his posts and deter everyone else from taking similar steps. The good news is that, assuming the final order looks like the draft we reviewed on Wednesday, it won’t survive judicial scrutiny. To see why, let’s take a deeper look at its incorrect reading of Section 230 (47 U.S.C. § 230) and how the order violates the First Amendment.

The Executive Order’s Error-Filled Reading of Section 230

The main thrust of the order is to attack Section 230, the law that underlies the structure of our modern Internet and allows online services to host diverse forums for users’ speech. These platforms are currently the primary way that the majority of people express themselves online. To ensure that companies remain able to let other people express themselves online, Section 230 grants online intermediaries broad immunity from liability arising from publishing another’s speech. It contains two separate and independent protections.

Subsection (c)(1) shields from liability all traditional publication decisions related to content created by others, including editing, and decisions to publish or not publish. It protects online platforms from liability for hosting user-generated content that others claim is unlawful. For example, if Alice has a blog on WordPress, and Bob accuses Clyde of having said something terrible in the blog’s comments, Section 230(c)(1) ensures that neither Alice nor WordPress is liable for Bob’s statements about Clyde. The subsection would also protect Alice and WordPress from claims by Bob in the event that Clyde demanded Alice remove the terrible things said about him and she did so.

Subsection (c)(2) is an additional and independent protection from legal challenges brought by users when platforms decide to edit or to not publish material they deem to be obscene or otherwise objectionable. Unlike (c)(1), (c)(2) requires that the decision be in “good faith.” In the context of the above example, (c)(2) would protect Alice and WordPress when Alice decides to remove a term within the comment from Bob that she considers to be offensive. Bob cannot successfully sue Alice for that editorial action as long as Alice acted in good faith.

The legal protections in subsections (c)(1) and (c)(2) are completely independent of one another. There is no basis in the language of Section 230 to condition (c)(1)’s immunity on a platform also qualifying for immunity under (c)(2). And courts, including the U.S. Court of Appeals for the Ninth Circuit, have correctly interpreted the provisions as distinct and independent liability shields:

 Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties. Subsection (c)(2), for its part, provides an additional shield from liability, but only for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider … considers to be obscene … or otherwise objectionable.”

Even though neither the statute nor the court opinions interpreting it mush these two Section 230 provisions together, the order does. The order claims that (c)(2) “qualifies” subsection (c)(1), thus requiring that all publication decisions be made in “good faith.”

In short, the order tasks government agencies with defining “good faith” and eventually deciding whether any platform’s decision to edit, remove, or otherwise moderate user-generated content was made in good faith.

Should the order have its intended legal effect, a platform would lose both kinds of protection under Section 230 for a single act of editing user content that the government doesn’t like. That essentially works as a trigger stripping Section 230’s protections entirely from any platform that hosts something someone disagrees with. And the impact of that trigger would be much broader than liability for the moderation activities purportedly done in bad faith: once a platform was deemed not to have acted in good faith, it would lose (c)(1) immunity for all user-generated content, not just the triggering content. This could subject platforms to a torrent of private litigation over thousands of completely unrelated publication decisions.

The order also purports to immediately require federal agencies to adopt the order’s tortured reading of Section 230, stating that it “is the policy of the United States that all departments and agencies should apply section 230(c) according to the interpretation set out in this section.”

Although the order cannot actually rewrite 230 or override its interpretation by federal courts, the language is concerning. It could be read as permitting the Department of Justice and other agencies, such as the Federal Trade Commission, to adopt this position in legal or administrative proceedings, such as complaints filed by consumers against online platforms. Thus it could open up another avenue for this administration to attack online platforms based only on the fact that the President does not like the content moderation decisions they’ve made. 

The Executive Order’s First Amendment Problems

Taking a step back, the order purports to give the Executive Branch and federal agencies powerful leverage to force platforms to publish what the government wants them to publish, on pain of losing Section 230’s protections. But even if section 230 permitted this, and it doesn’t, the First Amendment bars such intrusions on editorial and curatorial freedom.

The Supreme Court has consistently upheld the right of publishers to make these types of editorial decisions. While the order faults social media platforms for not being purely passive conduits of user speech, the Court derived the First Amendment right from that very feature.

In its 1974 decision in Miami Herald Publishing Co. v. Tornillo, the Court explained:

A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.

Courts have consistently applied this rule to social media platforms, including in the Ninth Circuit’s recent decision in Prager University v. Google and in a decision yesterday by the U.S. Court of Appeals for the District of Columbia Circuit in a case brought by Freedom Watch and Laura Loomer against Google. In another case, a court ruled that when online platforms “select and arrange others’ materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression—the presentation of an edited compilation of speech generated by other persons.”

And just last term in Manhattan Community Access v. Halleck, the Supreme Court rejected the argument that hosting the speech of others negates these editorial freedoms. The Court wrote, “In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”

It went on to note that “Benjamin Franklin did not have to operate his newspaper as ‘a stagecoach, with seats for everyone,’” and that “The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.”

The Supreme Court also affirmed that these principles apply regardless of whether a forum “is a forum more in a metaphysical than in a spatial or geographic sense.”

EFF filed amicus briefs in Prager U and Manhattan Community Access, urging that very result. These cases thus foreclose the President’s ability to intrude on platforms’ editorial decisions and to transform them into public forums akin to parks and sidewalks.

But even if the First Amendment were not implicated, the President cannot use an order to rewrite an act of Congress. In passing Section 230, Congress did not grant the Executive the ability to make rules for how the law should be interpreted or implemented. The order cannot arrogate to the President power that Congress has not given.

We should see this order in light of what prompted it: the President’s personal disagreement with Twitter’s decisions to curate his own tweets. Thus, despite the order’s lofty praise for “free and open debate on the Internet,” this order is in no way based on a broader concern for freedom of speech and the press.

Indeed, this Administration has shown little regard, and much contempt, for freedom of speech and the press. We’re skeptical that the order will actually advance the ideals of freedom of speech or be justly implemented.

There are legitimate concerns about the current state of online expression, including how a handful of powerful platforms have centralized user speech to the detriment of competition in the market for online services and users’ privacy and free expression. But the order announced today doesn’t actually address those legitimate concerns and it isn’t the vehicle to fix those problems. Instead, it represents a heavy-handed attempt by the President to retaliate against an American company for not doing his bidding. It must be stopped.
