A July 4 preliminary injunction issued by a federal judge in Louisiana limiting government contacts with social media platforms deals with government “jawboning”—urging private persons and entities to censor another’s speech—a serious issue deserving serious attention and judicial scrutiny.
The First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech.
But not every communication to an intermediary about users’ speech is unconstitutional. And the line between proper and improper government communications is often obscure.
So, while the court order is notable as the first to hold the government accountable for unconstitutional jawboning of social media platforms, and appropriately recognizes the First Amendment right of persons to receive information online free of unlawful government interference, it is not the serious examination of jawboning issues that is sorely needed. The court did not distinguish between unconstitutional and constitutional interactions or provide guideposts for distinguishing between them in the future.
The injunction comes in a lawsuit brought by Louisiana, Missouri, and several individuals alleging federal government agencies and officials illegally pushed the platforms to censor content about COVID safety measures and vaccines, elections, and Hunter Biden’s laptop, among other issues. The court sided with the plaintiffs, issuing a broad injunction that does not clearly track First Amendment standards.
Oddly, the injunction includes exceptions that permit some of the most concerning government interactions and indicates that the court may have been more concerned with the subject matter of the government’s complaints—for instance, posts encouraging vaccine hesitancy—than with drawing a workable line on the government’s conduct.
Government Involvement in Content Moderation Raises Human Rights Issues
Because government involvement in private platforms’ content moderation processes raises serious human rights concerns, we have urged companies to proceed with caution in their editorial decision-making. As we have written:
“When sites cooperate with government agencies, it leaves the platform inherently biased in favor of the government's favored positions. It gives government entities outsized influence to manipulate content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for government—and particularly law enforcement—to use the systems to coerce and pressure platforms to moderate speech they may not otherwise have chosen to moderate.”
EFF was also one of the co-authors and original endorsers of the second version of the Santa Clara Principles, which specifically scrutinizes “State Involvement in Content Moderation,” and affirms that “state actors must not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.” The Santa Clara Principles recognize that government involvement in private companies’ content moderation processes raises human rights concerns not raised by the companies’ consultations with other experts.
“Companies should recognize the particular risks to users’ rights that result from state involvement in content moderation processes. This includes a state’s involvement in the development and enforcement of the company’s rules and policies, either to comply with local law or serve other state interests. Special concerns are raised by demands and requests from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts) for the removal of content or the suspension of accounts.”
Bar Should Be Low When Government Is Accused of Jawboning
Recognizing the gravity of the issue, we have written about jawboning and filed several amicus briefs in cases that raise the issue. In those briefs, we have focused primarily on the question of when private platforms may be liable when they respond to government jawboning. On that issue we have set a fairly high bar—private entities shall not be considered state actors unless “first, the government replaces the intermediary’s editorial policy with its own, second, the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech, and third, the censored party has no possible remedy against the government.”
But we have set a fairly low bar for when the government itself should be liable for trying to coerce private entities to censor speech:
“When the government exhorts private publishers to censor, the censored party’s first and favored recourse is against the government. And the narrow path to holding private publishers liable as state actors proposed above in no way limits a plaintiff’s ability to hold governments liable for their role in pressuring social media companies to censor user speech. . . . In First Amendment cases, there is a lower threshold for suits against government agencies and officials that coerce private censorship: the government may violate speakers’ First Amendment rights with “system[s] of informal censorship” aimed at speech intermediaries. Bantam Books v. Sullivan, 372 U.S. 58, 61, 71 (1963).”
We also filed a FOIA lawsuit designed to uncover the US government’s involvement in the widespread removal of programs featuring a Palestinian activist from Zoom, YouTube, Facebook, and Eventbrite. We joined with other organizations to urge the administration to drop its planned Disinformation Governance Board. And we sharply criticized the “trusted flagger” provisions of the EU’s Digital Services Act under which a state’s Digital Services Coordinator can designate law enforcement agencies to be among those whose “flags” to hosting services of illegal content must be given priority. We also filed comments with Meta’s Oversight Board protesting Facebook acting upon law enforcement flags and removing drill music videos.
Not All Communications Between Platforms and Government Are Improper
We have also acknowledged that not every communication, interaction, or cooperative effort between a social media company and the government is unwise. As we have written in our amicus briefs:
“...content moderation is a difficult and often fraught process that even the largest and best resourced social media companies struggle with, often to the frustration of users. To even hope for fairness and consistency in their decisions, social media companies need to have breathing room to draw on outside resources. Indeed, the First Amendment protects this information gathering part of their editorial process. . . . [In addition to seeking input from users and NGOs] Platforms also seek input from governments. Although concerning, this is appropriate where the government is uniquely situated to verify information—such as the location of polling places, a list of street closures, or a synopsis of the CDC’s current COVID policies.”
Nor is every government communication to an intermediary about its users’ speech unconstitutional. The First Amendment bars the government from coercing censorship or providing “such significant encouragement” that the ultimate choice to censor must be considered that of the state, not the intermediary. Encouragement falling short of that extreme does not violate the First Amendment. Nor are all exhortations to intermediaries improper. Mere approval of or acquiescence in the intermediary’s decision is not a constitutional violation.
The Supreme Court has held that government need not “renounce all informal contacts with persons” and may advise them, for example, how to comply with the law. Government should be able to criticize the content moderation practices and policies of social media companies without violating the First Amendment, as long as it does not expressly or implicitly threaten them with a penalty for failing to do the government’s bidding.
Unfortunately, the order does not make an adequate effort to distinguish between proper and improper communications by the government.
While these distinctions may be difficult, the district court did not seriously engage with them. The court’s ruling looks at the government’s actions broadly, and then deems all the various agencies’ and individuals’ actions improper encouragement. Based on the court’s findings, some of the instances do seem coercive, like those involving the president’s former Director of Digital Strategy Rob Flaherty, but others do not. For example, it is not clear what the Census Bureau or the Centers for Disease Control did to cross the First Amendment line. The court’s finding of improper coordination with several private misinformation remediation projects also seems thin.
The court’s injunction likewise applies to whole government agencies and perhaps thousands of federal employees. It is not limited to the specific examples of interactions discussed. And it prohibits not only coercion and forceful encouragement, but all urging and encouragement.
Unnecessary Exemptions
The preliminary injunction also specifically allows the Biden administration to “notify and contact” social media platforms about numerous topics. These exceptions were unnecessary—the First Amendment doesn’t bar the government from contacting or notifying anyone about anything as long as there is no coercion or forceful encouragement. But the subjects the court lists reveal a lot about its own value judgments regarding which topics the government has a legitimate interest in addressing—correcting public health misinformation is noticeably excluded, while law enforcement flagging is included.
Some of the exempted subjects would seem to cover the very matters complained of in the complaint. The injunction does not apply to contacting or notifying social media companies about postings involving criminal activity or criminal conspiracies, national security threats, extortion, or other threats posted on their platforms, or criminal efforts to suppress voting. Nor does it apply to contacting or notifying platforms about illegal campaign contributions, cyber-attacks against election infrastructure, foreign attempts to influence elections, threats to the public safety or security of the US, or postings intended to mislead voters about voting requirements and procedures.
The injunction also does not block the government from exercising “permissible public government speech” promoting government policies or views on matters of public concern. And it does not bar communicating with social media companies to detect, prevent, or mitigate malicious cyber activity, or about deleting, removing, suppressing, or reducing posts that are not protected by the Free Speech Clause of the First Amendment.
It seems clear the court recognizes that it is appropriate in many circumstances for the government to “inform” or “notify” social media platforms about what it considers to be problematic content. But the opinion sharply criticizes many of the systems the companies and the government use for such exchanges of information. And it offers little guidance as to when “notifying” and “contacting” rise to the level of coercion or improper encouragement.
It also bears noting that the type of law enforcement involvement in content moderation allowed by the court’s order raises some of the most serious human rights concerns. This is why we have strongly criticized granting “trusted flagger” status to law enforcement agencies.
Lastly, in an unfortunate moment that has caused many to question the seriousness of the court’s endeavor, the court characterizes the complaint as describing “arguably the most massive attack against free speech in United States history.”
One could argue about what actually is the most massive assault on freedom of speech in our nation’s history. But without denigrating the seriousness of the allegations in this complaint, my vote is for the 42-year reign of Anthony Comstock as a special agent of the U.S. Post Office, where he zealously sought to enforce the morality law he pushed Congress to pass, the effects of which we are living with to this day, 150 years later.