Governments should protect people against cybercrime, and they should equally respect and protect people's human rights. However, across the world, governments routinely abuse cybercrime laws to crack down on human rights by criminalizing speech. Governments claim they must do so to combat disinformation, “religious, ethnic or sectarian hatred,” “rehabilitation of nazism,” or “the distribution of false information,” among other harms. But in practice they use these laws to suppress criticism and dissent, and to more broadly clamp down on the freedoms of expression and association.
So it is concerning that some UN Member States are proposing vague provisions to combat hate speech to a committee of government representatives (the Ad Hoc Committee) convened by the UN to negotiate a proposed UN Cybercrime treaty. These proposals could make it a cybercrime to humiliate a person or group, or insult a religion using a computer, even if such speech would be legal under international human rights law.
Including offenses based on harmful speech in the treaty, rather than focusing on core cybercrimes, will likely result in overbroad, easily abused laws that will sweep up lawful speech and pose an enormous menace to the free expression rights of people around the world. The UN committee should not make that mistake.
The UN Ad Hoc Committee met in Vienna earlier this month for a second round of talks on drafting the new treaty. During and ahead of the session, several Member States, including Egypt, Jordan, Russia, Belarus, Burundi, China, Nicaragua, Tajikistan, Kuwait, Pakistan, Algeria, and Sudan, put forward vague proposals aimed at online hate speech. Others, including Algeria, Pakistan, Sudan, Burkina Faso, Burundi, India, Egypt, Tanzania, Jordan, Russia, Belarus, China, Nicaragua, and Tajikistan, made proposals aimed at racist and xenophobic materials.
For example, Jordan proposes using the treaty to criminalize “hate speech or actions related to the insulting of religions or States using information networks or websites,” while Egypt calls for prohibiting the “spreading of strife, sedition, hatred or racism.” Russia, jointly with Belarus, Burundi, China, Nicaragua, and Tajikistan, also proposed outlawing a wide range of vaguely defined speech that would sweep in protected expression: “the distribution of materials that call for illegal acts motivated by political, ideological, social, racial, ethnic, or religious hatred or enmity, advocacy and justification of such actions, or to provide access to such materials, by means of ICT (information and communications technology),” as well as “humiliation by means of ICT (information and communications technology) of a person or group of people on account of their race, ethnicity, language, origin or religious affiliation.”
Speech Offenses Don't Belong in the Proposed Cybercrime Treaty
As we have previously said, only crimes that target ICTs should be included in the proposed treaty: offenses in which ICTs are the direct objects and instruments of the crime and which could not exist without ICT systems. These include illegal access to computing systems, illegal interception of communications, data theft, and misuse of devices. Crimes in which ICTs are simply a tool sometimes used to commit an offense, like those in the proposals before the UN Ad Hoc Committee, should be excluded from the proposed treaty. Such crimes merely incidentally involve or benefit from ICT systems without targeting or harming them.
The Office of the United Nations High Commissioner for Human Rights (OHCHR) highlighted in January that any future cybercrime treaty should not include offenses based on the content of online expression:
“Cybercrime laws have been used to impose overly broad restrictions on free expression by criminalizing various online content such as extremism or hate speech.”
Further, harmful speech should not be included among cybercrimes because of the inherent difficulties in defining prohibited speech. Hate speech, the subject of several proposals, is an apt example of the dangers raised by including speech-related harms in a cybercrime treaty.
Because international human rights law lacks a universally agreed-upon definition of hate speech, the term “hate speech” is unhelpful in identifying permissible restrictions on speech. Hate speech can mean different things to different people and capture a broad range of expression, including awful but lawful speech. Vague or overbroad laws criminalizing speech can lead to both state-sanctioned censorship and self-censorship of legitimate speech, because internet users are left uncertain about what speech is disallowed.
Hate speech is often conflated with hate crimes, a confusion that can be problematic when drafting an international treaty. Not all hate speech is a crime: restrictions on speech can come in the form of criminal, civil, administrative, policy, or self-regulatory measures. Although Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) makes clear that any “advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence” must be prohibited by law, prohibition does not necessarily mean criminalization.
Indeed, criminal sanctions are measures of last resort, invoked only in the most extreme situations. As Article19.org explained, the “most severe types of hate speech that may appropriately attract criminal sanction include 'incitement to genocide,' and particularly severe forms of 'advocacy of discriminatory hatred that constitute an incitement to violence, hostility or discrimination.'”
International law already provides sufficient guidance on when speech can be restricted as incitement to hatred, so such provisions need not be included in the treaty. Additional and conflicting provisions on online hate speech in the Cybercrime Treaty would be unnecessary and unwise.
Broad Speech Protection and Very Narrow Limitations on Speech
At the heart of any limitations on the right to free expression must sit the Universal Declaration of Human Rights (UDHR) and the ICCPR, to which the UN Member States that are negotiating the new UN Cybercrime treaty are parties. Article 19 of the ICCPR provides broad protection of freedom of expression. It protects the right to seek, receive, and impart all kinds and forms of expression through any media of one’s choice. States may limit these rights in only very narrow circumstances.
Article 19(3) of the ICCPR lays down the conditions any restriction on freedom of expression must meet: it must be provided for by law (“legality”), designed to achieve a legitimate aim, proportionate to that aim, and necessary in a democratic society. The UN Human Rights Committee’s General Comment 34 establishes that these standards apply to online speech. Deeply offensive expression, blasphemy, defamation of religion, incitement to terrorism, and violent extremism are not categories that may be restricted without scrutiny: any limitation on them must, like limitations on most other categories of speech, satisfy the Article 19(3) test.
Both the UN Special Rapporteur on Freedom of Expression and the Committee on the Elimination of Racial Discrimination (CERD) have underlined that speech prohibitions must satisfy the Article 19(3) test. Moreover, sanctions should primarily be civil: criminal sanctions are measures of last resort, invoked only in the most extreme situations, such as instances of imminent violence. The UN Human Rights Committee’s General Comment 34 and CERD General Recommendation 35 also confirm that any limitations on speech must comply with the Article 19(3) test.
Incitement to Discrimination, Hostility or Violence: The Standard
Although incitement is a category of speech that may permissibly be restricted, existing international law already provides sufficient guidance on how States should respond to it; its inclusion in the Cybercrime Treaty is not needed and will only sow confusion.
As mentioned above, ICCPR Article 20(2) requires Member States to prohibit the advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence based on the following categories: nationality, race, color, ethnicity, language, religion, national or social origin, political or other opinion, gender, sexual orientation, property, birth, disability, or other status.
In its 2012 report, the UN Special Rapporteur developed a standard to assess Article 20 prohibitions that focuses on intent, incitement, and particular harm. First, the speaker must intend to publicly advocate and promote national, racial, or religious hatred towards the specific group. Next, the speech must “create an imminent risk of discrimination, hostility or violence” against the group members. Finally, incitement must aim at producing discrimination, hostility, or violence against the group.
To meet these standards at the national level, the Member States have the following obligations:
- Adopt precise and unambiguous restrictions to combat advocacy of national, racial, or religious hatred that amounts to incitement to discrimination, hostility, or violence. Legal attempts to punish hate speech are often too vague or too broad, and it is frequently unclear whether States’ prohibitions against “advocacy of hatred that constitutes incitement” fall under ICCPR Article 20 or actually target legitimate speech.
- Only enact speech restrictions that pursue legitimate aims as prescribed under ICCPR Articles 19 and 20 or CERD Article 4. Legitimate aims include respecting the rights and reputations of others and protecting national security, public order, or public health or morals. Even here, restrictions must be narrowly tailored: there must be a pressing or substantial need, and restrictions must not be overbroad. Banning speech merely because it is critical is not a legitimate aim. Further, the protection of morals, which reflect social or religious traditions, should not be based on the principles of a single tradition. Under General Comment 34, blasphemy laws, speech restrictions that discriminate in favor of or against a particular religion, and prohibitions on criticism of religious leaders do not serve legitimate aims.
- Opt for measures that do not unnecessarily or disproportionately interfere with freedom of expression. To satisfy the Article 19(3) test, Member States must demonstrate that the speech in question poses an imminent threat of harm and must apply the least intrusive means of restricting speech to achieve the legitimate objective. In addition, the speaker's intent to cause harm must be examined.
This test sets a very high threshold, and many laws have failed to meet it. Myanmar’s hate speech law contained an unlawfully vague definition of the crime of hate speech. Spain’s speech-related offenses did not sufficiently distinguish between the severity of the expression and the impact of that speech when determining proportionate sanctions, as Articles 20(2) and 19(3) require. France’s Avia law also attempted to tackle hateful content online but was declared unconstitutional.
Spread of Disinformation
There is even less agreement on a universal definition of disinformation in international human rights law. Disinformation laws are too often vague and overbroad, capturing protected expression. As Human Rights Watch explained, “false” information can be hotly contested:
"The spread of disinformation that undermines human rights and online gender-based violence requires a government response. However, government responses to these human rights challenges that focus on the criminalization of content can also lead to disproportionate rights restrictions, particularly the right to freedom of expression and privacy."
All kinds of information and ideas are protected under ICCPR Article 19, even those that may “shock, offend, or disturb,” regardless of whether the content is true or false. People have the right to hold and express unsubstantiated views or share parodies or satirical expressions. As the UN Special Rapporteur on the freedom of expression noted, “prohibition of false information is not a legitimate aim under the international human rights law.”
The free flow of information is an integral part of freedom of expression, which is especially important in political speech on matters of public interest. While disinformation disseminated intentionally to cause social harm is problematic, the UN Special Rapporteur emphasized that so too are vague criminal laws that chill online speech and shrink civic space.
The 2017 Joint Declaration on Freedom of Expression and “Fake News,” Disinformation and Propaganda provides key principles under international human rights law to assist states, companies, journalists, and other stakeholders in addressing disinformation. For example, Member States are encouraged to create an enabling environment for free expression, ensure that they disseminate reliable and trustworthy information, and adopt measures to promote media and digital literacy.
In its Resolution 44/12, the UN Human Rights Council stated that responses to disinformation should always comply with the principles of legality, legitimacy, necessity, and proportionality. As with hate speech, vague prohibitions on disinformation will rarely meet the legality standard. For example, the Joint Statement of the UN Special Rapporteur, the OSCE Representative on Freedom of the Media, and the IACHR Special Rapporteur for Freedom of Expression sounded the alarm about the rise of overbroad “fake news” bills in the context of the COVID-19 pandemic. (Human Rights Watch documented the application of these laws, and EFF expressed its concerns about these bills, too.)
On the specific topic of electoral disinformation, the UN Special Rapporteur has said that electoral laws prohibiting the propagation of falsehoods in the electoral process may meet the Article 19(3) test. Additionally, such restrictions should be “narrowly construed, time-limited, and tailored to avoid limiting political debate.”
Despite these cautions, numerous proposals were presented to the UN Ad Hoc Committee that would create new disinformation-related cybercrimes. Tanzania proposed outlawing the “publication of false information.” Jordan suggested including the “dissemination of rumors or false news through information systems, networks or websites.” Russia, jointly with Belarus, Burundi, China, Nicaragua, and Tajikistan, called for prohibiting “the intentional illegal creation and use of digital information capable of being mistaken for information already known and trusted by a user, causing substantial harm.”
Once again, these vague provisions are unlikely to satisfy human rights standards. Their interpretation and application in practice will have an adverse effect on fundamental rights and result in more harm than good.
The Way Forward—Exclude Offenses Based on the Content of Online Expression
EFF joins its partners, including Article 19, Access Now, Priva, and Human Rights Watch, in urging UN Member States to exclude content-related offenses from the proposed UN Cybercrime Treaty. In a letter to the UN Ad Hoc Committee, EFF and more than 130 civil society groups warned that cybercrime laws have already been weaponized to target journalists, whistleblowers, political dissidents, security researchers, LGBTQ communities, and human rights defenders. Member States have no room for error when drafting a global treaty. They should reach consensus to exclude speech-related offenses from the UN Cybercrime treaty.