Recent elections across the Americas, from the United States to Brazil, have stirred fears about the impact of “fake news”. Earlier this month, EFF made a submission to the Organization of American States (OAS), the pan-American institution currently investigating the extent and impact of false information across the region. While acknowledging the risks, our testimony warned of the dangers of over-reacting to a perceived online threat at the cost of free expression standards in the region.
Over-reaction isn’t just a future hypothetical. During 2018, 17 governments approved or proposed laws restricting online media with the justification of combating online manipulation. Citizens were prosecuted and faced criminal charges in at least ten countries for spreading “fake news.” Disinformation flows are not a new issue, nor is the use of "fake news" as a label to dismiss all criticism as baseless propaganda. The lack of a settled definition for the term magnifies the problem, leaving it open to multiple and inconsistent meanings. Time and again, legitimate concerns about misinformation and manipulation have been misconstrued or distorted to entrench the power of established voices and stifle dissent. To avoid these pitfalls, EFF’s submission presented recommendations and stressed that the human rights standards on which the Inter-American System builds its work already provide substantial guidelines and methods to address disinformation without undermining free expression and other fundamental rights.
The Americas’ human rights standards, which include the American Convention on Human Rights, declare that restrictions on free expression must (1) be clearly and precisely defined by law, (2) serve compelling objectives authorized by the American Convention, and (3) be necessary in a democratic society to accomplish those objectives and strictly proportionate to them. New prohibitions on the online dissemination of information based on vague notions such as “false news” fail to comply with this three-part test. Restrictions on free speech that vaguely claim to protect the “public order” also fall short of these requirements.
The American Convention on Human Rights also provides that the right of free expression may not be restricted by indirect methods or means. Since most communication on the Internet is facilitated by intermediaries, such as ISPs and social media platforms, unnecessary and disproportionate measures targeting them invariably result in undue limitation of the rights to freedom of expression and access to information. Governmental orders to shut down mobile networks or block entire social media platforms, as well as legislation compelling intermediaries to remove content within 24 hours of a user’s notice or to deploy automated content filters, all in the name of countering “fake news,” clearly constitute an excessive approach that harms free speech and access to information. Holding Internet intermediaries liable for third-party content encourages self-censorship by platforms and hinders innovation.
Any State’s attempt to tackle disinformation in electoral contexts must carefully avoid undercutting the deep connection between democracy and free expression. The fiercest debates over society and a government’s direction take place during elections, when public engagement is at its height. While abuses of free speech can and should be addressed through subsequent civil liability of the person responsible for the content, companies should not be turned into a kind of speech police. Experience has proven this is not a wise alternative; private platforms are prone to error and can disproportionately censor the less powerful. When Internet intermediaries establish terms and rules for their platforms, they should follow standards of transparency, due process, and accountability, and take human rights principles into account, including free expression, access to information, and non-discrimination.
So what can be done? In our submission, we outlined some guidelines on how to address actions aimed at combating disinformation during elections:
- Advancing transparency and accountability in content moderation. Platforms need better practices with regard to user notification, due process, and available data on content restriction and account suspension, as developed in the Santa Clara Principles.
- Deploying better tools for users, including greater user customization of feed and search algorithms and increased transparency of electoral advertising, among other measures.
- Avoiding steps that might undermine personal privacy, including subverting encryption. Denying user security is not an answer to disinformation.
- Paying attention to network neutrality and platform competition. Zero-rating practices may discourage users from seeking alternative sources of information or even from reading the full news piece. Data portability and interoperability, on the other hand, can help bring more players and sources into the field.
As underscored in EFF’s submission, the abundance of information in the digital world should not be deemed, in itself, a problem. But the responses to the “fake news” phenomenon—if they’re unable to adhere to proper human rights standards—could be.