*This post is one section of a more extensive piece on Brazil’s platform accountability and regulation debate. Click here to read the entire content.*
The Special Rapporteurs for Freedom of Expression have stated: "At a minimum, intermediaries should not be required to monitor user-generated content." And that: "Content filtering systems which are imposed by a government and which are not end-user controlled are not justifiable as a restriction on freedom of expression."
There are at least two main reasons why general monitoring obligations are a very bad idea. First, such obligations are perhaps the ultimate expression of treating internet applications as a policing force over everything we do and say online, with pernicious consequences for free expression and access to information, and overriding privacy expectations. While applications’ commercial practices often raise similar concerns, societal pushback against corporate surveillance has driven data privacy regulations and changes in companies’ policies to better protect user privacy. Second, general monitoring and the pervasive filtering it entails constantly fail, and their poor performance raises even more human rights concerns. Given the sheer volume of new content that people post and share on internet applications every minute, content moderation increasingly relies on automated tools and reflects their limitations and flaws. Regulations or interpretations mandating the adoption of these tools, and tying such an obligation to sanctions or liability for internet applications, amplify the potential for errors and problematic enforcement.
Speaking just in terms of probability, when a system that is already prone to mistakes is scaled up to moderate content churned out at a rate of many millions to billions of posts per day, more mistakes will occur. And when machine learning models power the artificial intelligence (AI) inside these systems, there is rarely an opportunity for them to recognize and self-correct those mistakes. More often than not, such technologies reproduce discrimination and biases. They are prone to censoring legal, non-offending, and relevant speech. While we advocate, and will continue to advocate, for human review in content moderation processes, having enough human moderators working in adequate conditions to prevent undue content restrictions will remain a continuous challenge.
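To give a sense of scale, here is a back-of-the-envelope sketch with purely illustrative, assumed numbers: even a system that is "99% accurate" produces an enormous absolute number of wrong calls when it runs over billions of posts per day.

```python
# Back-of-the-envelope arithmetic with assumed, illustrative numbers only.
posts_per_day = 1_000_000_000   # hypothetical volume of moderated posts
error_rate = 0.01               # hypothetical: the system is "99% accurate"
wrong_calls = posts_per_day * error_rate
print(f"{wrong_calls:,.0f} erroneous moderation decisions per day")
# -> 10,000,000 erroneous moderation decisions per day
```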
AI systems usually employed in content moderation include image recognition algorithms and natural language processing models. As for the intricacies of training AI language models, experts underscore that language is highly dependent on cultural and social contexts and varies considerably across demographic groups, topics of conversation, and types of platforms. Moreover, training language processing algorithms demands clear and precise definitions of the targeted content, which is very hard to achieve with the complex terms normally involved in characterizing a criminal or illicit practice. Even if we assume that currently available natural language processing tools perform effectively in English, they vary significantly in quality and accuracy for other languages. They can also reproduce discrimination present in their training data, disproportionately affecting marginalized communities, like LGBTQIA+ people and women. Multilingual language models have their own limitations, as they may not accurately reflect the day-to-day language used by native speakers and may fail to account for specific contexts.
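As a hypothetical illustration of how uneven accuracy across languages plays out in practice, suppose classifiers with different false-positive rates are applied to the same volume of lawful posts; all figures below are invented for the sake of the example, and no real classifier is measured here.

```python
# All numbers are invented for illustration only.
assumed_false_positive_rate = {    # share of lawful posts wrongly flagged
    "English":    0.01,
    "Portuguese": 0.03,
    "Guarani":    0.10,            # low-resource languages tend to fare worse
}
lawful_posts_per_day = 10_000_000  # assumed per-language volume

for language, fpr in assumed_false_positive_rate.items():
    wrongly_flagged = int(lawful_posts_per_day * fpr)
    print(f"{language}: ~{wrongly_flagged:,} lawful posts wrongly flagged per day")
```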
In turn, despite current advances in technology, image recognition tools also have their limitations. A good example relates to sexual imagery recognition. Since even people can't agree on where the line is drawn between offending and non-offending sexual imagery, the systems we build to automatically recognize it and remove it from online platforms will naturally tend toward the more conservative estimates to minimize legal risks. Without value judgment, that means expression that is otherwise protected and legal, often coming from sexual minorities, will be deemed inappropriate. A landmark case of platform censorship in Brazil precisely reflects this problem. In 2015, Facebook blocked an early 20th-century picture of a partially dressed indigenous couple, posted by the Brazilian Ministry of Culture to mark the launch of the digital archive Portal Brasiliana Fotográfica right before Brazil's Indigenous Day.
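The "conservative estimate" dynamic can be pictured as a threshold choice. In the hypothetical sketch below (invented score distributions, no real classifier), lowering the removal cutoff so that less violating content slips through necessarily sweeps in more lawful borderline expression, such as nude classical art or indigenous imagery.

```python
# Hypothetical score distributions, for illustration only.
import random

random.seed(0)
violating = [random.gauss(0.85, 0.08) for _ in range(1000)]          # tends to score high
lawful_borderline = [random.gauss(0.55, 0.15) for _ in range(1000)]  # art, health, education...

for cutoff in (0.9, 0.7, 0.5):   # lower cutoff = more "conservative" removal policy
    missed = sum(s < cutoff for s in violating) / len(violating)
    removed = sum(s >= cutoff for s in lawful_borderline) / len(lawful_borderline)
    print(f"cutoff {cutoff}: misses {missed:.0%} of violating content, "
          f"wrongly removes {removed:.0%} of lawful borderline posts")
```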
Relatedly, even as we edge closer to sophisticated AI systems able to accurately distinguish sexual imagery from other material, we stumble onto the age-old problem of art versus porn. Classical art that depicts the nude form continues to be flagged as improper by moderation algorithms, despite overwhelming consensus that it is firmly in the "art" category and neither illegal nor contrary to community standards. Contemporary art further blurs those boundaries, often intentionally. Our capabilities for expression as humans are ever-changing, and this will continue to be a challenge for developers of computer systems built to recognize and categorize user-generated content, which at scale will produce even more mistakes.
A considerable rate of mistakes can also occur in image recognition systems based on hashes. Common errors faced by this type of technology, such as so-called “collisions,” happen because two different images can have the same hash value, leading to false positives, where an image is incorrectly identified as something it is not. This can occur for various reasons: the images may be very similar, the hash function may not be good at distinguishing between different images, or the image may have been corrupted or manipulated. The opposite can also occur: infringing images can be manipulated so that the hash function no longer recognizes and flags them. Beyond these effectiveness issues, such systems undermine protections in the architecture of digital platforms that, by design, ensure the inviolability of communications, privacy, security, and data protection, as is the case with end-to-end encryption.
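To make the two failure modes concrete, here is a minimal sketch in Python, using the Pillow imaging library and toy synthetic images rather than any real deployed matching system: a simple perceptual "average hash" assigns the same fingerprint to two visibly different images (a false positive), while a byte-exact cryptographic hash stops matching after a one-pixel manipulation (evasion).

```python
# Illustrative sketch only: toy images, not a real hash-matching pipeline.
# Assumes the Pillow library is installed.
import hashlib
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Toy perceptual hash: shrink to size x size grayscale, then set one bit
    per pixel depending on whether it is brighter than the image's mean."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes (0 = treated as a match)."""
    return bin(a ^ b).count("1")

# Failure mode 1, collision / false positive: two visibly different images
# (a smooth gradient vs. a hard black/white split) get the same average hash,
# so one can be wrongly flagged as the other.
gradient = Image.linear_gradient("L")        # smooth top-to-bottom gradient
split = Image.new("L", (256, 256), 0)
split.paste(255, (0, 128, 256, 256))         # top half black, bottom half white
print(hamming(average_hash(gradient), average_hash(split)))  # 0 or near 0: a "match"

# Failure mode 2, evasion of exact hashing: changing a single pixel yields a
# completely different cryptographic digest, so a byte-exact blocklist misses it.
tweaked = gradient.copy()
tweaked.putpixel((0, 0), 255)                # one-pixel manipulation
print(hashlib.sha256(gradient.tobytes()).hexdigest()[:16])
print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])    # entirely different
```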
When moderation systems are scaled up to disproportionately large sizes, the reach of any attached monitoring and reporting obligations is scaled up the same way. And these obligations can be, and have been, wielded as the eyes and ears of arbitrary, nondemocratic forces.
Platform regulation should not incentivize interpretations or further regulation demanding general content monitoring and filtering. PL 2630 should be more explicit in repelling such interpretations, and Brazil’s regulatory debate over platform accountability should reject such mandates as neither necessary nor proportionate responses.