Crossposted from Techpresident
We are living in an era in which transparency, whether from governments, corporations, or individuals, has come to be expected. It is no surprise, then, that social media platforms have come under scrutiny in recent years for their content moderation policies, and perhaps none has received as much criticism as Facebook.
The platform, which boasts 900 million users worldwide, has drawn the ire of LGBT rights advocates, Palestinian activists, and others for its seemingly arbitrary content moderation. The platform’s policies are fairly clear, but the manner in which its staff decides whether to keep or delete content has long seemed murky. Until now.
Recently, Facebook posted an elaborate flow chart dubbed its “Reporting Guide,” demonstrating what happens when content is reported by a user. For example, if a Facebook user reports another user’s content as spam, the content is referred (or “escalated”) to Facebook’s Abusive Content Team, whereas harassment is referred to the Hate and Harassment Team. There are also protocols for referring certain content to law enforcement, and for warning a user or deleting his or her account.
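To make the escalation idea concrete, here is a rough sketch of the kind of routing the flow chart describes. The team names are taken from the guide itself; everything else (the function, the report categories, the fallback) is hypothetical and is not Facebook’s actual code.

```python
# Illustrative sketch only: a toy router modeled loosely on the escalation
# paths shown in Facebook's published "Reporting Guide". The team names come
# from the guide; the categories, fallback, and function are assumptions.

ESCALATION_PATHS = {
    "spam": "Abusive Content Team",
    "harassment": "Hate and Harassment Team",
    "hate_speech": "Hate and Harassment Team",       # assumed category
    "credible_threat": "Law Enforcement Referral",    # assumed category
}

def route_report(reason: str) -> str:
    """Return the team a reported item would be escalated to."""
    return ESCALATION_PATHS.get(reason, "Manual Review")  # assumed fallback

if __name__ == "__main__":
    for reason in ("spam", "harassment", "credible_threat", "nudity"):
        print(f"{reason!r} -> {route_report(reason)}")
```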
Facebook should be commended for lending transparency to a process that has long come under criticism for its seeming arbitrariness. Such transparency is imperative to help users understand when their behavior is genuinely in violation of the site’s policies; for example, several activists have reported receiving warnings after adding too many new “friends” too quickly, a result of a sensitive spam-recognition algorithm. Awareness of that fact could help users modify their behavior so as to avoid account suspension.
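For illustration, the friend-request check could plausibly be something as simple as a sliding-window rate limit. The sketch below assumes a window length and threshold purely for the sake of example; none of it reflects Facebook’s actual algorithm.

```python
# Illustrative sketch only: one way a "too many friends, too fast" heuristic
# might work. The window, threshold, and class are assumptions, not
# Facebook's actual spam-recognition algorithm.
from collections import deque
import time

WINDOW_SECONDS = 3600          # assumed: look at the last hour of activity
MAX_REQUESTS_PER_WINDOW = 50   # assumed warning threshold

class FriendRequestMonitor:
    def __init__(self):
        self.timestamps = deque()

    def record_request(self, now=None) -> bool:
        """Record a friend request; return True if the account should be warned."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Drop requests that fall outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_REQUESTS_PER_WINDOW
```

Whatever the real threshold is, a check sensitive enough to catch spammers will also flag activists who are organizing quickly, which is exactly the behavior pattern described above.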
Nevertheless, the fact remains that the concept of “community reporting,” on which Facebook heavily relies, is inherently problematic, particularly for high-profile and activist users of the site. Whereas an average user of Facebook might get away with breaking the rules, a high-profile user is not only more likely to be reported (by sheer virtue of that profile) but may in fact be the target of an intentional campaign to get him or her banned from the site. Such campaigns have been well documented; in one instance, a Facebook group was set up for the sole purpose of inciting its members to report Arab atheist groups for violating the site’s policies, a strategy that succeeded in taking at least one such group down. Similar campaigns have been noted in other contexts.
The problem is also apparent in the context of Facebook’s “real name” rule. Chinese journalist Michael Anti, whose “real” name is Jing Zhao, found himself banned from the platform in 2011 after being reported for violating the policy. Although Anti has used his English name for more than ten years, including as a writer for the New York Times, he was nonetheless barred from using it on Facebook. At the time, meanwhile, more than 500 accounts were documented under the name “Santa Claus.”
Though these contradictions still exist, it’s clear that Facebook is working to improve both its policies and its processes. After all, it was only a short time ago that users violating the site’s terms of service were met with account deletion and a terse message stating that “the decision was final.” Now, users receive warnings, guidance on how to modify their behavior, and an opportunity to appeal, all significant improvements. Facebook also recently joined the Global Network Initiative as an observer, a step that will hopefully push the company toward greater transparency and accountability.
As Facebook grows, monopolizing more and more of the social media landscape, its methods of content moderation will become increasingly difficult to scale. The company risks alienating users from its community, and it may want to consider loosening some of its policies lest enforcement become untenable.