The mainstream media has paid a lot more attention to abuse and harassment on Twitter lately, including a recent story by Lindy West on This American Life about her experience confronting an especially vitriolic troll. She isn’t alone—and it appears that for the company at least, the number of Twitter users speaking out about harassment has reached critical mass. In an internal memo obtained by The Verge earlier this month, Twitter CEO Dick Costolo acknowledged Twitter's troubled history with harassment, writing:
We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. It's no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day. I'm frankly ashamed of how poorly we've dealt with this issue during my tenure as CEO. It's absurd. There's no excuse for it. I take full responsibility for not being more aggressive on this front. It's nobody else's fault but mine, and it's embarrassing.
We're glad to see Twitter taking abuse [1] seriously. In the past, Twitter's senior executives have been reluctant to engage critics on the harassment issue. Mr. Costolo is right. Abuse has (understandably) driven some users off of Twitter and other platforms. And as a digital civil liberties organization whose concerns include both privacy and free speech, we have some thoughts about what Twitter should and shouldn't do to combat its abuse problem.
Clearer Policies, Better Tools
Twitter's rules concerning abusive behavior appear, at first glance, to be straightforward:
Users may not make direct, specific threats of violence against others, including threats against a person or group on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability. Targeted abuse or harassment is also a violation of the Twitter Rules and Terms of Service.
In truth, Twitter's abuse policies are open to interpretation. And the way Twitter does interpret them can be difficult for outsiders to understand—which perhaps explains complaints from users that they seem to be enforced inconsistently. Users have argued that abuse reports often disappear without response, or take months to resolve. On the other hand, it appears that sometimes tweets are removed or accounts are suspended without any explanation to the user of the applicable policy or reference to the offending content.
Twitter could work to address these criticisms by bringing more transparency and accountability to its abuse practices. We understand why the company would want to avoid bringing more attention to controversial or offensive content. But Twitter could expand its transparency report to include information about content takedowns and account suspensions related to abuse complaints, such as the number of complaints, the type of complaint, whether or not the complaint resulted in the takedown of content or the suspension of an account, and so on. Transparency can be a rocky road, and can draw attention to the flaws in a company's approach as well as highlight its successes. But if Twitter wants a better process, and not just less criticism, transparency is a powerful feedback loop that can improve responsiveness and user trust.
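To make this concrete, an expanded report might aggregate abuse-related actions in a structure like the following. This is purely our own illustration: the field names, categories, and zeroed-out values are hypothetical, not an actual Twitter reporting format.

```python
# A hypothetical shape for one quarter's abuse-related entries in an
# expanded transparency report. All names and values are illustrative.
abuse_report_entry = {
    "period": "2015-Q1",
    "complaints_received": 0,          # total abuse reports filed
    "complaints_by_type": {
        "violent_threats": 0,
        "targeted_harassment": 0,
        "exposure_of_private_info": 0,
    },
    "tweets_removed": 0,               # complaints resulting in takedowns
    "accounts_suspended": 0,           # complaints resulting in suspensions
    "complaints_dismissed": 0,         # complaints closed with no action
}
```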
Twitter could also give users better options for controlling the type of content they see on the service in the first place. Several proposals along these lines have been enumerated by Danilo Campos, including the option to block accounts that are less than 30 days old, to block all users with a low follower count, to block keywords in @ replies, and to share lists of blocked users with friends. Not all of these solutions need to start from scratch, either. Applications such as Block Together, which grants many of these options to Twitter users, already exist. But Twitter could and should build these abilities directly into its web and mobile interfaces; a sketch of how such filters might work appears below.
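Here is a minimal sketch of the kind of filtering logic these proposals describe, written against the shape of a Twitter API v1.1 status object. The thresholds and keyword list are hypothetical, and Block Together's actual implementation may differ.

```python
from datetime import datetime, timezone

# Hypothetical user-chosen settings; these are illustrative values,
# not actual Twitter or Block Together defaults.
MIN_ACCOUNT_AGE_DAYS = 30
MIN_FOLLOWER_COUNT = 10
BLOCKED_KEYWORDS = {"example-slur", "example-threat"}  # placeholder terms

def should_filter_mention(tweet):
    """Return True if an @ reply should be hidden, per the kinds of
    heuristics Danilo Campos proposed: account age, follower count,
    and keyword filters. `tweet` is assumed to be a dict shaped like
    a Twitter API v1.1 status object."""
    user = tweet["user"]

    # Filter accounts less than 30 days old.
    created = datetime.strptime(user["created_at"],
                                "%a %b %d %H:%M:%S %z %Y")
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days < MIN_ACCOUNT_AGE_DAYS:
        return True

    # Filter accounts with very few followers.
    if user["followers_count"] < MIN_FOLLOWER_COUNT:
        return True

    # Filter mentions containing blocked keywords.
    text = tweet["text"].lower()
    if any(word in text for word in BLOCKED_KEYWORDS):
        return True

    return False
```

The point of the sketch is that each filter is an opt-in choice made by the recipient, not a platform-wide takedown, which is what distinguishes these tools from the enforcement actions discussed below.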
Similarly, Twitter could make it easier to control which notifications users see. Currently, Twitter's web interface lets users easily filter notifications to show either all notifications or only notifications from people they follow. The mobile interface and app, however, bury this capability deep in their menus. Users facing harassment should be treated as first-class users of the platform, not as a special case. That means features meant to help them should be easy to access.
Twitter's Role in Policing Content
Handling abuse complaints individually is an enormous challenge for a global corporation like Twitter. It's unclear how Twitter can scale the responses users expect and deserve when it is dealing with thousands of cases per week, across hundreds of languages and millions of users. That's one reason why we support seeking solutions to harassment that don't rely on centralized platforms. If Twitter wants to do any policing of its platform, it will need the human touch, not just an automated algorithm. Doing that requires well-staffed abuse teams with the resources and tools to be responsive, examine complaints in context, and clearly explain to the affected user how every takedown or suspension follows from Twitter's policies.
Those abuse teams will need to understand the context from which their users speak. Research in the United States shows that women, African-American users, and Hispanic users report disproportionate levels of harassment. The pattern that certain groups, whether ethnic, economic, political, or gendered, receive higher levels of abuse online appears to hold true globally. Companies like Twitter have already publicly recognized the importance of diversity in their workplaces. Diversity on Twitter's abuse teams could provide the context necessary to understand and respond appropriately to harassment reports. Until that happens, mistakes are going to be made, and we worry that those mistakes will disproportionately affect the very groups being targeted.
What Twitter Shouldn't Do
There's plenty there for Twitter to consider in tackling its abuse problem head-on, but there are also a lot of ways in which these efforts can go wrong. Looking a little further down in Costolo's memo, he writes:
We're going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.
This is a dangerous sentiment. Kicking people off left and right is exactly the opposite of the kind of contextual, nuanced examination of complaints that Twitter needs to do if it intends to suspend accounts or take down content. And it's an attitude that will inevitably lead to poor decisions. Mr. Costolo's conflation of trolls and abuse is one indicator of this. While some trolls may also be abusers, as we've noted before, ugly or offensive speech doesn't always rise to the level of harassment.
Solutions that empower users, rather than solutions that seek to bury the platform's problem by swiftly ejecting and silencing users, are more scalable and less arbitrary in the long term. We're glad to see evidence that Twitter is planning to get aggressive about the challenge of harassment, but we hope that aggression doesn't come at the expense of being smart.
[1] We see abuse as synonymous with our definition of harassment: extreme levels of targeted hostility; exposure of private lives; violent, personalized imagery; and threats of violence.