Every year or so, a new Facebook scandal emerges. These blowups follow a fairly standard pattern, at least in the U.S. First, new information is revealed that the company misled users about an element of the platform—data sharing and data privacy, extremist content, ad revenue, responses to abuse—the list goes on. Next, following a painful news cycle for the company, Mark Zuckerberg puts on a sober presentation for Congress about the value that Facebook provides to its users and the work the company has already done to resolve the issue. Finally, there is finger-wagging, political jockeying, and within a month or two, a curious thing happens: Congress does nothing.
It’s not for lack of trying, of course—much like Facebook, Congress is a many-headed beast, and its members rarely agree on the specific problems besetting American life, let alone the solutions. But this year may be different.
For the last month, Facebook has been at the center of a lengthy, damaging news cycle brought on by the release of thousands of pages of leaked documents, sent to both Congress and news outlets by former Facebook data scientist Frances Haugen. The documents show the company struggling internally with the negative impacts of both Facebook and Instagram, its former rival turned subsidiary. (Facebook’s attempt to rebrand as Meta should not distract from the takeaways of these documents, so we will continue to call the company Facebook here.)
In addition to internal research and draft presentations released several weeks ago, thousands of new documents were released last week, including memos, chats, and emails. These documents paint a picture of a company that is seriously grappling with (and often failing in) its responsibility as the largest social media platform. In no particular order, the documents show that:
- The size of Facebook’s corporate bureaucracy leaves employees with competing priorities and overlapping responsibilities, as the “left hand” often doesn’t know what the “right hand” is doing (or is incentivized to do).
- The company continues to struggle with effective content moderation, due to the scale of the user base, fear of political blowback, piecemeal enforcement, lack of local cultural and language expertise, and internal programs like “cross-check” that classify some users differently from others.
- Recommendation engines continue to direct users to extreme and extremist content, especially in countries like India, Iraq, and Ethiopia, where little effort has been made to understand and appropriately address local issues.
- Changes to the news feed intended to improve “meaningful social interactions” have generally failed, increasing both misinformation and polarization.
- Internal research sometimes points in a different direction than the company publicly admits, as in the case of Instagram’s effect on young women, and decisions are sometimes made that appear to conflict with that research.
- The company has a growth-at-all-costs mindset that dangerously prioritizes engagement and increasing time spent on the platform over everything else.
- Employee pushback to internal decisions is often significant and frequently ignored.
Many of the problems highlighted by these documents are not particularly new. People looking in at the black box of Facebook’s decision-making have long come to similar conclusions in several areas; those conclusions have now simply been proven. Regardless, we may finally be at a tipping point.
When Mark Zuckerberg went in front of Congress to address his company’s role in the Cambridge Analytica scandal over three years ago, America’s lawmakers seemed to have trouble agreeing on basic things like how the company’s business model worked, not to mention the underlying causes of its issues or how to fix them. But since then, policymakers and politicians have had time to educate themselves. Several more hearings, addressing the problems with Big Tech writ large and with Facebook in particular, have helped the government develop a better shared understanding of how the behemoth operates; as a result, several pieces of legislation have been proposed to rein it in.
Now, the Facebook Papers have once again thrust the company into the center of public discourse, and the scale of the company’s problems has captured the attention of both news outlets and Congress. That’s good—it’s high time to turn public outrage into meaningful action that will rein in the company.
But it’s equally important that the solutions be tailored, carefully, to solve the actual issues that need to be addressed. No one would be happy with legislation that ends up benefiting Facebook while making it more difficult for competing platforms to survive. For example, Facebook has been heavily promoting changes to Section 230 that would, by and large, harm small platforms while helping the behemoth.
Here’s where EFF believes Congress and the U.S. government could make a serious impact:
Break Down the Walls
Much of the damage Facebook does is a function of its size. Other social media sites that aren’t attempting to scale across the entire planet run into fewer localization problems, can be more thoughtful about content moderation, and have, frankly, a smaller impact on the world. We need more options. Interoperability will help us get there.
Interoperability is the simple idea that new services should be able to plug into dominant ones. An interoperable Facebook would mean that you wouldn’t have to choose between leaving Facebook and continuing to socialize with the friends, communities and customers you have there. Today, if you want to leave Facebook, you need to leave your social connections behind as well: that means no more DMs from your friend, no more access to your sibling’s photos, and no more event invitations from your co-workers. In order for a new social network to get traction, whole social groups have to decide to switch at the same time - a virtually insurmountable barrier. But if Facebook were to support rich interoperability, users on alternative services could communicate with users on Facebook. Leaving Facebook wouldn’t mean leaving your personal network. You could choose a service - run by a rival, a startup, a co-op, a nonprofit, or just some friends - and it would let you continue to connect with content and people on Facebook, while enforcing its own moderation and privacy policies.
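This kind of interoperability isn’t science fiction; open protocols that do exactly this already exist. As a rough illustration (the account URL below is made up, and this is a minimal sketch rather than a proposal), here is how any program can fetch a user’s public profile over ActivityPub, the W3C-standardized protocol that lets independently run services like Mastodon servers talk to each other:

```python
import requests

# Fetch a user's public ActivityPub profile ("actor") from any compliant
# server. No permission from a single gatekeeper company is required --
# the protocol itself is the interface. (The URL is hypothetical.)
actor_url = "https://example.social/users/alice"
response = requests.get(
    actor_url,
    headers={"Accept": "application/activity+json"},
)
response.raise_for_status()
actor = response.json()

# Standard ActivityPub fields: a display name and an "inbox" endpoint
# where other servers can deliver posts, replies, and follow requests.
print(actor.get("name"))
print(actor.get("inbox"))
```

If Facebook exposed its social graph through an open standard like this, a rival service could exchange posts and messages with people still on Facebook, which is exactly the dynamic described above.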
Critics often argue that in an interoperable world, Facebook would have less power to deny bad actors access to our data, and thus defend us from creeps like Cambridge Analytica. But Facebook has already failed to defend us from them. When Facebook does take action against third-party spying on its platform, it’s only because that happens to be in its interests: either as a way to quell massive public outcry, or as a convenient excuse to undermine legitimate competition. Meanwhile, Facebook continues to make billions from its own exploitation of our data. Instead of putting our trust in corporate privacy policies, we’d need a democratically accountable privacy law, with a private right of action. And any new policies which promote interoperability should come with built-in safeguards against the abuse of user data.
Interoperability isn’t an alternative to demanding better of Facebook - better moderation, more transparency, better privacy rules - rather, it’s an immediate, tangible way of helping Facebook’s users escape from its walled garden right now. Not only does that make those users’ lives better - it also makes it more likely that Facebook will obey whatever rules come next, not just because those are the rules, but because when it breaks them, its users can easily leave.
Facebook knows this. It’s been waging a “secret war on switching costs” for years now. Legislation like the ACCESS Act, which would force platforms like Facebook to open up, is a positive step toward a more interoperable future. If a user wants to view Facebook through a third-party app that allows for better searching or more privacy, they ought to be able to do so. If they want to take their data to platforms that have better privacy protections, without leaving their friends and social connections behind, they ought to be able to do that too.
Pass a Strong Baseline Privacy Law
Users deserve meaningful controls over how the data they provide to companies is collected, used, and shared. Facebook and other tech companies too often choose their profits over your privacy, opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of Facebook’s harms. Facebook’s core business model depends on collecting as much information about users as possible, then using that data to target ads - and target competitors. Meanwhile, Facebook (and Google) have created an ecosystem where other companies - from competing advertisers to independent publishers - feel as if they have no choice but to spy on their own users, or help Facebook do so, in order to eke out revenue in the shadow of the monopolists.
Stronger baseline federal privacy laws would help steer companies like Facebook away from collecting so much of our data. They would also level the playing field, so that Facebook and Google cannot use their unrivaled access to our information as a competitive advantage. A strong privacy law should require real opt-in consent to collect personal data and prevent companies from re-using that data for secondary purposes. To let users enforce their rights, it must include a private cause of action that allows users to take companies to court if they break the law. This would tip the balance of power away from the monopolists and back towards users. Ultimately, a well-structured baseline could put a big dent in the surveillance business model that not only powers Facebook, but enables so many of the worst harms of the tech ecosystem as well.
Break Up the Tech
Facebook’s broken system is fueled by a growth-at-any-cost model, as indicated by some of the testimony Haugen delivered to Congress. The size of Facebook’s user base, and the increasing depth of the data it gathers about those users, are its biggest selling points. In other words, Facebook’s badness is inextricably tied to its bigness.
We’re pleased to see antitrust cases against Facebook. Requiring Facebook to divest Instagram, WhatsApp, and possibly other acquisitions, and limiting the company’s future mergers and acquisitions, would go a long way toward solving some of the problems with the company, and inject competition into a field where it’s been stifled for many years now. Legislation to facilitate such breakups has already been approved by the House Judiciary Committee and awaits action on the House floor.
Shine a Light On the Problems
Some of the most detailed documents that have been released so far show research done by various teams at Facebook. And despite being done by Facebook itself, much of that research reaches conclusions critical of Facebook’s own services.
For example: a large percentage of users report seeing content on Facebook that they consider disturbing or hateful—a situation that the researcher notes “needs to change.” Research also showed that some young female Instagram users report that the platform makes them feel bad about themselves.
But one of the problems with documents like these is that it’s impossible to know what we don’t know—we’re getting reports piecemeal, and have no idea what practical responses might have been offered or tested. Also, some of the research might not mean what a first glance would suggest, whether because of ordinary methodological limitations or because of the sheer ubiquity of the platform itself.
EFF has been critical of Facebook’s lack of transparency for a very long time. When it comes to content moderation, for example, the company’s transparency reports lack many of the basics: how many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content in which categories get removed, but does not tell us why or how those decisions are made.
Transparency about decisions has increased in some ways, such as through the Facebook Oversight Board’s public decisions. But revelations from the whistleblower documents about the company’s “cross-check” program, which gives some “VIP” users a near-blanket ability to ignore the community standards, make it clear that the company has a long way to go. Facebook should start by embracing the Santa Clara Principles on Transparency and Accountability in Content Moderation, which offer companies a baseline for disclosing how and why they moderate user speech.
But content moderation is just the start. Facebook is constantly talking out of both sides of its depressingly large mouth—most recently by announcing it would delete the face recognition templates of its users, then backing away from that commitment for its future ventures. Given how two-faced the company has, frankly, always been, transparency is an important step toward ensuring we have real insight into the platform. Facebook must make it easier for researchers, both inside and outside the company, to engage in independent analysis.
Look Outside the U.S.
Facebook must do more to respect its global user base. Facebook—the platform—is available in over 100 languages, but the company has only translated its community standards into around 50 of those (as of this writing). How can a company expect to enforce its moderation rules properly when they are written in languages, or dialects, that its users can’t read?
The company also must ensure that its employees, and in particular its content moderators, have cultural competence and local expertise; without that, it is impossible for them to moderate content appropriately. But first, it has to actually employ people with that expertise. It’s no wonder that the company has tended to play catch-up when crises arise outside of America (where it also isn’t exactly ahead of the game).
And by the way: it’s profoundly disappointing that the Facebook Papers were released only to Western media outlets. We know that many of the documents contain information about how Facebook conducts business globally—and particularly how the company fails to put appropriate resources behind its policymaking and content moderation practices in different parts of the world. Giving access to trusted international publications that have the experience and expertise to provide nuanced, accurate analysis and perspective is a vital step in the process—after all, the majority of Facebook’s users worldwide live outside of the United States and Europe.
Don’t Give In To Easy Answers
Facebook is big, but it’s not the internet. More than a billion websites exist; tens of thousands of platforms allow users to connect with one another. Any solutions Congress proposes must remember this. Though Zuckerberg may “want every other company in our industry to make the investments and achieve the results that [Facebook has],” forcing everyone else to play by their rules won’t get us to a workable online future. We can’t fix the internet with legislation that pulls the ladder up behind Facebook, leaving everyone else below.
For example: legislation that forces sites to limit recommended content could have disastrous consequences, given how commonly sites make (often helpful) choices about the information we see when we browse, from restaurant recommendations to driving directions to search results. And forcing companies to rethink their algorithms, or offer “no algorithm” versions, may seem like a quick fix for a site like Facebook. But the devil is in the details, and in how those details get applied to the entire online ecosystem.
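Part of the difficulty is that even a “no algorithm” feed is itself an algorithmic choice. The toy Python sketch below (the field names and scores are illustrative inventions, not Facebook’s actual ranking system) shows how thin the line is that any such legislation would have to draw:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # a platform-assigned score, however derived

def chronological_feed(posts):
    # The "no algorithm" option: newest first. This is still a ranking
    # rule, just one that optimizes for recency instead of engagement.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts):
    # What large platforms typically ship: whatever a model predicts you
    # are most likely to click, like, or share comes first.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

now = datetime.now()
posts = [
    Post("a friend", now - timedelta(hours=1), predicted_engagement=0.2),
    Post("a page you never followed", now - timedelta(days=2), predicted_engagement=0.9),
]

# The same two posts appear in opposite orders under the two rules.
print([p.author for p in chronological_feed(posts)])      # friend first
print([p.author for p in engagement_ranked_feed(posts)])  # page first
```

A law that bans one of these two functions but not the other has to define, precisely, what separates them, and that definition will inevitably touch countless other services that sort and recommend for benign reasons.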
Facebook, for its part, seems interested in easy fixes as well. Rebranding as “Meta” amounts to a drunk driver switching cars. Gimmicks designed to attract younger users to combat its aging user base are a poor substitute for thinking about why those users refuse to use the platform in the first place.
Zuckerberg has gotten very wealthy while wringing his hands every year or two and saying, “Sorry. I’m sorry. I’m trying to fix it.” Facebook’s terrible, no good, very bad news cycle is happening at the same time that the company reported a $9 billion profit for the quarter.
Zuckerberg insists this is not the Facebook he wanted to create. But he’s had nearly two decades of more-or-less absolute power to make the company into whatever he most desired, and this is where it’s ended up—despised, dangerous, and immensely profitable. Given that track record, it’s only reasonable to discount his suggestions in any serious consideration of how to get out of this mess.
Nor should we expect policymakers to do much better unless and until they start listening to a wider array of voices. While the leaks have been driving the narrative about where the company is failing its users, there are plenty of other issues that aren’t grabbing headlines—like the fact that Facebook continues collecting data on deactivated accounts. A focused and thoughtful effort by Congress must include the policy experts who have been studying these problems for years.
The Facebook leaks should be the starting point—not the end—of a sincere policy debate over concrete approaches that will make the internet—not just Facebook—better for everyone.