Next time you hear someone blame Section 230 for a problem with social media platforms, ask yourself two questions: first, was this problem actually caused by Section 230? Second, would weakening Section 230 solve it? Politicians and commentators on both sides of the aisle frequently blame Section 230 for big tech companies’ failures, but their reform proposals wouldn’t actually address the problems they attribute to Big Tech. If lawmakers are concerned about large social media platforms’ outsized influence on online speech, they ought to confront the lack of meaningful competition among those platforms and the ways those platforms fail to let users control, or even see, how their data is used. Undermining Section 230 won’t fix Twitter and Facebook; in fact, it risks making matters worse by further insulating the big players from competition and disruption.
Section 230 says that if you break the law online, you should be the one held responsible, not the website, app, or forum where you said it. Similarly, if you forward an email or retweet a tweet, Section 230 protects you if that material is later found unlawful. The law has some exceptions, most notably that it doesn’t shield platforms from liability under federal criminal law, but at its heart, Section 230 is just common sense: you should be held responsible for your speech online, not the platform that hosted it or any other third party.
Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions. Social media wouldn’t exist—at least in its current form—and neither would important educational and cultural platforms like Wikipedia and the Internet Archive. The legal risk associated with operating such a service would deter any entrepreneur from starting one, let alone a nonprofit.
As commentators of all political stripes have trained their ire on large Internet companies, it’s become fashionable to blame Section 230 for those companies’ failings. But Section 230 isn’t why five companies dominate the market for online speech, or why the marketing and behavioral-analysis decisions that guide Big Tech’s practices are so often opaque to users.
The Problem with Social Media Isn’t Politics; It’s Power
A recent Congressional hearing with the heads of Facebook, Twitter, and Google demonstrated the highly politicized nature of today’s criticisms of Big Tech. Republicans scolded the companies for “censoring” and fact-checking conservative speakers while Democrats demanded that they do more to curb misleading and harmful statements.
There’s a nugget of truth in both parties’ criticisms: it’s a problem that just a few tech companies wield immense control over which speakers and messages are allowed online. It’s a problem that those same companies fail to enforce their own policies consistently or offer users a meaningful opportunity to appeal bad moderation decisions. And there’s little hope of a competitor with fairer moderation practices taking hold, given the big players’ habit of acquiring would-be competitors before they can ever threaten the status quo.
Unfortunately, trying to legislate that platforms moderate “neutrally” would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry for new platforms. Can a platform filter out spam while still maintaining its “neutrality”? What if that spam has a political message? Twitter and Facebook would have the large legal budgets and financial cushions to litigate those questions, but smaller platforms wouldn’t.
Likewise, if Twitter and Facebook faced serious competition, then the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn’t have nearly the influence they have today on online discourse. If there were twenty major social media platforms, then the decisions that any one of them makes to host, remove, or factcheck the latest misleading post about the election results wouldn’t have the same effect on the public discourse. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.
The hearing showed Congress’ shortsightedness when it comes to regulating large Internet companies. In their drive to use the hearing for political ends, both parties ignored the factors that gave Twitter, Facebook, and Google their outsized power, as well as the remedies that could bring competition and choice to the social media space.
Ironically, though calls to reform Section 230 are frequently motivated by disappointment in Big Tech’s speech moderation policies, recent history shows that weakening Section 230 further would make it more difficult for new entrants to compete with Facebook or Twitter. It shouldn’t escape our attention that Facebook was one of the first tech companies to endorse SESTA/FOSTA, the 2018 law that significantly undermined Section 230’s protections for free speech online, or that Facebook is now leading the charge for further changes to Section 230. Any law that makes it more difficult for a platform to maintain Section 230’s liability shield will also make it more difficult for new startups to compete with Big Tech. (Just weeks after SESTA/FOSTA passed and put multiple dating sites out of business, Facebook announced that it was entering the online dating market.) We shouldn’t be surprised that Facebook has joined Section 230’s critics: it has the most to gain from gutting the law.
Remember, speech moderation at scale is hard. It’s one thing for platforms to come to a decision about how to handle divisive posts by a few public figures; it’s quite another for them to write rules affecting everyone’s speech and enforce them consistently and transparently. When platforms err on the side of censorship, marginalized communities are silenced disproportionately. Congress should not try to pass laws dictating how Internet companies must moderate their platforms. Such laws would not survive Constitutional scrutiny, would further close off the social media market to new entrants, and would almost certainly result in innocent people being censored unfairly.
Then How Should Congress Keep Platforms in Check? Some Ideas You Won’t Hear from Big Tech
While large tech companies might clamor for regulations that would hamstring their competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today. That’s why EFF recommends that Congress update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion. Before the government approves a merger, the companies should have to prove that the merger would not increase their monopoly power or unduly harm competition.
But even updating antitrust policy is not enough: big tech companies will stop at nothing to protect their black box of behavioral targeting from even a shred of transparency. Facebook recently demonstrated this when it threatened the Ad Observatory, an NYU project to shed light on how the platform was showing different political advertising messages to different segments of its user base. Major social media platforms’ business models thrive on keeping users in the dark about what information the platforms collect and how it’s used. Decisions about what material (including advertising) to deliver to users are driven by a web of inferences about them, inferences that users usually can’t even see, let alone correct.
Because of the link between social media’s speech moderation policies and its irresponsible management of user data, Congress can’t improve Big Tech’s practices without addressing its surveillance-based business models. And although large tech companies have endorsed changes to Section 230 and may endorse further changes to Section 230 in the future, they will probably never endorse real, comprehensive privacy-protective legislation.
Any federal privacy bill must have a private right of action: if a company breaks the law and infringes on our privacy rights, it’s not enough to put a government agency in charge of enforcing the law. Users should have the right to sue the companies, and it should be impossible to sign away that right in a terms-of-service agreement. The law must also forbid companies from charging users for their privacy: everyone must enjoy the same privacy rights regardless of what we’re paying, or being paid, for the service.
The recent fights over the California Consumer Privacy Act offer a useful example of how tech companies can pay lip service to the idea of privacy-protecting legislation while actually insulating themselves from it. After the law passed in 2018, the Internet Association, a trade group representing Big Tech powerhouses like Facebook, Twitter, and Google, spent nearly $176,000 lobbying the California legislature to weaken the law. Most damningly, the IA pushed a bill that would have exempted surveillance-based advertising from the law’s consumer protections. That’s right: big tech companies tried to write into law a carve-out for the very invasive advertising practices that helped cement their dominance in the first place. That the Internet Association and its members have fought tooth and nail to stop privacy-protective legislation while lobbying for bills that undermine Section 230 says all you need to know about which type of regulation they see as the greater threat to their bottom line.
Section 230 has become a hot topic for politicians and commentators on both sides of the aisle. Whether it’s Republicans accusing Big Tech of censoring conservatives or Democrats alleging that online platforms don’t do enough to fight harmful speech, both sides seem increasingly convinced that they can change Big Tech’s social media practices by undermining Section 230. But history has shown that making it more difficult for platforms to maintain Section 230 protections will only further insulate a few large tech companies from meaningful competition. If Congress wants to keep Big Tech in check, it must address the real problems head-on, passing legislation that brings competition to Internet platforms and curbs the unchecked, opaque user data practices at the heart of social media’s business models.
You’ll never hear Big Tech advocate that.