Mandatory Filtering Proposals Curb Competition
When looking at a proposed policy regulating Internet businesses, here’s a good question to ask yourself: would this bar new companies from competing with the current big players? Google will probably be fine, but what about the next Google?
In the past few years, some large movie studios and record labels have been promoting a proposal that would effectively require user-generated media platforms to use copyright bots similar to YouTube’s infamous Content ID system. Today’s YouTube would have no trouble complying, but imagine if such requirements had been in place when YouTube was a three-person company. If copyright bots become the law, the barrier to entry for new social media companies will get a lot higher.
A Brief History of Copyright Bots
In many ways, the history of copyright bots is really the history of Content ID. Content ID was not the first bot on the market, but it’s the template for what major film studios and record labels have come to expect of content platforms.
When Google acquired YouTube in 2006, the platform was under heavy fire from major film studios and record labels, which complained in court and in Congress that the platform enabled widespread copyright infringement. YouTube complied with all of the requirements that the Digital Millennium Copyright Act (DMCA) puts on content platforms—including following the notice-and-takedown procedure when rights holders accuse their users of infringement. The DMCA essentially offers content platforms a trade—if they do their part to tackle infringing activity, they’re sheltered from copyright liability under the DMCA safe harbor rules. Hollywood agreed to those rules back in 1998, but now it wanted to rewrite the deal.
In response to legal and commercial pressure from content industries, Google developed Content ID, a program that goes beyond YouTube’s DMCA obligations. Content ID doesn’t replace notice-and-takedown; it creates a system for proactive filtering that often lets rights holders remove allegedly infringing content without even having to send a DMCA takedown request.
Rights holders submit large databases of video and audio fingerprints, and YouTube patrols new uploads for closely matching content. Rights holders can choose to have YouTube automatically remove or monetize videos, or they can review them manually and decide what they want YouTube to do with them. There’s a built-in appeals process (which includes escalation to a DMCA takedown, with the fair use consideration the DMCA requires), but it has problems of its own.
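The workflow described above can be sketched in a few lines of code. This is a hypothetical illustration only: the class and function names are invented, and real systems like Content ID use perceptual fingerprints that survive re-encoding, not the exact hashes used here.

```python
# Hypothetical sketch of a fingerprint-matching copyright filter.
# Names and the fingerprinting scheme are illustrative assumptions,
# not YouTube's actual Content ID implementation.
import hashlib

def fingerprint(chunk: bytes) -> str:
    """Reduce a media chunk to a short fingerprint (here, a plain hash)."""
    return hashlib.sha256(chunk).hexdigest()[:16]

class RightsDatabase:
    """Fingerprints submitted by rights holders, mapped to a chosen policy."""
    def __init__(self):
        self._claims = {}  # fingerprint -> (owner, policy)

    def register(self, media: list, owner: str, policy: str):
        """A rights holder submits reference media and picks a policy
        ("remove", "monetize", or "review")."""
        for chunk in media:
            self._claims[fingerprint(chunk)] = (owner, policy)

    def check_upload(self, media: list) -> list:
        """Patrol a new upload: return any claims matching its chunks."""
        matches = []
        for chunk in media:
            claim = self._claims.get(fingerprint(chunk))
            if claim:
                matches.append(claim)
        return matches

db = RightsDatabase()
db.register([b"song-audio-chunk"], owner="Label Records", policy="monetize")

# An upload that reuses the registered audio triggers the owner's policy.
print(db.check_upload([b"cat-video-chunk", b"song-audio-chunk"]))
```

Note that exact-hash matching only catches byte-identical copies; the point of the sketch is the workflow (reference database, upload scanning, policy dispatch), which is also what makes building such a system expensive for a small platform.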
For better or worse, Content ID changed YouTube. It bought the company some goodwill with big content owners, many of which have since become prolific YouTube users themselves.
Writing Bots into the Law
But the success of Content ID has led some rights holders to the dangerous notion that filtering alone can end the copyright wars. Now, copyright bots have begun to show up all over the Internet—often in places where they make no sense, like your private videos on Facebook. And it appears that some major content owners won’t be satisfied until web platforms have no choice but to adopt systems like Content ID, turning a voluntary system into a mandate.
Over the past few years, lobbyists representing large content owners both in the U.S. and in Europe have begun to demand mandatory filtering. These proposals vary, but their goals are the same: a world where social media platforms are vulnerable to massive copyright infringement damages unless they go to extreme measures to police their members’ uploads for potential infringement. The Chinese government has gone all-in on copyright filtering, partnering with Hollywood to scan not just people’s social media posts but even their private devices.
For the record, copyright bots can raise major problems even when they aren’t compelled by law. In principle, bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow many instances of fair use. What’s more, their appeals and conflict-resolution systems are often completely opaque to users and seem designed to favor large content companies.
Still, there’s a world of difference between platforms implementing copyright bots as a business decision and being forced to do so by governments. The latter creates a huge, expensive hurdle for a company to clear before it can ever compete in the market.
Narrow Regulations and Broad Patents
It gets worse. When companies are given only narrow space in which to compete and innovate, it becomes easier for incumbents to set legal traps within those boundaries.
Microsoft was recently issued a patent called “Disabling prohibited content and identifying repeat offenders in service provider storage systems.” It’s a patent on copyright bots, and the Patent Office issued it even though its claims were far from novel: Microsoft didn’t file it until 2013, a full six years after Google introduced Content ID. We don’t know what Microsoft plans to do with its patent, but we do know that patents this broad can wreak havoc on a marketplace, casting doubt over standard and obvious business practices. And with both Hollywood and governments pressuring content platforms to implement filtering, it’s easy to imagine a time when a broad patent like Microsoft’s would apply by definition to essentially every platform that tried to enter the market.
It might be tempting to think that software patents on copyright filtering will incentivize innovation in filtering, thus making copyright bots more accessible to small platforms. But a patent as broad and generic as Microsoft's risks cutting off innovation well short of that goal: overbroad patents blanket an entire field, rarely disclosing any information of value about the underlying technology.
Business regulations should provide companies wide berth to innovate, experiment, and differentiate themselves from competitors. Patents should cover specific, narrowly defined inventions. Narrow regulations and broad patents are a dangerous combination.
Keep Safe Harbors Safe
Safe harbor protections are essential to how today’s Internet works—without them, many Internet companies would simply be exposed to too much legal risk to operate. Safe harbors have given us the entire social media boom and many other Internet technologies that we take for granted every day.
So any proposal that makes it more burdensome to comply with safe harbor requirements should be examined closely to make sure that it doesn’t close the market to new competitors. Mandatory copyright filtering is likely to do exactly that.
If the kind of laws big media companies are proposing today had been in place 12 years ago, it’s doubtful that YouTube could have survived its early days as a startup. And if those laws get implemented today, new players will need tremendous resources just to get started. Mandatory filtering would create a narrower playing field for Internet businesses and let the most successful players use legal tricks to maintain their advantages. It’s a bad idea.