In politics, as with Internet memes, ideas don't spread because they are good—they spread because they are good at spreading. One of the most virulent ideas in Internet regulation in recent years has been the idea that if a social problem manifests on the Web, the best thing that you can do to address that problem is to censor the Web.
It's an attractive idea because if you don't think too hard, it appears to be a political no-brainer. It allows governments to avoid addressing the underlying social problem—a long and costly process—and instead simply pass the buck to Internet providers, who can quickly make whatever content has raised hackles “go away.” Problem solved! Except, of course, that it isn't.
Amongst the difficult social problems that Web censorship is often expected to solve are terrorism, child abuse and copyright and trade mark infringement. In recent weeks some further cases of this tactic being vainly employed against such problems have emerged from the United Kingdom, France and Australia.
UK Court Orders ISPs to Block Websites for Trade Mark Infringement
In a victory for luxury brands and a loss for Internet users, the British High Court last month ordered five of the country's largest ISPs to block websites selling counterfeit goods. Alarming as this is on its own, it was merely a test case, leading the way for a reported 290,000 websites to be potentially targeted in future legal proceedings.
Do we imagine for a moment that, out of nearly 300,000 websites, none of them are false positives that actually sell non-infringing products? (If websites blocked for copyright infringement or pornography are any example, we know the answer.) Do we consider it a wise investment to tie up the justice system in blocking websites whose content could very easily be moved to a different domain within minutes?
The reason this ruling concerns us is not that we support counterfeiting of manufactured goods. It concerns us because it further normalizes the band-aid solution of content blocking, and deemphasises more permanent and effective solutions that would target those who actually produce the counterfeit or illegal products being promoted on the Web.
Britain and France Call on ISPs to Censor Extremist Content
Not content with enlisting major British ISPs as copyright and trade mark police, the UK government has also recently called upon them to block extremist content on the Web, and to provide a button that users can use to report supposed extremist material. Usual suspects Google, Facebook and Twitter have also been roped in by the government to carry out blocking of their own. Yet to date no details have been released about how these extrajudicial blocking procedures would work, or under what safeguards of transparency and accountability, if any, they would operate.
This fixation on solving terrorism by blocking websites is not limited to the United Kingdom. Across the channel in France, a new “anti-terrorism” law that EFF reported on earlier was finally passed this month. The law allows websites to be blocked if they “condone terrorism.” “Terrorism” is as slippery a concept in France as anywhere else. Indeed France's broad definition of a terrorist act has drawn criticism from Human Rights Watch for its legal imprecision.
Australian Plans to Block Copyright Infringing Sites
Finally—though, sadly, probably not—reports last week suggest that Australia will be next to follow the example of the UK and Spain in blocking websites that host or link to allegedly copyright-infringing material, following on from a July discussion paper that mooted this as a possible measure to combat copyright infringement.
How did this become the new normal? When did politicians around the world lose the will to tackle social problems head-on, and instead decide to sweep them under the rug by blocking evidence of them from the Web? It certainly isn't due to any evidence that these policies actually work. Anyone who wants to access blocked content can trivially do so, using software like Tor.
Rather, it seems that it is politically better for governments to be seen to be doing something to address such problems, no matter how token and ineffectual, than to do nothing—and website blocking is the easiest “something” they can do. But blocking is not merely ineffective, it is actively harmful: at its point of application, due to the risk of over-blocking, and for the Internet as a whole, in the legitimization that it offers to repressive regimes to censor and control content online.
Like an overused Internet meme that deserves to fade away, website blocking as a cure for society's ills is an idea whose time has passed, and courts and regulators should move on from it. If we wish to reduce political extremism, cut off the production of counterfeits, or prevent children from being abused, then we should be addressing those problems directly—rather than merely covering up the evidence and pretending they have gone away.