On Monday, the W3C announced that its Director, Tim Berners-Lee, had determined that the "playback of protected content" was in scope for the W3C HTML Working Group's new charter, overriding EFF's formal objection against its inclusion. This means the controversial Encrypted Media Extensions (EME) proposal will continue to be part of that group's work product, and may be included in the W3C's HTML5.1 standard. If EME goes through to become part of a W3C recommendation, you can expect to hear DRM vendors, DRM-locked content providers like Netflix, and browser makers like Microsoft, Opera, and Google stating that they can now offer W3C standards-compliant "content protection" for Web video.
We're deeply disappointed. We've argued before about why EME and other protected media proposals are different from other standards. By approving this idea, the W3C has ceded control of the "user agent" (the term for a Web browser in W3C parlance) to a third party, the content distributor. That breaks an assurance, perhaps until now unspoken, about who has the final say in your Web experience, and indeed who has ultimate control over your computing device.
EFF believes that's a dangerous step for an organization seen by many as the guardian of the open Web. We have made this argument many times before: in person with Tim Berners-Lee, with W3C staff members, and, along with hundreds of others, in online exchanges with the consortium's other participants.
But there's another argument that we've made more privately. It's an argument that is less about the damage that sanctioning restricted media does to users, and more about the damage it will do to the W3C.
At the W3C's Advisory Committee meeting in Tokyo, EFF spoke to many technologists working on Web standards. It's clear to us that the engineering consensus at the consortium is the same as within the Web community, which is the same almost anywhere else: that DRM is a pain to design, does little to prevent piracy, and is, by its nature, user-unfriendly. Nonetheless, many technologists have resigned themselves to believing that until the dominant rightsholders in Hollywood finally give up on it (as much of the software and music industries already have), we're stuck with implementing it.
EME, they said, was a reasonable compromise between what Hollywood's licensing contracts demand and the reality of the Web. A Web where movies are fenced away in EME's DRM-ridden binary blobs is, the W3C's pragmatists say, no worse than the current environment, where Silverlight and Flash serve the same purpose of preventing unauthorized behavior.
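To make that picture concrete, here is a rough sketch of what the EME flow looks like from a Web page's point of view, based on the published draft. The key system name and license-server URL below are hypothetical placeholders, not any real vendor's values; the point is that the page never handles keys or decrypted frames itself, it only ferries opaque messages between the browser's closed content decryption module (CDM) and a license server.

```typescript
// Minimal sketch of the EME flow. "com.example.drm" and the license URL
// are hypothetical placeholders for a real key system and license server.
async function playProtectedVideo(video: HTMLVideoElement): Promise<void> {
  // Ask the browser whether a matching Content Decryption Module exists.
  const access = await navigator.requestMediaKeySystemAccess("com.example.drm", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);

  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When encrypted media is encountered, open a key session with the CDM.
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    if (!event.initData) return;
    const session = mediaKeys.createSession();

    // The CDM emits an opaque license request; the page just forwards it.
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const response = await fetch("https://license.example.com/acquire", {
        method: "POST",
        body: msg.message,
      });
      // Hand the (equally opaque) license back to the CDM, which decrypts
      // the video internally. The page never sees keys or cleartext video.
      await session.update(await response.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData);
  });
}
```

Everything interesting happens inside the CDM, which is exactly the part no standard describes and no independent implementer can inspect or reimplement.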
We pointed out that EME would by no means be the last "protected content" proposal to be put forward for the W3C's consideration. EME is exclusively concerned with video content, because EME's primary advocate, Netflix, is still required to wrap some of its film and TV offerings in DRM as part of its legacy contracts with Hollywood. But there are plenty of other rightsholders beyond Hollywood who would like to impose controls on how their content is consumed.
Just five years ago, font companies tried to demand DRM-like standards for embedded Web fonts. Those Web typography wars fizzled out without such restrictions being adopted, but now that technical restrictions of this kind are clearly "in scope," why wouldn't typographers come back with an argument for new limits on what browsers can do?
Indeed, within a few weeks of EME hitting the headlines, a community group within the W3C formed around the idea of locking away Web code, so that Web applications could be executed but not examined online. Static image creators such as photographers are eager for the W3C to help lock down embedded images. Shortly after our Tokyo discussions, another group proposed a new W3C use case: "protecting" content that had been saved locally from a Web page, so that it could not be accessed without further restrictions. Meanwhile, publishers have for many years advocated DRM features for HTML textual content.
In our conversations with the W3C, we argued that the W3C needed to draw a clearly defined line against the wave of DRM systems it will now be encouraged to adopt.
A Web where you cannot cut and paste text; where your browser can't "Save As..." an image; where the "allowed" uses of saved files are monitored beyond the browser; where JavaScript is sealed away in opaque tombs; and maybe even where we can no longer effectively "View Source" on some sites, is a very different Web from the one we have today. It's a Web where user agents—browsers—must navigate a nest of enforced duties every time they visit a page. It's a place where the next Tim Berners-Lee or Mozilla, if they were building a new browser from scratch, couldn't just look up the details of all the "Web" technologies. They'd have to negotiate and sign compliance agreements with a raft of DRM providers just to be fully standards-compliant and interoperable.
To be clear, we don't think all of these proposals will come to fruition. We appreciate that there's no great hunger for DRM at the W3C. Many W3C participants held their noses to accept even the EME draft, which was carefully drafted to position itself as far away from the taint of DRM as was possible for a standard solely intended to be used for DRM systems.
But the W3C has now accepted "content protection". By discarding the principle that users should be in charge of their user agents, as well as the principle that all the information needed to interoperate with a standard should be open to all prospective implementers, it has opened the door for the many rightsholders who would like the same control for themselves.
The W3C is now in an unenviable position. It can either limit its "content protection" efforts to the aims of a privileged few, like Hollywood. Or it can let a thousand "content protection systems" bloom, and allow any rightsholder group to chip away at software interoperability and users' control.
EFF is still a W3C member, and we'll do our best to work with other organizations within and without the consortium to help it fight off the worst consequences of accepting DRM. But it's not easy to defend a king who has already invited his attackers across the moat.
Still, even if the W3C has made the wrong decision, that doesn't mean the Web has to follow. The W3C has parted ways with the wider Web before: in the early 2000s, its choice to promote XHTML (an unpopular and restrictive variant of HTML) as the future led Mozilla, Apple, and Opera to form the independent WHATWG. It was WHATWG's vision of a dynamic, application-oriented Web that won, so decisively that the W3C later re-adopted it and made it its own HTML5 deliverable.
Recently, WHATWG has diplomatically parted ways with the W3C again. Its "HTML Living Standard" continues to be developed in parallel with the W3C's version of the HTML standard, and does not contain EME or any other such DRM-enabling proposals.
By contrast, the W3C has now put its weight behind a restrictive future: let's call it "DRM-HTML". Others have certainly bet against open, interoperable standards and user control before. It's just surprising and disappointing to see the W3C and its Director gamble against the precedent of their own success, as well as against the fears and consciences of so many of their colleagues.