After seven years of litigation, the basic contours of the Digital Millennium Copyright Act (DMCA) safe harbors should be pretty well established. Unfortunately, a new front may have opened up in a case called Gardner v. CafePress, thanks to a dangerous misreading of Section 512.
With the invaluable assistance of Venkat Balasubramani, EFF, joined by the Center for Democracy and Technology, the Computer & Communications Industry Association, and Public Knowledge, has filed an amicus brief in that case. In our brief, we explain why the recent ruling could have profound consequences for user-generated content sites.
CafePress is a platform that allows users to set up online shops to sell custom physical goods like clothing and stationery. The lawsuit was filed by photographer Steven Gardner, whose wildlife images were included on a user's sales page. CafePress had asked the court to resolve the case as a matter of law (also called summary judgment) because it believed it was clearly protected by the DMCA's safe harbors. The court denied that request, concluding that it could not be sure that CafePress was protected by the DMCA.
Our brief explains why that was a dangerous decision for online speech and innovation. We focus on two issues in particular: (1) the court’s interpretation of the term “service provider”; and (2) the court’s suggestion that image metadata might qualify as a “standard technical measure” under the DMCA—which would mean CafePress's automated stripping of metadata from photos would jeopardize the availability of safe harbor protections. The court could have resolved these arguments in CafePress’s favor as a matter of law. By forcing the parties to go to trial on these issues, the court may undermine the purpose of the DMCA safe harbors.
On the first point, the court appears to have conflated CafePress’s online activities as a website with its offline activities as a producer of physical goods, and to have adopted a cramped definition of “service provider” that numerous courts have long since rejected.
On the second point, the court clearly misunderstood the definition of a “standard technical measure.” This point is pretty technical, but it has serious implications because service providers are required to comply with “standard technical measures” in order to enjoy the legal protections of the DMCA safe harbors.
Under DMCA § 512(i), a standard technical measure is one that is “used by copyright owners to identify or protect copyrighted works”; “has been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process”; is “available to any person on reasonable and nondiscriminatory terms”; and does not “impose substantial costs on service providers or substantial burdens on their systems or networks.”
However, no broad consensus has ever emerged around any such measure, for metadata or anything else. In fact, industry practice points the other way: service providers commonly strip metadata from uploaded images. Without a consensus standard, there can be no "standard technical measure" that a website is required to honor.
And a good thing too. From our brief:
Casting doubt on the practice of removing metadata may also put users at risk. ... Stripping metadata from uploaded images helps protect users’ privacy and security, and should not be discouraged.
But even though there is no broad industry consensus to treat image metadata as a "standard technical measure" for copyright enforcement, the court seems to have made metadata removal a ticket to trial. That's bad news.
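To make the point concrete, here is a rough sketch of what that routine metadata stripping can look like. This is purely illustrative and not drawn from CafePress's actual systems: it assumes Python with the Pillow imaging library, and the strip_metadata function name and file paths are hypothetical. The idea is simply that the published copy of an image is rebuilt from pixel data alone, so EXIF fields like GPS coordinates and camera serial numbers never reach the public page.

    # Illustrative sketch only (not any service's actual code): re-encode an
    # uploaded image from its pixel data alone, so EXIF/IPTC/XMP metadata
    # (GPS coordinates, camera serial numbers, embedded rights fields, etc.)
    # never makes it into the published copy.
    from PIL import Image

    def strip_metadata(src_path, dst_path):
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)   # fresh image, no metadata
            clean.putdata(list(img.getdata()))      # copy pixel values only
            clean.save(dst_path)

    strip_metadata("upload.jpg", "published.jpg")   # hypothetical file names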
This case has flown under the radar, but a wrong decision on these points could end up shrinking the effective contours of the DMCA safe harbors. Online service providers have a very strong incentive to stay inside those boundaries: the staggering quantity of user-generated content uploaded every day, combined with ridiculously large statutory damages and litigation costs, means that any ambiguity is a serious risk.
Service providers need well-established legal safe harbors, because those safe harbors create the space within which new platforms can develop and thrive. That’s good for user speech, and good for online innovation. We hope the court agrees.