As the European Parliament considers a directive targeting hacking, EFF has submitted comments urging legislators not to create legal woes for researchers who expose security flaws.
In the United States, laws such as the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act have created a murky legal landscape for researchers who independently analyze technology for security flaws. Around the world, the Convention on Cybercrime has caused similar problems. Now vague and sweeping computer crime legislation is back on the European Union's agenda, threatening coders' rights: the European Commission's proposed draft Directive on Attacks Against Information Systems [pdf].
All told, the European Commission needs to make a stronger case for why this directive is needed at all. We believe it largely duplicates the Convention on Cybercrime, which is itself riddled with problems. Should the proposed directive move forward, however, we urge the Parliament to improve several aspects of it.
No criminalization of tools
The main "novelty" of the draft directive is the criminalization of the use, production, sale, or distribution of tools to commit attacks against information systems. In our submission to the European Parliament, we opposed the wholesale criminalization of these tools: while they can be used for malicious purposes, they are also crucial for research and testing, including "defensive" security work that makes systems stronger and prevents and deters attacks.
We urge the Parliament to focus on the intent behind the use of a tool rather than on its mere possession, use, production, or distribution. The latter approach threatens valuable security testing that makes technology more robust and benefits us all.
Protect coders who access computers without authorization for security testing
We asked the European Parliament to protect researchers who access a computer system without explicit permission but lack criminal intent, or mens rea. This protection is needed to safeguard security researchers' rights to free expression and innovation. Examining computers without the owner's explicit permission is necessary for a vast amount of useful research, much of which would never be done if obtaining prior permission were a legal requirement.
The language of the draft Directive resembles that of the Computer Fraud and Abuse Act (CFAA), which provides, among other things, that it is illegal to "intentionally access[] a computer without authorization or exceed[] authorized access, and thereby obtain[] . . . information from any protected computer."
The precise scope of the phrases "without authorization" and "exceeds authorized access" has been hotly disputed in the US courts, with the US government and private companies arguing for a broad interpretation that would criminalize even violations of private contractual agreements. If European courts adopted this approach, it would put the immense coercive power of criminal law into the hands of those who draft contracts: private parties, rather than lawmakers, would determine what conduct is criminal simply by prohibiting it in an agreement. Criminalizing breaches of website terms of use could turn millions of Internet users into criminals for typical everyday activities.
The US experience, especially with respect to the meaning of "authorized" computer access, should serve as a warning to European legislators that vague, ill-defined terms can have deleterious effects on free expression, innovation, and competition.
Protect coders' rights to free expression and innovation
Finally, we asked the European Parliament to protect security researchers' right to free expression. Researchers' ability to freely report security flaws is crucial and highly beneficial to the global online community. Public disclosure of security information enables informed consumer choice and encourages vendors to be truthful about flaws, repair vulnerabilities, and improve their products.
For example, in early February, two German security researchers reported a vulnerability in two encryption systems that could allow eavesdropping on hundreds of thousands of satellite phone calls. Public disclosure of this kind of research lets consumers know that their communications are not actually protected, which in turn lets them make thoughtful choices about the technology they use. It may even inspire the European Telecommunications Standards Institute to develop a stronger encryption algorithm that protects users' privacy.
In our submission, we asked the Parliament to protect the rights of these researchers and whistleblowers, who, in the course of uncovering and fixing problems, may inadvertently violate laws even when they never intend to steal information, invade people's privacy, or otherwise cause harm. Simply by reporting a vulnerability, researchers risk exposing themselves to a lawsuit or criminal investigation. Yet if they are forced to withhold information to protect themselves from legal liability, potentially serious security flaws will go unaddressed.
Ultimately, the European Commission hasn't demonstrated that this proposed directive is necessary, and we don't think it is. If the proposal moves forward, though, the European Parliament must narrow and clarify its terms. The goal should be to leave breathing room for legitimate security research and testing, allowing security researchers to flourish and do what they do best.