What We Know about the Vulnerabilities Equities Process and Government Hacking
Since the FBI’s announcement last month that it had successfully accessed data on a locked iPhone used in the San Bernardino shootings, there has been intense speculation about exactly how the Bureau got in. If you had “iOS zero day” in the office pool, you’re a winner. According to a new report in the Washington Post, the FBI paid “professional hackers” for information about a “previously unknown software flaw” in Apple’s iOS operating system, which allowed the FBI to disable security features and then brute-force the passcode on the phone. As a result of this outside help, the Justice Department dropped its attempt to compel Apple to assist in accessing the phone, despite previously arguing that Apple’s assistance was essential.
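To see why disabling those security features mattered, consider the arithmetic. iOS ordinarily erases the device after ten wrong guesses and imposes escalating delays between attempts; with those protections off, a short numeric passcode offers very little resistance. Here is a back-of-the-envelope sketch in Python. The attempt rate is purely an assumption for illustration; the FBI has not disclosed how quickly its method could guess.

    # Worst-case brute-force time for a numeric passcode, assuming the
    # auto-erase and escalating-delay protections have been disabled.
    ATTEMPTS_PER_SECOND = 12.5  # assumed rate (~80 ms per guess), not a known figure

    for digits in (4, 6):
        keyspace = 10 ** digits  # 10,000 codes for 4 digits; 1,000,000 for 6
        worst_case_hours = keyspace / ATTEMPTS_PER_SECOND / 3600
        print(f"{digits}-digit passcode: {keyspace:,} candidates, "
              f"worst case ~{worst_case_hours:.1f} hours")

Even at that modest assumed rate, a four-digit passcode falls in under fifteen minutes. The protections built around the passcode, not the size of the keyspace itself, are the real obstacle.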
For many, the FBI’s sudden change in tactics, abandoning its legal case in favor of a technical solution, has raised questions about whether there are any practical limits on the government’s use of flaws to “hack” devices and software. It has also raised questions about whether the government ever has to tell companies about these vulnerabilities, or ever does so voluntarily. This post attempts to answer some of these frequently asked questions.
Does the government often use vulnerabilities to “hack” or exploit the software and devices we all use?
Yes. The Apple case was especially high profile, but the FBI and other agencies exploit software flaws all the time. While we don’t know of any comprehensive list, it happens in a wide variety of cases. The government has admitted it uses vulnerabilities for “offensive purposes” in “cyber operations,” law enforcement operations, and counterintelligence. Cyber operations might include Stuxnet, in which the government reportedly used previously unknown vulnerabilities, or “zero days,” in Microsoft Windows to sabotage the Iranian nuclear program by destroying centrifuges. In the law enforcement context, the government routinely exploits vulnerabilities to install malware, also called “network investigative techniques” or NITs, to identify suspects and conduct remote surveillance. Agencies that have admitted to using vulnerabilities for hacking, or been shown to do so, include the FBI, DEA, NSA, and CIA.
How does the government find out about these flaws?
As in the Apple case, the government has admitted that it purchases information about flaws in commonly used software and devices, sometimes reportedly paying large sums. Some agencies, like the FBI, also have in-house units that find and actively exploit flaws.
Does the government “hoard” or “stockpile” vulnerabilities in order to hack users?
It’s unclear. White House Cybersecurity Coordinator Michael Daniel has denied that the government keeps a “Raiders of the Lost Ark style” stockpile of vulnerabilities, and the NSA has claimed that it historically disclosed 91% of the vulnerabilities it discovers. But other evidence points toward agencies like the CIA, FBI, and NSA holding on to at least some flaws for long periods. In one case involving a network investigative technique, technologists have suggested that the FBI may have withheld a previously unknown vulnerability in Firefox for more than a year.
What is the Vulnerabilities Equities Process (“VEP”) and can I read it?
The Vulnerabilities Equities Process is the policy the government uses to decide whether to disclose information about security vulnerabilities or instead withhold this information for its own purposes, including law enforcement, intelligence collection, and “offensive” exploitation. According to the White House, the VEP has a “strong bias” in favor of disclosure.
Thanks to EFF’s Freedom of Information Act lawsuit, the government has publicly released the VEP (with some redactions). You can read it here.
According to the policy, when the government learns of a new flaw, whether by discovering it on its own or buying it from third parties, it must submit the flaw to an interagency group. This group includes officials from across the government representing various “equities”: the competing interests in disclosing the flaw to strengthen security versus exploiting it for offensive purposes.
Has the VEP interagency group looked at the iOS flaw? How long does the process take?
We don’t know. The Washington Post story suggests that it has not yet been submitted to the review group, although the policy states that discovery of a flaw, including by a contractor, should start the process.
Will the FBI have to share the iOS flaw with Apple? Does the VEP ever require the government to share vulnerabilities with the companies or the public?
No. The VEP only requires that newly discovered flaws be considered for disclosure. Despite the “strong bias” in favor of disclosure, there are exceptions for law enforcement and intelligence use. Reuters reports that in the San Bernardino case, the outside sellers retained “sole legal ownership” of the iOS zero day, suggesting that the flaw will not be disclosed to Apple under the VEP.
How effective has the VEP been?

We don’t know, but the answer historically has been “not very effective.” Although the VEP was adopted in 2010, reports indicate that it was not properly “implemented” until April 2014. As further evidence of its shortcomings, a panel of experts appointed by President Obama recommended in December 2013 that vulnerabilities be disclosed in “almost all cases.” Meanwhile, the government has not released any information to back up its claims that it regularly discloses vulnerabilities to software vendors.
EFF believes that much more oversight of the government’s use of vulnerabilities is needed. As a first step, Congress could require that agencies report on the numbers of vulnerabilities the government has acquired and disclosed. It could also pass a law codifying the “strong bias” in favor of disclosure.
Meanwhile, at least in criminal cases, defendants who have been targeted with these exploits may be able to argue that they are entitled to information about the flaw if it is “material” to their defense.