Congress has never made a law saying, "Corporations should get to decide who gets to publish truthful information about defects in their products" — and the First Amendment wouldn't allow such a law — but that hasn't stopped corporations from conjuring one out of thin air, and then defending it as though it were a natural right they'd had all along.
Some background: in 1986, Ronald Reagan, spooked by the Matthew Broderick movie WarGames (true story!), worked with Congress to pass the Computer Fraud and Abuse Act (CFAA), a sweeping and exceedingly sloppily drafted cybercrime bill. The CFAA makes it a felony to "exceed[] authorized access" on someone else's computer in many instances.
Fast forward to 1998, when Bill Clinton and his Congress enacted the Digital Millennium Copyright Act (DMCA), a giant, gnarly hairball of digital copyright law that included Section 1201, which bans bypassing any "technological measure" that "effectively controls access" to copyrighted works, and also bans "traffick[ing]" in devices or services that bypass digital locks.
Notice that neither of these laws bans disclosure of defects, including security disclosures! But decades later, corporate lawyers and federal prosecutors have constructed a body of legal precedents that twist these overbroad laws into a rule that effectively gives corporations the power to decide who gets to tell the truth about flaws and bugs in their products.
Businesses and prosecutors have brought civil and criminal actions against researchers and whistleblowers who violated a company's terms of service in the process of discovering a defect. The argument goes like this: "Our terms of service ban probing our system for security defects. When you log in to our server for that purpose, you 'exceed your authorization,' and that violates the Computer Fraud and Abuse Act."
Likewise, businesses and prosecutors have used Section 1201 of the DMCA to attack researchers who exposed defects in software and hardware. Here's how that argument goes: "We designed our products with a lock that you have to get around to discover the defects in our software. Since our software is copyrighted, that lock is an 'access control for a copyrighted work' and that means that your research is prohibited, and any publication you make explaining how to replicate your findings is illegal speech, because helping other people get around our locks is 'trafficking.'"
The First Amendment would certainly not allow Congress to enact a law that banned making true, technical disclosures, even (especially!) if those disclosures revealed security defects that the public needs to know about before deciding whether to trust a product or service.
But the presence of these laws has convinced the tech industry — and corporations that have added 'smart' tech to their otherwise 'dumb' products — that it's only natural for them to be the sole arbiters of who gets to reveal facts that embarrass or inconvenience them. The worst of these actors use threats of CFAA and DMCA 1201 actions to silence researchers altogether, so the first time you discover that you've been trusting a defective product is when it is so widely exploited by criminals and grifters that it's impossible to keep the problem quiet any longer.
Even the best, most responsible corporate actors get this wrong. Tech companies like Mozilla, Dropbox and, most recently, Tesla, have crafted "coordinated disclosure" policies in which they make sincere and legally enforceable promises to take security disclosures seriously and act on them within a defined period, and they even promise not to use laws like DMCA 1201 to retaliate against security researchers who follow their guidelines.
This is a great start, but it's a late and limited solution to a much bigger problem.
The point is that almost every company is a "tech company" — from medical implant vendors to voting machine companies — and not all of them are as upstanding and public-spirited as Mozilla.
Many of these companies do have "coordinated disclosure" policies by which they hope to tempt security researchers into coming to them first when they discover problems with their products and services. But these policies don't exist out of the goodness of the companies' hearts: they're the companies' best hope of keeping security researchers from embarrassing them and leaving them scrambling by publishing a bug without warning.
If corporations can simply silence researchers who don't play ball, we should expect them to do so. There is no shortage of CEOs who are lulling themselves to sleep tonight with fantasies about getting to shut their critics up.
EFF is currently suing the US government to invalidate DMCA 1201, and the ACLU is chipping away at the CFAA. There will come a day when we succeed, because the idea of suppressing bug reports (even ones made in disrespectful or rude ways) is totally incompatible with the First Amendment.
Rather than crafting a disclosure policy that says "We'll stay away from these unjust and absurd interpretations of these badly written laws, provided you only tell the truth in ways we approve of," companies that want to lead by example could do so by putting something like this in their disclosure policies:
We believe that conveying truthful warnings about defects in systems is always legal. Of course, we have a strong preference for you to use our disclosure system [LINK] where we promise to investigate your bugs and fix them in a timely manner. But we don't believe we have the right to force you to use our system.
Accordingly, we promise to NEVER invoke any statutory right — for example, rights we are granted under trade secret law, anti-hacking law, or anti-circumvention law — against ANYONE who makes a truthful disclosure about a defect in one of our products or services, regardless of the manner of that disclosure.
We really do think that the best way to keep our customers safe and our products bug-free is to enter into a cooperative relationship with security researchers and that's why our disclosure system exists and we really hope you'll use it, but we don't think we should have the right to force you to use it.
Companies should not rely on these laws to silence security researchers who displease them with the timing and manner of their truthful disclosures — if their threats ever materialize into full-blown lawsuits, there's a reasonable chance that they'll find themselves facing down public-spirited litigators (ahem) who will use those suits as a fast-track to overturning these laws in the courts.
But while we wait for the slow wheels of justice to turn, the specter of legal retaliation haunts the best and most public-spirited security researchers (the researchers who work for cyber-criminals and state surveillance contractors don't have to worry about these laws, because they never make their findings public). That is bad for all of us, because for every Tesla, Dropbox and Mozilla, there are a thousand puny tyrants who point to these good-citizen companies' implicit insistence that disclosure should be subject to corporate approval as cover to intimidate their own critics into silence.
Those intimidated researchers? They've discovered true facts about why we shouldn't trust systems with our data, our finances, our personal communications, the security of our homes and businesses, and even our lives.
EFF has sued the US government to overturn DMCA 1201 and we just asked the US Copyright Office to reassure security researchers that DMCA 1201 does not prevent them from telling the truth.
We're discussing all this in a Reddit AMA next Tuesday, August 21, from 12-3PM Pacific (3-6PM Eastern). We hope you'll come and join us.