Writing software to protect political activists against censorship and surveillance is a tricky business. If those activists are living under the kind of authoritarian regimes where a loss of privacy may lead to the loss of life or liberty, we need to tread especially cautiously.
A great deal of post-mortem analysis is taking place in the wake of the collapse of the Haystack project. Haystack was a censorship-circumvention project that began as a real-time response to the Iranian election protests last year. The project received significant media coverage, but the code never reached the level of technical maturity and security necessary to protect the lives of activists in countries like Iran (or many other places, for that matter).
This post isn't going to get into the debate about the social processes that gave Haystack the kind of attention and deployment that it received, before it had been properly reviewed and tested. Instead, we want to emphasize something else: it remains possible to write software that makes activists living under authoritarian regimes safer. But the developers, funders, and distributors of that software need to remember that it isn't easy, and need to go about it the right way.
Here are a few essential points:
- Secure communications tools need a clearly defined model of the privacy threats they are meant to defend against, and the way the design addresses each of those threats needs to be rigorously specified.
- Careful thought needs to be put into user interface design, so that the end users of the system (who may not speak English or be sophisticated computer users) have some hope of understanding which threats the software does and doesn't defend against. This is hard to do right, but it's very important: in some cases, a dissident who is a major target of a sophisticated government probably shouldn't be using networked computers at all.
- Writing secure software is much harder than just writing software; it requires a different mindset and a whole extra set of skills and experience. Unless a project has experienced, competent security engineers, it is almost certain to contain bugs that threaten users' privacy (in fact, all complex codebases contain security bugs, but a good security team can make them rarer and do a better job of mitigating the damage they cause).
- Tools need to be thoroughly tested by the computer security community before they are distributed to activists whose lives and liberty are at stake. Fortunately, plenty of well-tested tools are already available to provide privacy and censorship circumvention, including Tor, ssh, VPNs, and Gmail over HTTPS. All of these tools have their own limitations and need to be used for the correct purposes, but they are the best choices for activists in at least some situations (see the sketch after this list for one way an application can route its traffic through Tor).
- Until you're familiar with the extensive research literature on privacy-preserving communications systems, it's probably best to get involved with (or fund) one of the many existing projects that are trying to defeat Internet censorship, before starting your own. The Tor Project is the largest and most organized of these, and is a good place for developers and funders to find work that needs to be done. There are numerous academic groups doing high-quality research, and some of them also build invaluable privacy tools. There are also some small projects that still need a lot of extra work and security auditing, but which may one day provide extremely important tools for dissidents; the "T(A)ILS" project is one good example.
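To make the point about building on existing, well-tested tools a bit more concrete, here is a minimal sketch (in Python, using the third-party requests library with SOCKS support) of how an application could route its web traffic through a locally running Tor client instead of connecting directly. This is an illustration, not a vetted tool: it assumes Tor is already installed and listening on its default SOCKS port, 9050, and it only addresses how traffic is routed, not the many other ways a machine or application can leak identifying information.

```python
# Minimal sketch: route HTTP(S) requests through a local Tor SOCKS proxy.
# Assumes Tor is running locally (default SOCKS listener on 127.0.0.1:9050)
# and that requests has SOCKS support installed: pip install "requests[socks]"
import requests

# "socks5h" (rather than "socks5") asks the proxy to resolve hostnames,
# so DNS lookups are not leaked outside the Tor connection.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# check.torproject.org reports whether the request actually arrived via Tor;
# checking the page text is a rough sanity check, not a security guarantee.
response = session.get("https://check.torproject.org/", timeout=60)
print("Reached the check page via Tor:", "Congratulations" in response.text)
```

Even when a snippet like this works correctly, it only covers network routing; it does nothing about malware on the activist's machine, browser fingerprinting, or what the remote service logs, which is exactly why a clear threat model and independent review matter so much.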
For further reading on good security practices and the tools available for activists living under authoritarian regimes, see EFF's Surveillance Self-Defense International whitepaper. For more advice on how to evaluate the quality of censorship-circumvention software, see the Tor Project's article, "Ten things to look for in a circumvention tool".