Papers tagged ‘Policy’

The Declining Half-Life of Secrets and the Future of Signals Intelligence

Today we’re going to look at a position paper from the New America think tank’s Cybersecurity Initiative. If you’re someone like me, that descriptor probably raises at least four red flags: regarding anything to do with Internet security, there’s a tremendous gulf in groupthink between people associated with the US government and people associated with commercial or pro bono development of computer stuff. Which is precisely why it’s useful to read papers from the other side of the gulf, like this one. (This particular think tank appears to be more on the left, as that term is used in US politics. I haven’t dug into their other position papers.)

The author, Peter Swire, starts by pointing out that the government’s understanding of secrecy was developed during the Cold War, when it was, frankly, much easier to keep secrets. Paper documents in an archive, which readers must physically visit and demonstrate their need-to-know to the man with a gun at the entrance, are inherently difficult to duplicate. But that entire archive probably fits on a $50 thumbdrive today. In a similar vein, regular readers will recall the standard military teletype with its data transfer rate of 75 bits per second, from Limitations of End-to-End Encryption (1978).

Also, once data has been exfiltrated, it’s much easier to broadcast it, because there are lots more news organizations who might take an interest—or you can just post it online yourself and rely on the tremendously accelerated speed of gossip. These things together are what Swire means by the declining half-life of secrets: secrets have always been expected to get out eventually, but the time scale is no longer decades. The metaphor of a reduced half-life seems spot on to me: leakage of secrets is inherently probabilistic, so exponential decay is the simplest model, and should give people the right first-order intuition.
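To make the metaphor concrete (the formalization is mine, not Swire’s): if a secret independently survives each year with probability q, then its chance of still being secret after t years decays exponentially, and the half-life h is the point where that chance reaches one half:

$$P(\text{still secret after } t \text{ years}) = q^{t} = 2^{-t/h}, \qquad h = \frac{\ln 2}{-\ln q}.$$

Swire’s argument, restated in these terms, is that cheap copying and frictionless publication have pushed q down, shrinking h from decades to years or less.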

Swire then moves on to discussing the effects of that groupthink gulf I mentioned. This bit is weaker, because it’s plain that he doesn’t understand why people might prefer the world-view of the EFF (the Electronic Frontier Foundation). But it’s accurate as far as it goes. People associated with the government start from the premise that revealing a secret, regardless of its contents, is the worst possible thing anyone can do. (I confess to not understanding how one comes to think this, myself. It probably has to do with one’s default idea of a secret being something that really could get someone killed if it were revealed, never mind that only a tiny fraction of all classified information is that dangerous.) In contrast, the world-view of the EFF begins with the premise that most information should be published, and that if an organization does something in secret from the general public, that probably means it knows, institutionally, that the general public would not approve; and, therefore, that it shouldn’t be doing it in the first place. Since most of the technology community takes this position, the government finds it increasingly difficult to persuade that community to cooperate with its agenda, and (Swire says) this will only get worse.

The paper concludes with some fairly weaksauce recommendations: plan for the possibility of disclosure; take the impact on public opinion (should the secret be revealed) into account when planning secret operations; put more effort into arguing for surveillance. Basically, business as usual but with more media savvy. This may be the best one can hope for in the short term, but I have some policy suggestions of my own:

  • Apply Kerckhoffs’ principle to all surveillance programs. The overall design of the system, its budget, the nature of the data collected, all relevant policies and procedures: everything except the collected data itself should be public knowledge, subject to normal public oversight (e.g. any Congressional hearings on the topic should be conducted in public and on the record), and debated in public prior to implementation, just like any other government program. If that would render the surveillance useless, the logic of Kerckhoffs’ principle says it was already useless. (I’ve made this point before, on my main blog.)

  • Abandon the desire for exceptional access. The technology community has spent 20+ years explaining over and over and over again why exceptional access is impractical and makes security worse for everyone. Government agencies refusing to accept that message is probably the single strongest reason why the groupthink gulf is as wide as it is.

  • More generally, whenever there is a tradeoff between offense and defense in computer security, choose defense. Design cryptographic standards that are secure for everyone, even if they happen to be enemies of the USA right now (or might be at some time in the future). Disclose all security-critical bugs to the vendors, so they get fixed, even if this means not being able to pull off another Stuxnet. Think of this as the Internet analogue of the SALT and START treaties.

  • Split the NSA in half. Merge the offensive signals intelligence mission into the CIA, or scrap it, I don’t care. Merge the defensive cryptographic and cryptanalytic mission into NIST, declassify and publish everything, and do all future work in public (Kerckhoffs’ Principle again). Make it a bedrock policy that this organization only does offensive research in support of defensive programs (e.g. to demonstrate the (un)soundness of a cipher).

I’m willing to listen to reasons not to do these things, as long as they do not boil down to “we’re scared of hypothetical enemy X.”

Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications

Today’s paper is not primary research, but an expert opinion on a matter of public policy; three of its authors have posted their own summaries [1] [2] [3], the general press has picked it up [4] [5] [6] [7] [8], and it was mentioned during Congressional hearings on the topic [9]. I will, therefore, only briefly summarize it, before moving on to some editorializing of my own. I encourage all of you to read the paper itself; it’s clearly written, for a general audience, and you can probably learn something about how to argue a position from it.

The paper is a direct response to FBI Director James Comey, who has for some time been arguing that data storage and communications systems must be designed for “exceptional access” by law enforcement agencies (the paper’s phrase); his recent Lawfare editorial can be taken as representative. British politicians have also been making similar noises (see the general-press articles above). The paper, in short, says that this would cause much worse technical problems than it solves, and that even if, by some magic, those problems could be avoided, it would still be a terrible idea for political reasons.

At slightly more length, exceptional access means law enforcement agencies (like the FBI) and espionage agencies (like the NSA) want to be able to wiretap all communications on the ’net, even if those communications are encrypted. This is a bad idea technically for the same reasons that master-key systems for doors can be more trouble than they’re worth. The locks are more complicated and easier to pick than they would be otherwise. If the master key falls into the wrong hands, you have to change all the locks. Whoever has access to the master keys can misuse them, which makes the keys, and the people who control them, a target. And it’s a bad idea politically because, if the USA gets this capability, every other sovereign nation gets it too, and a universal wiretapping capability is more useful to a totalitarian state that wants to suppress political opposition than it is to a detective who wants to arrest murderers. I went into this in more detail toward the end of my review of RFC 3514.
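To make the master-key analogy concrete, here is a minimal sketch of escrowed encryption (my illustration, not a scheme from the paper; Fernet symmetric keys stand in for the public-key wrapping a real system would use, and all the names are hypothetical). Each message gets a fresh session key, which is wrapped once for the recipient and once for the escrow authority:

```python
# Minimal sketch of why escrowed ("exceptional access") encryption
# concentrates risk. Fernet is a stand-in for a real hybrid scheme;
# escrow_key, send, etc. are illustrative names, not a real protocol.
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()   # one key, held by "the authorities"

def send(recipient_key: bytes, plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt plaintext under a fresh session key, then wrap that
    session key twice: once for the recipient, once for escrow."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    wrapped_for_recipient = Fernet(recipient_key).encrypt(session_key)
    wrapped_for_escrow = Fernet(escrow_key).encrypt(session_key)
    return ciphertext, wrapped_for_recipient, wrapped_for_escrow

# Normal delivery: the recipient unwraps the session key with their own key.
alice_key = Fernet.generate_key()
ct, for_alice, for_escrow = send(alice_key, b"meet at noon")
session = Fernet(alice_key).decrypt(for_alice)
assert Fernet(session).decrypt(ct) == b"meet at noon"

# The problem: anyone holding escrow_key (a court order, a corrupt
# insider, a thief) can read every message ever sent this way.
stolen_session = Fernet(escrow_key).decrypt(for_escrow)
assert Fernet(stolen_session).decrypt(ct) == b"meet at noon"
```

Note that rotating the escrow key after a compromise does nothing for traffic already wrapped under it; that is the change-all-the-locks problem in digital form, except the old ciphertext is still lying around to be opened.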

I am certain that James Comey knows all of this, in at least as much detail as it is explained in the paper. Moreover, he knows that the robust democratic debate he calls for already happened, in the 1990s, and wiretapping lost. [10] [11] [12] Why, then, is he trying to relitigate it? Why does he keep insisting that it must somehow be both technically and politically feasible, against all evidence to the contrary? Perhaps most important of all, why does he keep insisting that it’s desperately important for his agency to be able to break encryption, when it was only an obstacle nine times in all of 2013? [13]

On one level, I think it’s a failure to understand the scale of the problems with the idea. On the technical side, if you don’t live your life down in the gears, it’s hard to bellyfeel the extent to which everything is broken, and therefore the extent to which any sort of wiretap requirement cannot help but make the situation worse. And it doesn’t help, I’m sure, that Comey has heard (correctly) that what he wants is mathematically possible, so he thinks everyone saying “this is impossible” is trying to put one over on him, rather than just communicating “this isn’t practically possible.”

The geopolitical problems with the idea are perhaps even harder to convey, because the Director of the FBI wrestles with geopolitical problems every day, so he thinks he does know the scale there. For instance, the paper spends quite some time on a discussion of the jurisdictional conflict that would naturally come up in an investigation where the suspect is a citizen of country A, the crime was committed in B, and the computers involved are physically in C but communicate with the whole world—and it elaborates from there. But we already have treaties to cover that sort of investigation. Comey probably figures they can be bent to fit, or at worst, we’ll have to negotiate some new ones.

If so, what he’s missing is that he’s imagining too small a group of wiretappers: law enforcement and espionage agencies from countries that are on reasonably good terms with the USA. He probably thinks export control can keep the technology out of the hands of countries that aren’t on good terms with the USA (it can’t) and hasn’t even considered non-national actors: local law enforcement, corporations engaged in industrial espionage, narcotraficantes, mafiosi, bored teenagers, Anonymous, religious apocalypse-seekers, and corrupt insiders in all the above. People the USA can’t negotiate treaties with. People who would already have been thrown in jail if anyone could make charges stick. People who may not even share basic premises about what good governance, due process of law, or simple human decency mean. There are a bit more than seven billion people on the planet today, and this is the true horror of the Internet: roughly 40% of those people [14] could, if they wanted, ruin your life, right now. It’s not hard. [15] (The other 60% could too, if they could afford to buy a mobile, or at worst a satellite phone.)

But these points, too, have already been made, repeatedly. Why does none of it get through? I am only guessing, but my best guess is: the War On Some Drugs [16] and the aftermath of 9/11 [17] (paywalled, sorry; please let me know if you have a better cite) have both saddled the American homeland security complex with impossible, Sisyphean labors. In an environment where failure is politically unacceptable, yet inevitable, the loss of any tool, even one that’s only marginally helpful, must seem like an existential threat. To get past that, we would have to be prepared to back off on the “must never happen again” / “must be stopped at all costs” posturing; the good news is that doing so has an excellent chance of delivering better, cheaper law enforcement results overall. [18]