Papers tagged ‘Philosophy of security’

The Harmful Consequences of Postel’s Maxim

Postel’s Maxim of protocol design (also known as the Robustness Principle or the Internet Engineering Principle) is “Be liberal in what you accept, conservative in what you send.” It was first stated as such (by Jon Postel) in the 1979 and 1980 specifications (e.g. RFC 760) of the protocol that we now call IPv4. [1] It has been tremendously influential, quoted as an axiom in Tim Berners-Lee’s design principles for the Web [2], for instance, but it has also come in for a fair bit of criticism [3] [4]. (An expanded version of the principle, in RFC 1122, anticipates many of these criticisms and is well worth reading if you haven’t.) Now we have an Internet-Draft arguing that it is fatally flawed:

… there are negative long-term consequences to interoperability if an implementation applies Postel’s advice. Correcting the problems caused by divergent behavior in implementations can be difficult or impossible.

and arguing that instead

Protocol designs and implementations should be maximally strict.

Generating fatal errors for what would otherwise be a minor or recoverable error is preferred, especially if there is any risk that the error represents an implementation flaw. A fatal error provides excellent motivation for addressing problems.

The primary function of a specification is to proscribe behavior in the interest of interoperability.

This is only the first iteration of an Internet-Draft, and not intended to be final, so rather than express an opinion on it as such, I want to put forward some examples of real-world situations from the last couple of decades of Internet protocol design that the author may or may not have considered, and ask how he feels they should be, or should have been, handled. I also invite readers to suggest further examples where strictness, security, upward compatibility, incremental deployment, ergonomics, and so on may be in tension.

  • The original IP and TCP (v4) specifications left a number of bits reserved in their respective packet headers. In 2001 the ECN specification gave meaning to some of those bits. It was almost immediately discovered that many intermediate routers would silently discard packets with the ECN bits set; in consequence, fourteen years later ECN is still quite rarely used, even though there are far fewer such routers than there were in 2001. [5] [6]

  • Despite the inclusion of a protocol version number in SSL/TLS, and a clear specification of how servers were supposed to react to clients offering a newer protocol than they supported, servers that drop connections from too-new clients have historically been quite common, so until quite recently Web browsers would retry such connections with an older protocol version (a sketch of this fallback dance appears after this list). This enables a man-in-the-middle to force negotiation of an old, possibly insecure version, even if both sides support something better. [7] [8] [9] As with the ECN situation, this problem was originally noticed in 2001 and continues to be an issue in 2015.

  • Cryptographic protocols (such as TLS) can be subverted—and I mean complete breach of confidentiality subverted—if they reveal why a message failed to decrypt, or how long it took to decrypt or fail to decrypt a message, to an attacker who can forge messages. [10] [11] To close these holes it may be necessary to run every message through the complete decryption process even if you already know it’s going to fail; a sketch of that discipline also appears after this list.

  • In the interest of permitting future extensions, HTML5 [12] and CSS [13] take pains to specify exact error recovery behavior; the idea is that older software will predictably ignore stuff it doesn’t understand, so that authors can be sure of how their websites will look in browsers that both do and don’t implement each shiny new feature. However, this also means that an attacker can predict how the CSS parser will parse HTML (and vice versa), and in conjunction with the general unreliability of MIME types, that predictability used to be exploitable to extract information from a document you shouldn’t have been able to read. [14] (Browsers fixed this by becoming pickier about MIME types.)
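To make the fallback dance from the second bullet concrete, here is a minimal sketch using Python’s ssl module of roughly what browsers used to do; the version list, hostname handling, and error handling are my illustrative choices, not any browser’s actual code. The crucial flaw is that any handshake failure, including one induced on purpose by a man-in-the-middle, sends the client around the loop again with a weaker protocol version.

    import socket
    import ssl

    def connect_with_fallback(host, port=443):
        """Insecure retry-with-older-version dance (illustrative only)."""
        for version in (ssl.TLSVersion.TLSv1_2,
                        ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1):
            ctx = ssl.create_default_context()
            ctx.minimum_version = version
            ctx.maximum_version = version
            try:
                sock = socket.create_connection((host, port))
                return ctx.wrap_socket(sock, server_hostname=host)
            except (ssl.SSLError, OSError):
                continue  # an active attacker can force this path deliberately
        raise ConnectionError("no mutually acceptable protocol version")

(The modern fix is to stop doing this at all; RFC 7507’s TLS_FALLBACK_SCSV exists so that a server can detect a client that has fallen back when it didn’t need to.)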
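And for the third bullet, here is a minimal sketch of that discipline, assuming an encrypt-then-MAC construction with HMAC-SHA-256; the function name and the decrypt callable are placeholders of mine. The receiver compares tags in constant time, does the decryption work whether or not the tag checks out, and reports exactly one undifferentiated kind of failure.

    import hashlib
    import hmac

    TAG_LEN = 32  # HMAC-SHA-256 tag length

    def open_message(mac_key, blob, decrypt):
        """Return the plaintext, or None -- never say *why* it failed."""
        if len(blob) < TAG_LEN:
            hmac.compare_digest(b"\x00" * TAG_LEN, b"\xff" * TAG_LEN)  # dummy work
            return None
        body, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
        expected = hmac.new(mac_key, body, hashlib.sha256).digest()
        ok = hmac.compare_digest(expected, tag)  # constant-time comparison
        plaintext = decrypt(body)                # decrypt even when the tag is bad
        return plaintext if ok else None

(Strictly, with encrypt-then-MAC you could safely reject a bad tag without decrypting; it is MAC-then-encrypt designs, such as classic TLS, where skipping work on an early failure is exactly the timing leak the bullet describes.)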

Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications

Today’s paper is not primary research, but an expert opinion on a matter of public policy; three of its authors have posted their own summaries [1] [2] [3], the general press has picked it up [4] [5] [6] [7] [8], and it was mentioned during Congressional hearings on the topic [9]. I will, therefore, only briefly summarize it, before moving on to some editorializing of my own. I encourage all of you to read the paper itself; it’s clearly written, for a general audience, and you can probably learn something about how to argue a position from it.

The paper is a direct response to FBI Director James Comey, who has for some time been arguing that data storage and communications systems must be designed for exceptional access by law enforcement agencies (quote from paper); his recent Lawfare editorial can be taken as representative. British politicians have also been making similar noises (see the above general-press articles). The paper, in short, says that this would cause much worse technical problems than it solves, and that even if, by some magic, those problems could be avoided, it would still be a terrible idea for political reasons.

At slightly more length, exceptional access means that law enforcement agencies (like the FBI) and espionage agencies (like the NSA) want to be able to wiretap all communications on the ’net, even if those communications are encrypted. This is a bad idea technically, for the same reasons that master-key systems for doors can be more trouble than they’re worth. The locks are more complicated, and easier to pick, than they would be otherwise. If the master key falls into the wrong hands you have to change all the locks. Whoever has access to the master keys can misuse them—which makes the keys, and the people who control them, a target. And it’s a bad idea politically because, if the USA gets this capability, every other sovereign nation gets it too, and a universal wiretapping capability is more useful to a totalitarian state that wants to suppress the political opposition than it is to a detective who wants to arrest murderers. I went into this in more detail toward the end of my review of RFC 3514.

I am certain that James Comey knows all of this, in at least as much detail as it is explained in the paper. Moreover, he knows that the robust democratic debate he calls for already happened, in the 1990s, and wiretapping lost. [10] [11] [12] Why, then, is he trying to relitigate it? Why does he keep insisting that it must somehow be both technically and politically feasible, against all evidence to the contrary? Perhaps most important of all, why does he keep insisting that it’s desperately important for his agency to be able to break encryption, when it was only an obstacle nine times in all of 2013? [13]

On one level, I think it’s a failure to understand the scale of the problems with the idea. On the technical side, if you don’t live your life down in the gears, it’s hard to bellyfeel the extent to which everything is broken, and therefore that any sort of wiretap requirement cannot help but make the situation worse. And it doesn’t help, I’m sure, that Comey has heard (correctly) that what he wants is mathematically possible, so he thinks everyone saying “this is impossible” is trying to put one over on him, rather than just communicating that it isn’t practically possible.

The geopolitical problems with the idea are perhaps even harder to convey, because the Director of the FBI wrestles with geopolitical problems every day, so he thinks he does know the scale there. For instance, the paper spends quite some time on a discussion of the jurisdictional conflict that would naturally come up in an investigation where the suspect is a citizen of country A, the crime was committed in B, and the computers involved are physically in C but communicate with the whole world—and it elaborates from there. But we already have treaties to cover that sort of investigation. Comey probably figures they can be bent to fit, or at worst, we’ll have to negotiate some new ones.

If so, what he’s missing is that he’s imagining too small a group of wiretappers: law enforcement and espionage agencies from countries that are on reasonably good terms with the USA. He probably thinks export control can keep the technology out of the hands of countries that aren’t on good terms with the USA (it can’t) and hasn’t even considered non-national actors: local law enforcement, corporations engaged in industrial espionage, narcotraficantes, mafiosi, bored teenagers, Anonymous, religious apocalypse-seekers, and corrupt insiders in all the above. People the USA can’t negotiate treaties with. People who would already have been thrown in jail if anyone could make charges stick. People who may not even share premises like what good governance or due process of law or basic human decency mean. There are a bit more than seven billion people on the planet today, and this is the true horror of the Internet: roughly 40% of those people [14] could, if they wanted, ruin your life, right now. It’s not hard. [15] (The other 60% could too, if they could afford to buy a mobile, or at worst a satellite phone.)

But these points, too, have already been made, repeatedly. Why does none of it get through? I am only guessing, but my best guess is this: the War On Some Drugs [16] and the aftermath of 9/11 [17] (paywalled, sorry; please let me know if you have a better cite) have both saddled the American homeland security complex with impossible, Sisyphean labors. In an environment where failure is politically unacceptable, yet inevitable, the loss of any tool—even if it’s only marginally helpful—must seem like an existential threat. To get past that, we would have to be prepared to back off on the “must never happen again” / “must be stopped at all costs” posturing; the good news is that doing so has an excellent chance of delivering better, cheaper law enforcement results overall. [18]

The Security Flag in the IPv4 Header

Today I propose to explain a classic joke—or perhaps I should call it a parable instead.

RFC 3514 defines a bit in the IPv4 packet header as the “evil bit”: packets with malicious intent MUST set that bit; benign packets MUST NOT set it. It follows that a firewall router can protect its network from all attacks by dropping packets with the evil bit set (and the RFC requires it to do so). Yay! Network security is solved, we can all go home!
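Taken at face value, the mandated filter is almost embarrassingly simple to write; here is a minimal sketch of one in Python (my code, not anything from the RFC; RFC 3514 assigns the evil bit to the high-order bit of the IPv4 flags/fragment-offset word, the one bit RFC 791 had left reserved).

    import struct

    EVIL = 0x8000  # high-order bit of the flags/fragment-offset word (RFC 3514)

    def should_forward(ipv4_packet: bytes) -> bool:
        """RFC 3514-compliant firewall rule: drop any packet whose evil bit is set."""
        flags_and_offset, = struct.unpack_from("!H", ipv4_packet, 6)
        return not (flags_and_offset & EVIL)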

From the fact that we are all still chained to our desks twelve years later, you may deduce that network security is not solved. The flaw is obvious, once pointed out: nothing can force an attacker to set the evil bit. Nonetheless, there are people who believe that something resembling this ought to work; Matthew Skala called that belief “the phenomenon of Colour” back in 2004 [1] [2]. Steve Bellovin, the RFC’s author, did not intend it to be implemented; he knew perfectly well it would not help at all. But by writing it out so baldly, he gave us something to point at when people propose the same thing in less obvious form, which happens all the time. I’m going to give three examples in increasing order of intractability: the first does actually have a technical solution (sort of), the second is a long-standing open problem which could have a technical solution (but no one’s thought of it yet), and the third is known to have no solution at all.

The evil bit is the difference between a cryptographic hash and a digital signature. A cryptographic hash is a large but fixed-length number (usually written as a string of hexadecimal) algorithmically derived from a document. Many documents produce the same hash, but it’s computationally infeasible to find two documents with the same hash—that is, nobody knows a better way to do it than guess-and-check, which would take so long that the Sun would be burnt out before you were done. Lots of people think that this means: when I give you a document and its hash, you can recompute the hash and if you get the same thing, you must have the document I meant to give you. It is not so. Mallory, the evil mastermind who can tamper with everything I send you, can change the document, and then change the hash to be the hash of Mallory’s document, and you’ll never know the difference. Mallory can’t do this with a digital signature, which is also a large but fixed-length number algorithmically derived from a document, because—unlike a hash—the algorithm depends on information that only I have (namely, my signing key). That you don’t also need that information to check the signature is the miracle of public key cryptography.
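Here is a minimal sketch of the difference in Python; the document is a stand-in, the hash is SHA-256, and the signature half assumes the third-party cryptography package with Ed25519 keys (any signature scheme would illustrate the same point). Mallory can recompute the hash of a tampered document so that it still “checks out,” but cannot produce a signature that verifies under my public key.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    document = b"Pay Alice ten dollars."
    tampered = b"Pay Mallory ten thousand dollars."

    # A bare hash travels alongside the document, so Mallory simply recomputes it:
    print(hashlib.sha256(document).hexdigest())
    print(hashlib.sha256(tampered).hexdigest())  # looks just as plausible to you

    # A signature depends on my signing key, which Mallory does not have:
    signing_key = Ed25519PrivateKey.generate()
    signature = signing_key.sign(document)
    verify_key = signing_key.public_key()

    verify_key.verify(signature, document)       # succeeds (returns None)
    try:
        verify_key.verify(signature, tampered)   # Mallory's substitution is caught
    except InvalidSignature:
        print("tampering detected")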

Public key cryptography isn’t perfect. The evil bit is also the difference between my public key, truthfully labeled with my name and address, and Mallory’s public key, which is also labeled with my name and address, because nothing stops Mallory from doing that if Mallory wants. How do you tell the difference? Well, you can look at https://www.owlfolio.org/contact/ where I have posted a string of hexadecimal called the fingerprint of my public key, and painstakingly compare that to the fingerprint of the key you just pulled off the PGP keyservers, and—wait a minute, how do you know that’s my website you’re looking at? How do you know Mallory hasn’t modified the fingerprint, and carefully left everything else alone?

Well, it’s an HTTPS website, so every page has a digital signature that the browser verifies for you, and you can click on the lock icon and observe that GeoTrust, Inc. assures you that you are talking to a web server operated by the person who controls the domain name owlfolio.org, and then you can consult whois and learn that that person is known only as Domain Administrator at a P.O. box in Florida, because spam. That didn’t help much, did it? Besides, why should you believe GeoTrust, Inc. about this, and how do you know you have their public key? If you know anything about the certificate authority system you will know that it is, in fact, turtles all the way down, and each one of those turtles could be Mallory in disguise. There are other ways you could attempt to decide whose public key you have, but those are also turtles all the way down.
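If you want to watch the first turtle yourself, here is a small sketch using Python’s ssl module (the hostname is just the example from above, and the issuer you see today will not necessarily be GeoTrust): the “verification” consists of checking that the certificate’s issuer chains up to something already sitting in your local trust store, which is precisely the next turtle down.

    import socket
    import ssl

    host = "www.owlfolio.org"
    ctx = ssl.create_default_context()  # trusts whatever CAs your platform trusts
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as conn:
        cert = conn.getpeercert()
        print("subject:", cert["subject"])  # whom the certificate vouches for
        print("issuer: ", cert["issuer"])   # who vouches for the voucher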

Enough about Mallory, though; let’s talk about Reyes. Special Agent Reyes of the F.B.I., that is. Reyes’ job is tracking down murderers. Sometimes Reyes could catch the murderers faster if he could listen in on their telephone conversations; sometimes this might, in fact, save someone’s life. The F.B.I. wants Reyes to catch murderers as fast as possible, so they get Congress to pass a law that requires telephone companies to facilitate wiretapping. Reyes knows most of what people talk about on the phone is useless to him, even when those people are murderers, so he only listens in when he’s sure he will hear something that will help one of his cases.

The F.B.I. also employs Special Agent Spender. Spender does not track down murderers, he monitors extremist political groups. He uses a broad definition of extremism: if someone is even slightly famous and has said something negative about the government in the last 20 years, Spender has a file on them, and all their friends too. Spender’s job is also made considerably easier if he can listen in on these people’s telephone conversations. Unlike Reyes, Spender records all their telephone conversations, all the time, because Spender is looking for dirt. Spender wants to be able to publicly humiliate each and every one of those people, at a moment’s notice, should it become necessary. Who decides if it’s necessary? Spender.

The difference between Reyes and Spender is the evil bit.