Papers published in 2003

Using Memory Errors to Attack a Virtual Machine

All modern operating systems include some form of process isolation—a mechanism for ensuring that two programs can run simultaneously on the same machine without interfering with each other. Standard techniques include virtual memory and static verification. Whatever the technique, though, it relies on the axiom that the computer faithfully executes its specified instruction set (as the authors of today’s paper put it). In other words, a hardware fault can potentially ruin everything. But how practical is it to exploit a hardware fault? Such faults are unpredictable, and the most common syndrome is for a single bit in RAM to flip. How much damage could one bit-flip do, when you have no control over where or when it will happen?

Today’s paper demonstrates that, if you are able to run at least one program yourself on a target computer, a single bit-flip can give you total control. The exploit has two pieces. First, the authors figured out a way to escalate a single bit-flip to total control of a Java virtual machine: once the bit flips, they have two pointers to the same datum but with different types (one an integer, one a pointer), which lets them read and write arbitrary memory locations, which in turn lets them overwrite the VM’s security policy with one that allows their program to do whatever it wants. That’s the hard part. The easy part is that their attack program fills up the computer’s memory with copies of the data structure that will give them total control if a bit flips inside it, which greatly increases the odds of a random bit flip being useful. (This trick is known as heap spraying, and it turns out to be useful for all sorts of exploits, not just the ones where you mess with the hardware.)
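
To get a feel for why the spraying step matters, here is a back-of-the-envelope model of my own (not the paper’s): pretend memory is a big array of words, assume some fraction of it is filled with the attacker’s objects, and assume most of each object consists of the pointer words a flip needs to land in. A few lines of Python make the odds concrete.

```python
# Toy model (mine, not the paper's): memory is an array of 64-bit words, and a
# random single-bit flip is "useful" only if it lands in a pointer word
# belonging to one of the attacker's sprayed objects.

TOTAL_WORDS = 1_000_000          # pretend physical memory size, in words
OBJECT_WORDS = 64                # assumed size of each sprayed object, in words
POINTER_WORDS_PER_OBJECT = 60    # assumed: nearly every field is a pointer

def useful_flip_probability(sprayed_fraction: float) -> float:
    """Chance that a uniformly random bit flip hits a sprayed pointer word."""
    sprayed_words = int(TOTAL_WORDS * sprayed_fraction)
    pointer_words = sprayed_words * POINTER_WORDS_PER_OBJECT // OBJECT_WORDS
    return pointer_words / TOTAL_WORDS

for fraction in (0.01, 0.10, 0.60):
    print(f"spray {fraction:4.0%} of memory -> "
          f"useful flip with probability {useful_flip_probability(fraction):.2f}")
```

Fill most of memory with the right structure and most random flips become useful; leave it nearly empty and you could wait forever.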

This is all well and good, but you’re still waiting for a cosmic ray to hit the computer and flip a bit for you, aren’t you? Well, maybe there are other ways to induce memory errors. Cosmic rays are ionizing radiation, which you can also get from radioactivity. But readily available radioactive material (e.g. in smoke detectors) isn’t powerful enough, and sufficiently powerful radioactive material is hard to get (the paper jokes that the adversary could purchase a small oil-drilling company in order to get their hands on a neutron source). But there’s also heat, and that turns out to work quite well, at least on the DRAM that was commonly available in 2003. Just heat the chips to 80–100°C and they start flipping bits. The trick is to induce enough bit-flips to exploit, but not so many that the entire computer crashes; the authors calculate an ideal error rate at which the exploit program has a 71% chance of succeeding before the computer crashes.
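
The paper’s own calculation depends on the exact layout of the sprayed objects; here is a much cruder model of my own, with made-up numbers chosen only to land near that 71% figure, just to show the shape of the trade-off. If each induced flip is independently useful with probability p, fatal with probability q, and harmless otherwise, the exploit wins exactly when the first decisive flip is a useful one.

```python
# Crude model (mine, not the paper's): each induced bit flip is independently
# "useful" with probability p, "fatal" (crashes the machine) with probability q,
# and harmless otherwise.  The exploit succeeds if a useful flip happens before
# any fatal one, i.e. the first decisive flip is useful: P = p / (p + q).

def success_probability(p_useful: float, p_fatal: float) -> float:
    return p_useful / (p_useful + p_fatal)

# Illustrative numbers only: if 70% of flips land somewhere exploitable and 28%
# land somewhere that crashes the machine, the attacker wins ~71% of the time.
print(f"{success_probability(0.70, 0.28):.2f}")   # 0.71
```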

The attack in this paper probably isn’t worth worrying about in real life. But attacks only ever get nastier. For instance, about a decade later some people figured out how to control where the memory errors happen, and how to induce them without pointing a heat gun at the hardware: that was An Experimental Study of DRAM Disturbance Errors which I reviewed back in April. That attack is probably practical, if the hardware isn’t taking any steps to prevent it.

The Security Flag in the IPv4 Header

Today I propose to explain a classic joke—or perhaps I should call it a parable instead.

RFC 3514 defines a bit in the IPv4 packet header as the evil bit: packets with malicious intent MUST set that bit, benign packets MUST NOT set it. It follows that a firewall router can protect its network from all attacks by dropping packets with the evil bit set (and the RFC requires them to do so). Yay! Network security is solved, we can all go home!
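
For the record, RFC 3514 puts the evil bit in the one previously unused bit of the IPv4 header, the reserved high-order bit of the 16-bit flags/fragment-offset field (bytes 6–7). So the firewall’s entire security policy really is a one-line check; a minimal sketch in Python, using only the standard library:

```python
import struct

EVIL_BIT = 0x8000  # reserved high-order bit of the flags/fragment-offset field

def is_evil(ipv4_header: bytes) -> bool:
    """True if the packet declares malicious intent, per RFC 3514."""
    flags_and_offset, = struct.unpack_from("!H", ipv4_header, 6)
    return bool(flags_and_offset & EVIL_BIT)

# A 20-byte example header with the evil bit set (the checksum is a dummy value).
header = bytes.fromhex("45000054abcd" "8000" "4001f00d" "c0a80001" "c0a80002")
print(is_evil(header))   # True -- a compliant firewall MUST drop this packet
```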

From the fact that we are all still chained to our desks twelve years later, you may deduce that network security is not solved. The flaw is obvious, once pointed out: nothing can force an attacker to set the evil bit. Nonetheless, there are people who believe that something resembling this ought to work; Matthew Skala called that belief the phenomenon of Colour back in 2004 [1] [2]. Steve Bellovin, the RFC’s author, did not intend it to be implemented; he knew perfectly well it would not help at all. But by writing the idea out so baldly, he gave us something to point at when people propose the same thing in less obvious form, which happens all the time. I’m going to give three examples in increasing order of intractability: the first actually has a technical solution (sort of), the second is a long-standing open problem which could have a technical solution (but no one has thought of one yet), and the third is known to have no solution at all.

The evil bit is the difference between a cryptographic hash and a digital signature. A cryptographic hash is a large but fixed-length number (usually written as a string of hexadecimal digits) algorithmically derived from a document. Many documents map to any given hash value, but it’s computationally infeasible to find two documents with the same hash—that is, nobody knows a better way to do it than guess-and-check, which would take so long that the Sun would burn out before you were done. Lots of people think this means: when I give you a document and its hash, you can recompute the hash, and if you get the same thing, you must have the document I meant to give you. It is not so. Mallory, the evil mastermind who can tamper with everything I send you, can change the document and then change the hash to be the hash of Mallory’s document, and you’ll never know the difference. Mallory can’t do this with a digital signature, which is also a large but fixed-length number algorithmically derived from a document, because—unlike a hash—the algorithm depends on information that only I have (namely, my signing key). That you don’t also need that information to check the signature is the miracle of public key cryptography.
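
Here is the difference in miniature, sketched in Python with hashlib for the hash and the third-party cryptography package for an Ed25519 signature (both the library and the scheme are my choices for illustration; any real signature algorithm would do):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"Pay Bob ten dollars."
forgery  = b"Pay Mallory ten thousand dollars."

# Hash alone: Mallory intercepts (document, hash), substitutes the forgery,
# and recomputes the hash.  The recipient's check passes anyway.
mallory_hash = hashlib.sha256(forgery).hexdigest()
assert hashlib.sha256(forgery).hexdigest() == mallory_hash   # looks fine

# Signature: signing requires my private key, which Mallory does not have.
my_key = Ed25519PrivateKey.generate()
signature = my_key.sign(document)
public_key = my_key.public_key()

public_key.verify(signature, document)      # succeeds: the document I signed
try:
    public_key.verify(signature, forgery)   # Mallory's substitution
except InvalidSignature:
    print("forgery detected")
```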

Public key cryptography isn’t perfect. The evil bit is also the difference between my public key, truthfully labeled with my name and address, and Mallory’s public key, which is also labeled with my name and address, because nothing stops Mallory from doing that if Mallory wants. How do you tell the difference? Well, you can look at https://www.owlfolio.org/contact/ where I have posted a string of hexadecimal called the fingerprint of my public key, and painstakingly compare that to the fingerprint of the key you just pulled off the PGP keyservers, and—wait a minute, how do you know that’s my website you’re looking at? How do you know Mallory hasn’t modified the fingerprint, and carefully left everything else alone?
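
(An aside on mechanics before answering that: the comparison itself is the trivial part, which is exactly why it doesn’t help. A sketch, with the caveat that a real OpenPGP fingerprint is computed over a specific encoding of the key packet, not over raw key bytes as I’m pretending here:)

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Simplified stand-in: real PGP fingerprints hash a particular encoding of
    # the key packet, but the idea is the same -- a short digest of the key.
    return hashlib.sha256(public_key_bytes).hexdigest()

def matches(published_fingerprint: str, key_from_keyserver: bytes) -> bool:
    # The string comparison is the easy part; trusting that
    # published_fingerprint really came from me is the hard part.
    return published_fingerprint == fingerprint(key_from_keyserver)
```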

Well, it’s an HTTPS website, so every page has a digital signature that the browser verifies for you, and you can click on the lock icon and observe that GeoTrust, Inc. assures you that you are talking to a web server operated by the person who controls the domain name owlfolio.org, and then you can consult whois and learn that that person is known only as Domain Administrator at a P.O. box in Florida, because spam. That didn’t help much, did it? Besides, why should you believe GeoTrust, Inc. about this, and how do you know you have their public key? If you know anything about the certificate authority system you will know that it is, in fact, turtles all the way down, and each one of those turtles could be Mallory in disguise. There are other ways you could attempt to decide whose public key you have, but those are also turtles all the way down.

Enough about Mallory, though; let’s talk about Reyes. Special Agent Reyes of the F.B.I., that is. Reyes’ job is tracking down murderers. Sometimes Reyes could catch the murderers faster if he could listen in on their telephone conversations; sometimes this might, in fact, save someone’s life. The F.B.I. wants Reyes to catch murderers as fast as possible, so they get Congress to pass a law that requires telephone companies to facilitate wiretapping. Reyes knows most of what people talk about on the phone is useless to him, even when those people are murderers, so he only listens in when he’s sure he will hear something that will help one of his cases.

The F.B.I. also employs Special Agent Spender. Spender does not track down murderers, he monitors extremist political groups. He uses a broad definition of extremism: if someone is even slightly famous and has said something negative about the government in the last 20 years, Spender has a file on them, and all their friends too. Spender’s job is also made considerably easier if he can listen in on these people’s telephone conversations. Unlike Reyes, Spender records all their telephone conversations, all the time, because Spender is looking for dirt. Spender wants to be able to publicly humiliate each and every one of those people, at a moment’s notice, should it become necessary. Who decides if it’s necessary? Spender.

The difference between Reyes and Spender is the evil bit.