Papers by Ross Anderson

Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications

Today’s paper is not primary research, but an expert opinion on a matter of public policy; three of its authors have posted their own summaries [1] [2] [3], the general press has picked it up [4] [5] [6] [7] [8], and it was mentioned during Congressional hearings on the topic [9]. I will therefore only briefly summarize it before moving on to some editorializing of my own. I encourage all of you to read the paper itself; it’s clearly written for a general audience, and you can probably learn something about how to argue a position from it.

The paper is a direct response to FBI Director James Comey, who has for some time been arguing that data storage and communications systems must be designed for “exceptional access” by law enforcement agencies (the term is the paper’s); his recent Lawfare editorial can be taken as representative. British politicians have also been making similar noises (see the above general-press articles). The paper, in short, says that this would cause much worse technical problems than it solves, and that even if, by some magic, those problems could be avoided, it would still be a terrible idea for political reasons.

At slightly more length, exceptional access means: law enforcement agencies (like the FBI) and espionage agencies (like the NSA) want to be able to wiretap all communications on the ’net, even if those communications are encrypted. This is a bad idea technically, for the same reasons that master-key systems for doors can be more trouble than they’re worth. The locks are more complicated, and therefore easier to pick, than they would otherwise be. If the master key falls into the wrong hands you have to change all the locks. Whoever has access to the master keys can misuse them—which makes the keys, and the people who control them, a target. And it’s a bad idea politically because, if the USA gets this capability, every other sovereign nation gets it too, and a universal wiretapping capability is more useful to a totalitarian state that wants to suppress political opposition than it is to a detective who wants to arrest murderers. I went into this in more detail toward the end of my review of RFC 3514.
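To make the master-key analogy concrete, here’s a minimal sketch of what escrowed encryption looks like in code. This is my own illustration, not a scheme from the paper, and all the names are hypothetical: each per-message key gets wrapped once for the recipient and once under a single escrow key, so whoever holds (or steals) that one key can read everything, ever.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// Toy model of "exceptional access": every message key is wrapped not
// only for the recipient but also under one escrow ("master") key.
public class EscrowSketch {
    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);

        SecretKey recipientKey = gen.generateKey(); // Bob's long-term key
        SecretKey escrowKey    = gen.generateKey(); // the "master key"
        SecretKey messageKey   = gen.generateKey(); // fresh per message

        // Wrap the message key twice: once for Bob, once for the escrow agent.
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, recipientKey);
        byte[] forBob = wrap.wrap(messageKey);
        wrap.init(Cipher.WRAP_MODE, escrowKey);
        byte[] forEscrow = wrap.wrap(messageKey);

        // Whoever holds escrowKey recovers *every* message key wrapped this
        // way -- and if escrowKey ever leaks, all of them are burned at once
        // ("change all the locks").
        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, escrowKey);
        SecretKey recovered =
            (SecretKey) unwrap.unwrap(forEscrow, "AES", Cipher.SECRET_KEY);
        System.out.println(
            Arrays.equals(recovered.getEncoded(), messageKey.getEncoded())); // true
    }
}
```

Note what the extra wrapping buys an attacker: the second ciphertext is a permanent, offline target, and the escrow key’s value grows with every message it covers.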

I am certain that James Comey knows all of this, in at least as much detail as it is explained in the paper. Moreover, he knows that the robust democratic debate he calls for already happened, in the 1990s, and wiretapping lost. [10] [11] [12] Why, then, is he trying to relitigate it? Why does he keep insisting that it must somehow be both technically and politically feasible, against all evidence to the contrary? Perhaps most important of all, why does he keep insisting that it’s desperately important for his agency to be able to break encryption, when it was only an obstacle nine times in all of 2013? [13]

On one level, I think it’s a failure to understand the scale of the problems with the idea. On the technical side, if you don’t live your life down in the gears, it’s hard to bellyfeel the extent to which everything is broken, and therefore why any sort of wiretap requirement cannot help but make the situation worse. And it doesn’t help, I’m sure, that Comey has heard (correctly) that what he wants is mathematically possible, so he thinks everyone saying it’s impossible is trying to put one over on him, rather than trying to tell him that it isn’t practically possible.

The geopolitical problems with the idea are perhaps even harder to convey, because the Director of the FBI wrestles with geopolitical problems every day, so he thinks he does know the scale there. For instance, the paper spends quite some time on a discussion of the jurisdictional conflict that would naturally come up in an investigation where the suspect is a citizen of country A, the crime was committed in B, and the computers involved are physically in C but communicate with the whole world—and it elaborates from there. But we already have treaties to cover that sort of investigation. Comey probably figures they can be bent to fit, or at worst, we’ll have to negotiate some new ones.

If so, what he’s missing is that he’s imagining too small a group of wiretappers: law enforcement and espionage agencies from countries that are on reasonably good terms with the USA. He probably thinks export control can keep the technology out of the hands of countries that aren’t on good terms with the USA (it can’t) and hasn’t even considered non-national actors: local law enforcement, corporations engaged in industrial espionage, narcotraficantes, mafiosi, bored teenagers, Anonymous, religious apocalypse-seekers, and corrupt insiders in all the above. People the USA can’t negotiate treaties with. People who would already have been thrown in jail if anyone could make charges stick. People who may not even share our premises about what good governance, due process of law, or basic human decency mean. There are a bit more than seven billion people on the planet today, and this is the true horror of the Internet: roughly 40% of those people [14] could, if they wanted, ruin your life, right now. It’s not hard. [15] (The other 60% could too, if they could afford to buy a mobile, or at worst a satellite phone.)

But these points, too, have already been made, repeatedly. Why does none of it get through? I am only guessing, but my best guess is: the War On Some Drugs [16] and the aftermath of 9/11 [17] (paywalled, sorry; please let me know if you have a better cite) have both saddled the American homeland security complex with impossible, Sisyphean labors. In an environment where failure is politically unacceptable, yet inevitable, the loss of any tool—even one that’s only marginally helpful—must seem like an existential threat. To get past that, we would have to be prepared to back off on the “must never happen again / must be stopped at all costs” posturing; the good news is that this has an excellent chance of delivering better, cheaper law enforcement results overall. [18]

Security Analysis of Consumer-Grade Anti-Theft Solutions Provided by Android Mobile Anti-Virus Apps

Today we have another analysis of Android device security against scenarios where the owner loses control of the phone, by the same researchers who wrote Security Analysis of Android Factory Resets. Here, instead of looking at what happens when the owner deliberately erases a phone in order to sell it, they study what happens when the phone is stolen and the owner tries to wipe or disable it remotely. A remote disable mechanism is commonly included in anti-virus programs for Android, and this study looks at ten such mechanisms.

As with factory reset, the core finding boils down to none of them work. The root causes are slightly different, though. The core of Android, naturally, is at pains to prevent normal applications from doing anything as destructive as initiating a memory wipe, rendering the phone unresponsive to input, or changing passwords. To be properly effective, an anti-theft program needs administrative privileges, and those privileges can be revoked at any time, so there’s an inherent race between the owner activating the anti-theft program and the thief disabling it. To make matters worse, UX bugs make it easy for these programs to appear to be installed correctly while lacking the privileges they need; implementation bugs (possibly caused by unclear Android documentation) may leave loopholes even when the program was installed correctly; and several device vendors added backdoor capabilities (probably for in-store troubleshooting) that allow a thief to bypass the entire thing. For those familiar with Android development: on the affected phones, if the device is turned off and then plugged into a computer, it boots into recovery mode and activates ADB.
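For concreteness, here is roughly what that privilege dance looks like—a minimal sketch assuming an ordinary Android project; the class and method names are mine, but the framework APIs (DevicePolicyManager and friends) are real:

```java
import android.app.Activity;
import android.app.admin.DeviceAdminReceiver;
import android.app.admin.DevicePolicyManager;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;

// Sketch of how an anti-theft app acquires the powers it needs.
public class AntiTheftSetup extends Activity {

    // The app must ship a DeviceAdminReceiver subclass, declared in the
    // manifest with the BIND_DEVICE_ADMIN permission, before it may even
    // ask for admin rights.
    public static class AdminReceiver extends DeviceAdminReceiver {}

    static final int REQUEST_ADMIN = 1;

    void requestAdmin() {
        ComponentName admin = new ComponentName(this, AdminReceiver.class);
        DevicePolicyManager dpm =
            (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);

        if (!dpm.isAdminActive(admin)) {
            // The user must approve this -- and can revoke it at any time
            // from Settings, which is the race the paper describes: a thief
            // who reaches Settings first can strip these powers away.
            Intent intent = new Intent(DevicePolicyManager.ACTION_ADD_DEVICE_ADMIN);
            intent.putExtra(DevicePolicyManager.EXTRA_DEVICE_ADMIN, admin);
            startActivityForResult(intent, REQUEST_ADMIN);
        } else {
            dpm.lockNow();      // only legal while admin rights are active;
            // dpm.wipeData(0); // ditto -- this is the remote-wipe primitive
        }
    }
}
```

Everything hinges on isAdminActive() staying true; nothing in the platform stops the person physically holding the phone from flipping it off before the remote command arrives.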

There is a curious omission in this paper: since 2013, Google itself has provided a remote lock/wipe feature as part of the Google Apps bundle that’s installed on most Android devices [1]. Since the Apps bundle is, nowadays, Google’s way of getting around vendors who can’t be bothered to patch the base operating system in a timely fashion, it has all the privileges it needs to do this job correctly, and the feature should be available regardless of Android base version. The UX is reasonable, and one would presume that the developers are at least somewhat less likely to make mistakes in the implementation. This paper, however, doesn’t mention that at all, despite (correctly) pointing out that this is a difficult thing for a third-party application to get right, and that the device vendors should step up.

Practical advice for phone owners continues to be: encrypt the phone on first boot, before giving it any private information, and use a nontrivial unlock code. Practical advice for antivirus vendors, I think, is that you’re not testing your software adversarially enough. The implementation bugs, in particular, suggest that the vendors’ test strategy confirmed that remote lock/wipe works when set up correctly, but did not put enough effort into thinking of ways to defeat the lock.

Security Analysis of Android Factory Resets

Unlike the last two papers, today’s isn’t theoretical at all. It’s a straightforward experimental study of whether or not any data can be recovered from an Android phone that’s undergone a factory reset. Factory reset is supposed to render it safe to sell a phone on the secondhand market, which means any personal data belonging to the previous owner should be erased at least thoroughly enough that the new owner cannot get at it without disassembling the phone. (This is NIST logical media sanitization, for those who are familiar with those rules—digital sanitization, where the new owner cannot extract any data short of taking an electron microscope to the memory chip, would of course be better, but in the situations where the difference matters, merely having a cell phone may mean you have already failed.)

The paper’s core finding is also straightforward: due to a combination of UI design errors, major oversights in the implementation of the factory-reset feature, driver bugs, and the market’s insistence that flash memory chips have to pretend to be traditional spinning rust hard disks even though they don’t work anything like that, none of the phones in the study erased everything that needed to get erased. The details are confusingly presented, but they’re not that important anyway. It ends with a concrete set of recommendations for future improvements, which is nice to see.
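The flash point deserves a concrete illustration. Below is a toy model—entirely mine, not the paper’s—of a flash translation layer, showing why overwriting a logical block and then doing a quick format can leave the original data sitting untouched in physical memory:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of why "overwrite, then format" does not sanitize flash:
// the flash translation layer (FTL) redirects each write to a fresh
// physical page instead of erasing the old one in place.
public class FtlSketch {
    static final String[] physicalPages = new String[8];      // raw flash
    static final Map<Integer, Integer> ftl = new HashMap<>(); // logical -> physical
    static int nextFreePage = 0;

    static void write(int logicalBlock, String data) {
        physicalPages[nextFreePage] = data;  // always write a fresh page
        ftl.put(logicalBlock, nextFreePage); // remap; old page is NOT erased
        nextFreePage++;
    }

    public static void main(String[] args) {
        write(0, "owner's email password");
        write(0, "000000000000000000000"); // "secure" overwrite of block 0
        ftl.clear();                       // quick format: drop the mapping only

        // The logical view is now empty, but the first physical page still
        // holds the secret until garbage collection actually erases it.
        for (String page : physicalPages)
            if (page != null) System.out.println(page);
        // prints both the password and the zeroes
    }
}
```

A real factory reset has to reach below this mapping layer (for instance, by issuing a secure-discard command to the flash controller), which is exactly where the driver bugs the paper found come in.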

There are a few caveats. The study only covers Android 2.3 through 4.3; it is unclear to me whether the situation is improved in newer versions of Android. They don’t discuss device encryption at all—this is at least available in all versions studied, and should completely mitigate the problem provided that a factory reset successfully wipes out the old encryption keys, and that there’s no way for unencrypted data to survive the encryption process. (Unfortunately, the normal setup flow for a new phone, at least in the versions I’ve used, asks for a bunch of sensitive info before offering to encrypt the phone. You can bypass this but it’s not obvious how.) It would be great if someone tested whether encryption is effective in practice.

Beyond the obvious, this is also an example of Android’s notorious update problem. Many of the cell carriers cannot be bothered to push security updates—or any updates—to phones once they have been sold. [1] They are also notorious for making clumsy modifications to the software that harm security, usability, or both. [2] Here, that means driver bugs that prevent the core OS from erasing everything it meant to erase, and a failure to deploy patches that made the secure-erase mechanism work better. Nobody really has a good solution to that. I personally am inclined to say that neither the telcos nor Google should be in the business of selling phones, and conversely, the consumer electronics companies who are in that business should not have the opportunity to make any modifications to the operating system. Whatever the hardware is, it’s got to work with a stock set of drivers, and the updates have to come direct from Google. That would at least simplify the problem.

(Arguably Google shouldn’t be in the OS business either, but that’s a more general antitrust headache. One thing at a time.)