Cormac Herley is a principal researcher at Microsoft Research. His work focuses on data and signal analysis problems that reduce complexity and remove pain points for users — other topics he’s explored include why so many scammers say they are from Nigeria and the shoddy methodology behind cybercrime’s scary numbers. In this interview, he explores why a lot of long-held security advice isn’t as effective as believed and discusses other considerations for building security into policies and practices.
Question: Your paper, “Unfalsifiability of Security Claims,” makes security seem as elusive as a unicorn. Why?
Herley: The main point is that you can’t really ever point at something and be sure that it’s secure. You can be sure that something is not secure by observing things that go wrong. But you can’t ever be sure that something is truly secure: that it will withstand attacks in the future.
A lot of people acknowledge this, but overlook the direct consequence: you can’t ever show that something is not necessary for security. If I tell you something is insecure, you can’t ever prove me wrong. You can claim: “My computer is really locked down. The best security people have looked at it and said it’s impenetrable.” And if I respond that it’s insecure, there’s nothing you can show to prove me wrong.
A consequence is that when you say something is necessary for security, it is unfalsifiable. In order to falsify the statement, “you must do X to be secure,” you have to produce something secure that doesn’t do X (and, as noted, you can never be sure that anything is secure). For example, I am immune to contradiction if I say, “In order to be secure, your password should be at least eight characters long and contain upper, lower, and special characters.” There’s no possible evidence to show me wrong, not because the statement is right, but because it’s unfalsifiable.
What’s the big deal? Falsifiability is just a way of ensuring that you’re open to feedback. If you look at the history of passwords in particular, we got a lot of stuff wrong. That’s to be expected; getting things wrong sometimes is unavoidable. What is really bad is that it took us decades to figure out that we were wrong; we were cut off from feedback, and that was an entirely avoidable error. The problem is that we recommended and emphasized certain practices based on unfalsifiable claims. We got it wrong, and there was no way of showing that we got it wrong, so we continued in error for, essentially, several decades.
Question: So if the old advice is fundamentally flawed, what should, say, enterprise IT managers do instead to build security into their policies and practices?
Herley: The best way to figure out if you’re on the wrong path is to try to describe what it would take to convince you that you’re wrong. What would that evidence look like?
This is the way people do things in science. We believe that Newton’s laws are true, but physicists have no difficulty whatsoever telling you what the evidence against them would look like. They’ll say, “Here’s the experiment you should do to convince me that we’re wrong about Newton.” If you can’t describe the evidence that would change your mind, there is a real problem with the policy or practice.
Question: Would an example be somebody who says, “Cryptography will let me sleep at night,” and then you bring up the possibility of quantum computing breaking that encryption within the next decade?
Herley: Sure. But a better example might be that cryptography doesn’t do any good whatsoever if your endpoint is corrupted. So I can encrypt the connection between here and the bank, but if the bad guy is sitting on my computer, you can do all the crypto in the world, and it makes no difference.
Similarly, your password can be 20 characters long and contain every random Unicode character you want. And you can change it four times a day. But if a bad guy has a keylogger on your machine, none of this makes any difference whatsoever.
The key point is that for every claim about security, you have some set of assumptions under which it works. It’s really, really, really important to understand accurately what those assumptions are. With crypto, you don’t have to wait until quantum computing happens to start worrying. There have been very successful side-channel attacks.
Question: In “Sex, Lies, and Cyber-Crime Surveys,” you and your colleague, Dinei Florêncio, argue that the amount of money lost to cybercrime is overblown because of the dubious methodology used to quantify the impact. Give us an overview of the paper’s main conclusions. And what prompted the paper in the first place? Did you and Dinei just get tired of all the media hype about cybercrime?
Herley: That’s a much simpler and less-abstract paper than the one on falsifiability. Essentially it’s that a lot of the numbers we’ve seen about how much cybercrime is costing are junk numbers. They are produced in a way that just does not allow you to conclude anything about the size of the cybercrime problem.
Dinei and I kept seeing these eye-popping numbers, and we were just a little disbelieving. For example, there was one survey that said identity theft cost, I think, $47 billion in 2004. That’s a lot of money. I think Microsoft made something like $80 billion last year, but you can understand how: a lot of people run Windows and Office. A lot of people use our stuff.
So who’s getting that $47 billion? Who’s losing it? We started digging into it, and essentially our conclusion was that these numbers were produced with a methodology that is just laughably bad. You can’t have any confidence whatsoever that the number they published is even within a factor of 100 of the correct answer. The same was true for the other cybercrime surveys we looked at.
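The survey failure the paper describes, where a handful of unverified self-reported losses dominate an extrapolated total, can be sketched with invented numbers (the population, sample size, and loss figures below are purely illustrative, not from any actual survey):

```python
# Illustrative only: how linear extrapolation from a small survey sample
# lets one outlier response dominate a population-wide loss estimate.

POPULATION = 200_000_000   # hypothetical adult population
SAMPLE_SIZE = 1_000        # hypothetical survey sample

# 999 respondents report no loss; one claims a $50,000 loss
# (a typo, an exaggeration, or a genuine but unrepresentative case).
losses = [0.0] * 999 + [50_000.0]

# Standard survey extrapolation: scale the sample mean up to the population.
estimate = sum(losses) / SAMPLE_SIZE * POPULATION
print(f"Estimated national loss: ${estimate:,.0f}")
```

A single unverifiable answer moves the headline number by $10 billion, which is why such estimates can be off by orders of magnitude: the total is hostage to whichever respondent reports the largest loss.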
But that’s not the same as saying cybercrime isn’t a big deal or that it’s not an important problem. Look at spam. There finally have been one or two good studies about how much spammers really make, and it’s less than $100 million globally. The damage that spam does is orders of magnitude larger than that if you add up the inconvenience of all of us having our inboxes cluttered with this stuff.
It’s a mistake to equate the amount of money that bad guys make from something like identity theft or phishing with the harm that it actually causes. The harm is the time and effort and energy we spend in fighting the stuff, as well as the losses that are not measured monetarily.
Question: One form of cybercrime is cyberexploitation: basically business espionage, often by foreign powers. The National Security Agency chief calls it “the greatest transfer of wealth in history.” Is that overblown, too?
Herley: Short answer: yes. The first thing he says is that he’s quoting somebody else’s numbers. He says, “intellectual property theft,” and he just pulls out some number: $250 billion. And the other number that he gave — $114 billion or something like that — came from a survey report that I’m familiar with. In fact, it suffers from exactly the problems we looked at in “Sex, Lies, and Cyber-Crime Surveys.”
Some of these things are just inherently difficult to measure. There definitely is cyberespionage. There definitely is cybercrime. There definitely is ransomware. But it’s very difficult to measure precisely how much of it is going on, and it’s even more difficult to put dollar numbers on it. For example, intellectual property is extremely difficult to value.
What’s more concerning to me is the lack of curiosity in pushing back and asking, “Where do these big numbers come from?” And really, the largest transfer of wealth in history? Bigger than, say, the gold, silver, and other resources that companies and countries have stripped out of places in Africa and South America over the years? It’s disappointing that people who should know better will repeat claims like that and abdicate any responsibility for checking that the methodology passes basic sanity checks.