Michael Hicks on Why Building Security In Requires New Ways of Thinking


Michael Hicks is a professor in the Computer Science Department and the University of Maryland Institute for Advanced Computer Studies (UMIACS) at the University of Maryland, College Park, where he also helps direct the Programming Languages at University of Maryland (PLUM) lab. He’s a steering committee member for the IEEE Secure Development Conference (SecDev) 2017, which will be held 24–27 September in Cambridge, Massachusetts. In this interview, he explores some underappreciated security fundamentals.

Question: You’ve been working on a “build-it, break-it, fix-it” contest for software security. What is that?

Hicks: That contest was motivated by the idea that we very rarely assess security mechanisms end to end. We were curious to see whether, if we trained students in security principles, they could actually apply those principles when building software systems in a competitive setting.

Typically, software security contests are of the attack variety. The organizers produce some buggy software that has vulnerabilities, and the participants try to find and exploit those vulnerabilities. But that focus ignores a key question: How do we build software that avoids those vulnerabilities in the first place?

So our contest melds a security-oriented programming competition with an attack-oriented contest. We get to see what techniques were successful and then do some scientific analysis to figure out why. More information about the contest is available at https://builditbreakit.org.

Question: What are some lessons learned?

Hicks: Over the past 3 years, we’ve had about 160 teams participate in the contest. Most of them have been professionals participating part-time who took online courses offered through Coursera as part of the University of Maryland’s Cybersecurity Specialization. The average age was 35, with 10 years of development experience.

After analyzing data from contests held in 2015, we found that people who programmed their systems using C and C++ were far more likely to have a security vulnerability in their system than people who programmed in so-called type-safe languages, like Java and Google’s Go. On the other hand, we found that C and C++ programs performed better on average.
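To make the type-safety distinction concrete, here is a minimal Go sketch (an illustration, not code from the contest) of the kind of attacker-influenced over-read that C or C++ would silently permit but that a type-safe language stops at run time:

```go
// Minimal illustration of a memory-safety guarantee: in C, reading past the
// end of a buffer silently returns whatever bytes happen to sit next to it
// in memory; in Go, the same access is caught at run time.
package main

import "fmt"

func main() {
	record := []byte("secret-key-material")

	// Suppose an attacker-controlled length field asks for more bytes than
	// the record actually holds.
	requested := 64

	if requested > len(record) {
		// The safe response: refuse rather than over-read.
		fmt.Println("requested length exceeds record size; rejecting")
		return
	}

	// Even if the bounds check above were forgotten, Go would panic here
	// ("slice bounds out of range") instead of leaking adjacent memory.
	fmt.Printf("%s\n", record[:requested])
}
```

The panic is not graceful, but it turns a silent information leak into an immediately visible failure.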

Another interesting finding is that teams that had more diverse programming language experience — knowing four or five languages rather than one or two — tended to produce higher quality code. They only used one of the languages for their particular submission, but the fact that they had broad programming expertise seemed to help them produce better software.

Shorter programs were more secure. We hypothesize that’s because those programs tended to rely more on libraries, so there was less hastily written code that could harbor bugs, and the code that was written may also have been less convoluted. All of these results are presented in detail in a paper published in 2016.

Next, we’d like to dig deeper into these findings. For example, we are devising an experiment to compare, in a controlled setting, programmers who are trained in C and C++ with programmers who start that way but are then taught a safer programming language. We’re thinking about using Go for that comparison. With this setup, we can more confidently establish causation rather than mere correlation.

Question: The IEEE Center for Secure Design says in a recent paper that many breaches are due to design flaws — in other words, not simple bugs, but something more fundamental in the software’s architecture. What are some tips and best practices for minimizing flawed designs so they don’t create vulnerabilities?

Hicks: I agree that design is really important. My Coursera course on software security has a unit on secure design, and I borrow liberally from the report you mention. Also, for the build-it, break-it, fix-it contest problems, we tried to come up with software specifications for the participants that included security goals that affect design.

There are a lot of questions developers should consider. For example, do I need to use cryptography? What kind of cryptography satisfies the goal I have in mind? What are the attackers’ capabilities, and how do I defend against them? Many of the bugs that people found and exploited in the contest arose because teams failed to match requirements with appropriate designs.

For example, we asked students to build a simulation of an ATM communicating with the bank. They had to worry about the client (the ATM) and the server (the bank), and they had to worry about a man-in-the-middle, that is, an adversary who could observe, corrupt, and insert messages. So they had to think, “I need to encrypt my communication, so the observer can’t learn bank account balances and such.” But they also needed to consider other types of attacks, such as replaying a withdrawal message to debit the account twice.
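One way to address both concerns, sketched below in Go under the assumption of a pre-shared key and AES-GCM authenticated encryption (choices the contest did not mandate), is to bind a strictly increasing sequence number to every message so the bank rejects anything it has already seen:

```go
// Sketch: authenticated encryption plus a sequence number so that a replayed
// withdrawal message is rejected rather than debited twice. Assumes the ATM
// and bank already share a key; a real protocol would also need key
// management, message framing, and error handling.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/binary"
	"fmt"
)

// seal encrypts msg, binding it to sequence number seq. The sequence number
// is embedded in the GCM nonce, which must never repeat for a given key.
func seal(aead cipher.AEAD, seq uint64, msg []byte) []byte {
	nonce := make([]byte, aead.NonceSize()) // 12 bytes for GCM
	binary.BigEndian.PutUint64(nonce[4:], seq)
	// Prepend the nonce so the receiver can recover seq.
	return append(nonce, aead.Seal(nil, nonce, msg, nil)...)
}

// open decrypts and insists that the sequence number is strictly greater than
// the last one accepted; otherwise the message is a replay and is rejected.
func open(aead cipher.AEAD, lastSeq *uint64, ct []byte) ([]byte, error) {
	ns := aead.NonceSize()
	if len(ct) < ns {
		return nil, fmt.Errorf("message too short")
	}
	nonce, body := ct[:ns], ct[ns:]
	seq := binary.BigEndian.Uint64(nonce[4:])
	if seq <= *lastSeq {
		return nil, fmt.Errorf("replayed or stale message (seq %d)", seq)
	}
	msg, err := aead.Open(nil, nonce, body, nil)
	if err != nil {
		return nil, err // tampered or corrupted ciphertext
	}
	*lastSeq = seq
	return msg, nil
}

func main() {
	key := make([]byte, 32) // all-zero key, for the demo only
	block, _ := aes.NewCipher(key)
	aead, _ := cipher.NewGCM(block)

	var lastSeq uint64
	withdraw := seal(aead, 1, []byte("withdraw $100 from account 42"))

	if msg, err := open(aead, &lastSeq, withdraw); err == nil {
		fmt.Println("bank accepts:", string(msg))
	}
	// A man-in-the-middle replays the same ciphertext.
	if _, err := open(aead, &lastSeq, withdraw); err != nil {
		fmt.Println("bank rejects:", err)
	}
}
```

Because GCM authenticates the ciphertext, the same check also catches an attacker who tries to corrupt or splice messages rather than just replay them.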

This is a case where all the code can be programmed correctly, but it might be the wrong code because the design was not carefully considered. So my high-level advice (and this is not new at all) is that you really need to be thinking about your security goals right from the beginning, when architecting your system. That IEEE paper certainly has a lot of great advice.

Question: Some of your research involves protecting against side-channel attacks. Are these increasingly common and, if so, why? And are they particularly tough to detect and thwart?

Hicks: Several implementations of cryptographic algorithms have been shown to be vulnerable to side-channel attacks, and that’s a serious concern. Even when the algorithm itself is correct, a hacker can observe how the algorithm’s implementation operates, for example by measuring how long it takes to encrypt and decrypt. For vulnerable implementations, such observations can reveal information about the secret key used in the encryption, and eventually reveal the key.
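A classic instance of such a leak is a secret comparison that bails out at the first mismatching byte. The Go sketch below (a hypothetical tag check, not one of the implementations studied under STAC) contrasts an early-exit comparison, whose running time reveals how many leading bytes of a guess are correct, with the constant-time comparison in the standard library’s crypto/subtle package:

```go
// Sketch of a timing side channel in a tag check. leakyEqual returns faster
// the earlier the first wrong byte appears, so an attacker who can measure
// response times can recover the secret tag byte by byte; the constant-time
// version does the same amount of work no matter where the mismatch is.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// leakyEqual exits at the first mismatching byte: its running time depends on
// how many leading bytes of the guess are correct.
func leakyEqual(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// constantTimeEqual always inspects every byte before answering.
func constantTimeEqual(a, b []byte) bool {
	return subtle.ConstantTimeCompare(a, b) == 1
}

func main() {
	key := []byte("demo-only key")
	msg := []byte("withdraw $100 from account 42")

	mac := hmac.New(sha256.New, key)
	mac.Write(msg)
	tag := mac.Sum(nil)

	guess := make([]byte, len(tag)) // an attacker's all-zero first guess

	fmt.Println("leaky check:        ", leakyEqual(tag, guess))
	fmt.Println("constant-time check:", constantTimeEqual(tag, guess))
}
```

Timing is only one such channel; similar reasoning applies to leaks through caches, power consumption, and memory-access patterns.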

So we have to rethink how we write implementations of these algorithms (and others that manipulate secret information) to avoid those side channels. I’m doing research that’s part of the DARPA Space/Time Analysis for Cybersecurity (STAC) program. The program was started because DARPA believes side channels might become an even more serious issue in the future.

Question: One school of thought says open source software is inherently more secure than closed source. The reasoning is that because many people can scrutinize the code, there is a better chance that someone will ferret out a vulnerability. What’s your take?

Hicks: I think that’s actually a great research question. I’m not sure whether we’ll ever get a definitive answer, though. You could test it by looking at some proprietary algorithm that does the same thing as an open source version and then trying to measure which one is more secure or more resilient.

You can make arguments both ways. For example, OpenSSL has been open source for a long time. But the Heartbleed vulnerability was a pretty significant bug in OpenSSL. So there you go: a vulnerability was sitting there, available for all to see, undetected literally for years before someone finally discovered it. Perhaps in a closed source setting, people might feel more accountability.

I suppose one more thing in favor of open source is that many cutting-edge research efforts evaluate their efficacy on open source software. For example, many of the recent efforts to discover side channels in cryptographic code have been applied to open source implementations of those algorithms. The Google OSS-Fuzz and Coverity Scan projects are examples where cutting-edge vulnerability-finding tools are applied to open source software at scale.