Sean Peisert on the Value of Failed Experiments in Cybersecurity


The “Learning from Authoritative Security Experiment Results (LASER)” workshop series was established in 2012 with the purpose of publishing results for “properly conducted experimental (cyber) security research, including both successful as well as unsuccessful experiments.” Here, 2016 program chair Sean Peisert, a staff scientist at Lawrence Berkeley National Laboratory, associate adjunct professor of computer science at the University of California, Davis, and chief scientist for cybersecurity for the Corporation for Education Network Initiatives in California, discusses the importance of failed experiments and evidence-based approaches in cybersecurity.

Question: The LASER workshop encourages exploration of both positive and negative results of experiments in cybersecurity. Why include negative results?

Peisert: At a high level, failed experiments can help us develop better methodologies by illuminating procedures that might produce invalid results—for example, a procedure that often fails to control for a particular variable. And even when a research idea is ultimately a conceptual success, some of the preliminary experiments may have had negative or simply unexpected results; the major conference or journal version of the paper will typically include only the parts of the experiments that succeeded—never the rocky path to get there.

Failed experiments also help demonstrate which ideas—perhaps widely held beliefs—aren’t necessarily backed by strong evidence. Although results presented at LASER might not disprove those beliefs (proving a negative generally isn’t possible), they can show a significant preponderance of evidence.

In all of these cases, as long as the methodology can be clearly explained, these unexpected or negative insights can help other researchers know what to focus on or avoid in their own experiments.

Question: The workshop also aims to “foster a dramatic change in the paradigm of cybersecurity research and experimentation.” Why is there a need for such a change?

Peisert: There is a general sense in the computer security research community that there is a lot wrong with how we’re doing experiments. But we’re missing an opportunity to improve experimental practices because methods and negative results aren’t being published. And even when positive results are published, we often don’t have enough information about the methodology, including the processes, data, and source code, to reproduce the experiment.

Question: Does taking a scientific approach to cybersecurity support such a change in paradigm?

Peisert: To be clear, I am on the side of science, and I think bringing science to computer security is a laudable goal. However, I also have mixed feelings about the notion of a “science of computer security.” My sense is that ever since that term entered our lexicon (about eight years ago), we’ve spent more time arguing about whether something does or doesn’t constitute “science” than making actual progress. This probably shouldn’t be surprising. As computer scientists, we tend to see things in a binary way—but can’t we have a spectrum of options that help advance knowledge? Other disciplines don’t seem to have this problem. Consider medicine, which these days focuses on “evidence-based” treatments.

We need to develop rules and processes for producing meaningful evidence for understanding when systems are more or less secure, when and how humans who interact with computers might be affecting the security of a system, what methods we can use to identify the most damaging attacks, and so on. If we want to call that process “science,” I’m OK with that.

Indeed, some work is already done in a way that many people would consider scientific. I personally think that the folks doing what we call “usable security” and “human–computer interaction” studies most often set the gold standard for computer security experiments. But at the same time, I don’t want the vocabulary to get in the way of meaningful progress, and maybe that means we need to start with evidence-based approaches, as is often done in medicine, rather than focusing exclusively on ones that fit a definition of “science.”

Question: You’re also involved with the Open Science CyberThreat Profile (OSCTP). Can you talk a bit about that effort and its asset/impact-oriented approach?

Peisert: Many “open” (that is, unclassified) scientific research efforts don’t have the capabilities to secure their systems. This is likely because many such projects have very small (or nonexistent) IT staffs relative to their numbers of domain scientists, such as astronomers, biologists, chemists, and oceanographers, and the IT people they do have might not be well versed in computer security. The OSCTP effort tries to help scientists and IT personnel involved with “open science” projects determine their organizations’ most important network-connected assets—such as data servers, high-performance computing machines, radio telescopes, and particle accelerators—and to better understand the risks to those assets. This way, they can be better informed about their exposure to potential attacks and can start engaging with the right people to mitigate those risks.
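To make the asset/impact-oriented approach concrete, here is a minimal illustrative sketch in Python. It is not the actual OSCTP methodology or tooling; the asset names, the impact and exposure scales, and the scoring rule are hypothetical stand-ins for the kind of inventory-and-ranking exercise described above.

```python
# Illustrative sketch only: the asset categories, 1-5 scales, and
# scoring rule below are hypothetical, not the actual OSCTP profile.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str            # e.g., "data server", "radio telescope"
    mission_impact: int  # 1 (minor) .. 5 (mission-ending) if compromised
    exposure: int        # 1 (isolated) .. 5 (directly internet-facing)

def risk_score(asset: Asset) -> int:
    """Naive impact-times-exposure ranking; a real profile weighs far more factors."""
    return asset.mission_impact * asset.exposure

inventory = [
    Asset("public data server", mission_impact=3, exposure=5),
    Asset("HPC cluster", mission_impact=5, exposure=3),
    Asset("accelerator control system", mission_impact=5, exposure=2),
]

# Rank assets so scarce IT/security effort goes to the highest-risk ones first.
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name}: risk={risk_score(asset)}")
```

Even a coarse ranking like this can give a small IT staff a defensible starting point for deciding which assets to protect first.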

Question: Earlier this year, the Corporation for Education Network Initiatives in California (CENIC) and the Energy Sciences Network (ESnet) announced a new partnership. What is the focus of this initiative?

Peisert: ESnet is one of the most important national research and education networks, and CENIC is the regional research and education network for the state of California. CENIC serves more than 20 million users, encompassing the vast majority of K–20 students in California, together with educators, researchers, and other vital public-serving institutions. Because ESnet and CENIC have a key vantage point on security for the institutions they connect, they decided to join forces to explore and share ways to collectively better secure their own networks, as well as to help provide security for their networks’ users.

Question: How do the other efforts you’re currently involved in tie in with your work on these projects?

Peisert: Over the last few years, I’ve been focusing on security for high-performance computing environments and network-connected scientific instruments, and for electrical power grid systems. With cyber-physical systems—including both network-connected scientific instruments and power grid systems—you can very often just look at a network-connected physical system (a rotating generator, for example) and see whether it’s doing what it’s supposed to. This is in contrast to computer systems so large and complex that the activities inside them are opaque. After many years of feeling frustrated by that opacity, I find it refreshing to focus on systems for which “ground truth” about their security state is a bit more readily observable.
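As a hedged illustration of what checking that kind of physical “ground truth” might look like, the short Python sketch below compares an independently measured physical quantity against its expected value. The read_tachometer() function, the 60 Hz setpoint, and the tolerance are all invented for the example; a real deployment would sample out-of-band instrumentation.

```python
# Minimal sketch: checking a cyber-physical device against physical ground truth.
# The read_tachometer() source, 60 Hz setpoint, and tolerance are hypothetical.

EXPECTED_HZ = 60.0   # nominal frequency for a synchronous grid generator
TOLERANCE_HZ = 0.5   # how far the measurement may drift before we flag it

def read_tachometer() -> float:
    """Stand-in for an independent physical sensor reading (hypothetical)."""
    return 59.97  # e.g., sampled from out-of-band instrumentation

def consistent_with_ground_truth() -> bool:
    """Flag a mismatch between observed physical behavior and the expected state.

    Unlike introspecting an opaque software system, the physics provides a
    directly observable reference point to compare against.
    """
    measured = read_tachometer()
    return abs(measured - EXPECTED_HZ) <= TOLERANCE_HZ

if __name__ == "__main__":
    status = "consistent with" if consistent_with_ground_truth() else "DEVIATES from"
    print(f"Generator speed {read_tachometer():.2f} Hz {status} expected {EXPECTED_HZ} Hz")
```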