Fred Schneider on Building a Body of Cybersecurity Laws


Fred Schneider is the Samuel B. Eckert Professor of Computer Science at Cornell University. His research ranges from early work on designing fault-tolerant systems to current collaborations on public policy and legislation. In a 2012 paper, Schneider made the case for a science of cybersecurity. In this interview, he discusses building a set of cybersecurity laws that researchers can draw on to design secure systems.

Question: First of all, why a science of cybersecurity as opposed, for example, to a theory or practice of cybersecurity?

Schneider: Let’s start with the definition of a science. In the old days, a science was any body of knowledge. The term then evolved to denote any body of truths that could not be refuted by experimentation. In the middle of the last century, we got to the point where scientists were doing things that one couldn’t validate or refute with experiments. That brought us to the contemporary meaning of the term, which is a body of laws that allow you to predict some phenomenon of interest.

By this most recent definition, computer science should be seen as a “science”–witness the many results that bound fundamental costs for solving certain problems or that characterize what things are impossible to compute, independent of the details of the computer or the architecture or the programming language. These laws transcend particular implementations and also transcend technological developments. If nothing else, they can help us avoid doing all of the work of building something only to discover it can’t work.

Question: How will having a body of laws change approaches to building secure systems?

Schneider: Until now, security has tended to be reactive. It’s a game of cat and mouse–somebody discovers a vulnerability and then somebody else devises a patch. But that approach doesn’t really help the people who build new systems, because to make design decisions they need to understand the consequences of their choices. By way of analogy, when people build bridges, they calculate the properties of the structure from the design and those calculations inform how thick the girders need to be and whether the particular architecture–suspension bridge or arch–makes sense.

Because we don’t have the capability to predict the implications of our cybersecurity design decisions, we are exposed to those implications only after the system is deployed and successfully attacked. To build systems that will resist attack, we need to understand the consequences of our architecture, design, and implementation decisions, which is to say we need a body of laws that predict the phenomena of interest. The only viable long-term approach for building secure systems is to understand fundamental principles that–independent of the details of the computer or the architecture or the programming language–inform those choices. A science of cybersecurity is what would provide those fundamental principles.

Question: In defining these laws, are you starting from scratch, or are there existing principles that lay the foundation?

Schneider: You don’t get up in the morning and say, “I want to invent a law.” I originated the term “science of security” some years ago, but for quite some time there have been various results in the security area that, in retrospect, should be seen as cybersecurity laws. I’ve found that the way to make progress on the science is by taking a somewhat broader perspective when you develop a solution. The science comes from the way you look at that new defense or whatever you have developed. You ask the next questions: What are the other contexts where it will work? What are the general kinds of assumptions it requires?

Question: What kind of larger discovery might come from developing a new defense?

Schneider: The culture of computer security research has been: discover a new attack, find a corresponding defense, and write a paper about it. That will not lead to a science. Instead, you can also ask what a new defense says about the bigger picture. For example, does this new defense belong to a broader class? What is that class? Once I’ve identified a defense as being in some class (and often I will be forced to define the class), will I know something about what it can and cannot do more generally than I would have realized just by looking at the defense itself?

Question: Can you give an example of how a discovery might lead to a law?

Schneider: Consider monitoring–one of the classic ways to build a secure system. Before a reference monitor allows the next step to proceed, the monitor determines whether that step is consistent with some security policy. For example, say that some user tries to read a file; a reference monitor here would check that this user has the appropriate access. Despite monitoring being prevalent in security, no one had considered what classes of policies it could enforce. For example, can monitoring be used to decide whether a particular piece of information can flow to a particular person? Can monitoring make sure that everyone gets their fair share of computing time?
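To make the mechanism concrete, here is a minimal Python sketch of a reference monitor mediating file reads. The policy table, user names, and function names are illustrative assumptions, not anything prescribed in the interview.

```python
# A minimal reference-monitor sketch (illustrative; the policy
# representation and all names are assumptions, not from the interview).

# A toy access-control policy: which operations each user may perform
# on each file.
ACCESS_POLICY = {
    ("alice", "/data/report.txt"): {"read"},
    ("bob", "/data/report.txt"): {"read", "write"},
}

def monitored_read(user: str, path: str) -> str:
    """Reference monitor: check the policy before the step proceeds."""
    if "read" not in ACCESS_POLICY.get((user, path), set()):
        raise PermissionError(f"{user} may not read {path}")
    with open(path) as f:   # the read happens only if the check passed
        return f.read()
```

The essential point is that every security-relevant step is routed through the check, so the monitor sees each step before it happens and can refuse it.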

In the late 1990s, I proved a result that defined the exact class of policies you could enforce with monitoring. And there were some surprises. You could not enforce information flow or availability guarantees, but you could enforce so-called “safety properties”–that is, properties that rule out some identifiable bad event occurring during a single execution. That then is an example of a science of security law. Moreover, it led to the development of new defenses, such as inlined reference monitors.
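To see what enforcing a safety property looks like, here is a small Python sketch of a security automaton for the classic “no send after reading a secret” policy. The event names and encoding are made up for illustration; only the underlying idea comes from the enforceable-policies result described above.

```python
# Sketch of a safety-property monitor: "no network send after a secret
# has been read." The event encoding is an illustrative assumption.

class NoSendAfterRead:
    """A two-state security automaton; it halts the run on violation."""
    def __init__(self):
        self.read_secret = False            # automaton state

    def step(self, event: str) -> None:
        if event == "read_secret":
            self.read_secret = True
        elif event == "send" and self.read_secret:
            # A safety property rules out an identifiable bad event, so
            # the monitor can stop execution the moment it occurs.
            raise RuntimeError("violation: send after reading a secret")

monitor = NoSendAfterRead()
for event in ("send", "read_secret", "send"):  # the last step violates
    monitor.step(event)                        # ...and raises here
```

Because a violation is witnessed by a finite prefix of a single execution, the monitor never needs to predict the future; that is exactly why monitoring suffices for safety properties and not for policies like information flow.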

Question: What other fields does the science of cybersecurity draw from?

Schneider: Most existing cybersecurity laws draw from programming language semantics–the formalism that helps us understand what programs mean and how program text can be analyzed.

Another piece of the picture is likely to come from information theory. Information theory has been used in cryptography, but it hasn’t yet had a huge impact in computer security. It has enormous potential, though. Game theory also seems quite relevant and will likely play a significant role in a science of security.

Question: How does your current research inform your broad goal of developing a science of cybersecurity?

Schneider: My current research investigates the potential that adding tags to information affords for enforcing policies. There’s a long-standing understanding in computer science (which actually dates back to Alan Turing) that code and data are equivalent–a program is just a piece of data to another machine. In my recent research, I’ve been looking at tags as a way to enforce information-flow confidentiality and integrity. Here, tags (a form of data) are replacing monitors (a kind of program) for policy enforcement. Confidentiality, I said earlier, could not be enforced simply with monitoring. So here’s a place where data and programs might not be interchangeable, which is interesting all by itself.

I’m also looking at the use of tags for what is termed “use-based privacy”–a regime that says privacy is concerned with ensuring that information is used only in ways originally prescribed. Since use-based privacy is a potentially realistic definition for privacy, tags become an important mechanism.
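As a rough illustration of the tag idea (a sketch under simple assumptions, not Schneider’s actual mechanism), the Python fragment below attaches a confidentiality tag to data, propagates it through computation, and enforces the policy at the point where data would be released. The two-level secret/public scheme and all names are hypothetical.

```python
# Toy sketch of tag-based information-flow tracking: values carry a
# confidentiality tag that propagates through computation, and output
# is permitted only when the tag allows it. Illustrative assumptions
# throughout; not a description of Schneider's research.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    value: int
    secret: bool = False    # the tag: is this value confidential?

def add(a: Tagged, b: Tagged) -> Tagged:
    # Tags propagate: the result is secret if either input was.
    return Tagged(a.value + b.value, a.secret or b.secret)

def publish(x: Tagged) -> None:
    # Enforcement is driven by the data's tag at the release point,
    # rather than by a monitor watching the program's steps.
    if x.secret:
        raise PermissionError("cannot publish tagged (secret) data")
    print(x.value)

salary = Tagged(90_000, secret=True)
bonus = Tagged(5_000)
publish(bonus)               # allowed: no secrecy tag
publish(add(salary, bonus))  # blocked: the tag propagated to the sum
```

The contrast with a reference monitor is visible in `publish`: the decision depends on what the data carries with it, not on which sequence of steps the program took to get there.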