Farinaz Koushanfar on Securing Hardware in the Emerging Physical/Logical World


Farinaz Koushanfar is a professor and Henry Booker Faculty Scholar in the University of California, San Diego’s Electrical and Computer Engineering Department, where she directs the Adaptive Computing and Embedded Systems Lab. Her research addresses several aspects of efficient computing and embedded systems, with a focus on hardware and system security, real-time/energy-efficient big data analytics on small devices, design automation, and privacy-preserving computing. Koushanfar serves as an associate partner of the Intel Collaborative Research Institute for Secure Computing, helping to develop solutions for the next generation of embedded secure devices. Here she talks to us about security issues and solutions at the intersection of the physical and logical worlds.

Question: What role does piracy play in increasing the vulnerability of hardware systems?

Koushanfar: Piracy is a big problem, and a hard problem to address. Every year, more transistors are fabricated than in all previous years combined. Just as a comparison point, the number of transistors fabricated in 2017 is estimated to be about two orders of magnitude greater than the total number of ants on Earth. These transistors rarely come with built-in protection. Current methods for detecting piracy are costly and don’t make sense for cheaper legacy devices. For example, say a pirate repackages or recycles some old chips that were trashed and sells them for $50. Does it make sense financially to try to detect every $50 device on the market?

Aside from recycling, another source of piracy is overbuilding. Most chip design houses are fabless, particularly those in the United States. Designers make the blueprint of the chip and send it offshore to a country that has a fab. They might order 1 million copies of the chip, but nothing prevents the fabricator from making another million copies and selling them on some other market. Pirating can also occur after chips are tested. Typically, 10 to 15 percent of chips are found defective during testing. This doesn’t mean that a chip is totally dysfunctional, just that it has reliability problems. If someone gets hold of and sells chips that failed one or two tests, they’re introducing reliability problems into the system. Another issue is that a lot of devices from the same family will likely share keys, so if someone gets possession of some of these devices and cracks the keys, they could use the keys to access the others.

Question: What approaches can be used to protect against piracy?

Koushanfar: Both passive and active approaches are possible, but passive methods, such as monitoring or testing chip parameters after production, are typically more costly and harder to enforce. My team has invented methods that allow chip designers, for the first time, to actively and uniquely obfuscate and control each chip that’s manufactured in offshore countries. Unique control had been a challenge prior to our work, since for scalability and cost reasons, all the chips are fabricated from the same blueprint (mask). Imagine you have a stamp, and every time you stamp it on a piece of paper, you get a slightly different print (fingerprint). We take this idea a bit further by integrating the fingerprints within each chip’s functionality, thus making obfuscated chips with unique functional locks. Basically, manufacturing variations create the same effect as a fingerprint, even though all the chips come from the same mask. We created a method, called hardware metering, for extracting these kinds of analog variations into digital codes and tying those codes to the chips’ functionality. Then, even if the manufacturer builds extra chips, they can’t be activated without the specific codes. Similar hardware-based locking or obfuscation mechanisms can be used to control and attest the software and data running on the IC based on the chip locks or obfuscated properties.
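The metering idea can be sketched in simplified software form. The snippet below is an illustrative toy, not the actual hardware-metering construction: a software random generator stands in for the chip’s manufacturing-variation fingerprint, and a hash-derived activation code stands in for the functional lock. The names `LockedChip` and `unlock_key` are invented for this sketch.

```python
import hashlib
import secrets


def chip_fingerprint() -> bytes:
    """Stand-in for a physical fingerprint: in silicon this comes from
    manufacturing variation, not a software RNG."""
    return secrets.token_bytes(16)


def unlock_key(fingerprint: bytes, designer_secret: bytes) -> bytes:
    """The designer derives the per-chip activation code from the
    fingerprint the foundry reports back after fabrication."""
    return hashlib.sha256(designer_secret + fingerprint).digest()


class LockedChip:
    """Toy model of a metered chip: functionality is gated on an
    activation code matching the fingerprint-derived key."""

    def __init__(self, designer_secret: bytes):
        self.fingerprint = chip_fingerprint()
        self._expected = hashlib.sha256(
            designer_secret + self.fingerprint).digest()
        self._active = False

    def activate(self, code: bytes) -> bool:
        self._active = (code == self._expected)
        return self._active

    def compute(self, x: int) -> int:
        if not self._active:
            raise RuntimeError("chip is locked")
        return x * x  # the intended functionality


secret = b"designer-only secret"
chip = LockedChip(secret)
# Each chip has a distinct fingerprint, so a code issued for one chip
# cannot activate an overbuilt copy.
assert chip.activate(unlock_key(chip.fingerprint, secret))
assert chip.compute(7) == 49
```

Because each fingerprint is unique, the foundry cannot activate extra chips without going back to the designer for a fresh code, which is the control point the metering scheme creates.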

Question: Do you have other projects at the intersection of hardware and software?

Koushanfar: Interestingly, there are problems on the other side of the software protection and data privacy spectrum that can leverage hardware design principles to become practicable. They can simultaneously guide the design of new secure hardware. For example, in a more recent project, called TinyGarble, we introduced the idea of building upon scalable hardware synthesis methodologies for privacy protection, achieving orders-of-magnitude efficiency gains over the prior art.

A classic problem in secure computing/cryptography at the software level addresses how to allow two parties to jointly compute a function without revealing their data to each other. The well-known example is the millionaires’ problem, where two parties can compare their wealth without revealing the amounts to each other. Andrew Yao introduced this concept in the 1980s, along with a protocol. For about 20 years, however, the protocol didn’t have any practical application because it was too compute-intensive to implement.

An essential part of garbling is converting the description of the function into Boolean logic. But conversion to logic is a task that hardware designers have been doing for more than 50 years. The contribution of our collaborative work to this field has totally changed the efficiency of these algorithms by bringing a digital designer’s perspective and building upon Boolean logic synthesis techniques. Our first set of techniques, which is more of a software contribution, aimed to show that you could use these hardware compilers to automatically interpret secure functions with unprecedented efficiency. Next, we realized fully secure processors in hardware, so we now have an actual processor that can run a garbled program on garbled data. It gives a final result that you can decrypt without knowing the details or the internals of the program or the data. Under certain assumptions, this hardware, which we call a GarbleCPU, is the first known instance of scalable leakage-resilient cryptography because it processes data in a provably garbled, obfuscated way.
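The core garbling idea can be illustrated with a single toy gate. This is a minimal sketch, not the TinyGarble implementation: the garbler encodes each wire’s 0/1 values as random labels and encrypts the gate’s truth table; the evaluator, holding exactly one label per input wire, can decrypt only one row and learns only the output label, never the underlying bits. The function names are invented for illustration, and the zero-padding check below is a simplification of the point-and-permute techniques real schemes use.

```python
import hashlib
import secrets

LBL = 16  # wire-label length in bytes


def H(a: bytes, b: bytes, n: int) -> bytes:
    """Hash two labels together; models the row-encryption key."""
    return hashlib.sha256(a + b).digest()[:n]


def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))


def garble_and_gate():
    """Garbler: pick two random labels per wire, then encrypt the
    output label for each of the four input combinations."""
    wa = [secrets.token_bytes(LBL) for _ in range(2)]  # labels for input a
    wb = [secrets.token_bytes(LBL) for _ in range(2)]  # labels for input b
    wo = [secrets.token_bytes(LBL) for _ in range(2)]  # labels for output
    table = []
    for a in (0, 1):
        for b in (0, 1):
            # Zero padding lets the evaluator recognize the one row
            # it is able to decrypt.
            pt = wo[a & b] + bytes(LBL)
            table.append(xor(H(wa[a], wb[b], 2 * LBL), pt))
    secrets.SystemRandom().shuffle(table)  # hide the row order
    return wa, wb, wo, table


def evaluate(label_a: bytes, label_b: bytes, table) -> bytes:
    """Evaluator: try every row; exactly one decrypts cleanly."""
    for row in table:
        pt = xor(H(label_a, label_b, 2 * LBL), row)
        if pt[LBL:] == bytes(LBL):
            return pt[:LBL]
    raise ValueError("no row decrypted")


wa, wb, wo, table = garble_and_gate()
out = evaluate(wa[1], wb[0], table)  # evaluate with inputs a=1, b=0
assert out == wo[0]                  # 1 AND 0 = 0, but only as a label
```

Composing many such gates garbles an entire Boolean circuit, which is exactly why logic synthesis, the hardware designer’s everyday tool, becomes the efficiency lever for the whole protocol.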

Question: What is the role of scalable domain-specific machine learning solutions in establishing trust?

Koushanfar: The central theme of my research lab is efficiency, scalability, and practicability. Several classic security and cryptography solutions are resource intensive, and despite their conceptual beauty, not easily adoptable in practice. The discrepancy between the physical resources and security is even more pronounced when the content and transactions scale.

For example, consider how much data we share with various companies and entities, such as Google and Facebook, and that for many contemporary businesses, not just these giants, their value is their customer base, i.e., how much customer data they possess. It’s customary for such organizations to do a lot of analytics on this data for various reasons, such as targeted advertising.

In domain-specific machine learning, we customize the learning algorithm to the data and to the machine’s architecture so that the learning can be done much more efficiently, while we also add security and privacy in a minimal number of places. The focus is to allow privacy-preserving computing, or computing on encrypted data, within limited resources. The idea is philosophically rather simple, since customized solutions are often more efficient than general-purpose ones. The challenge, however, remains in scaling customized secure solutions to several algorithms and platforms. This is why our research aims at making automated solutions that are robust and can be parametrically customized to various systems and platforms.

Question: What is your vision when it comes to creating a framework to support the Internet of Things (IoT)?

Koushanfar: Many classic security algorithms have been about logically securing things. As we move forward in the IoT era, we are talking not only about the logical world, but also the physical world. We need to integrate physical aspects. For instance, notions of security in this IoT world could be tied to location and time. The idea is that one could leverage or bootstrap the spatial and temporal correlation among things to develop new security protocols. For example, if devices are in physical proximity and have certain other properties that can be used in multifactor authentication, such as exposure to the same noise in a room, one could leverage this info to build a more secure system. It is also possible to use a signal’s time of travel to make a statement about physical proximity. This type of bootstrapping of physical and temporal properties can be used to authenticate devices. The random generation of keys from chip manufacturing variations is an example of how a physical aspect of the system helps us to generate something that supports logical security.
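The time-of-travel idea reduces to a simple physical bound: since no signal travels faster than light, a challenge/response round trip of t seconds places the responder within c·t/2 meters, which is the principle behind distance-bounding protocols. A minimal sketch, with hypothetical function names and parameters:

```python
C = 299_792_458.0  # speed of light in m/s


def max_distance_m(round_trip_s: float, processing_s: float) -> float:
    """Upper bound on a prover's distance, derived from a measured
    challenge/response round-trip time minus known processing delay.
    The signal covers the distance twice, hence the division by 2."""
    return C * max(round_trip_s - processing_s, 0.0) / 2.0


def within_proximity(round_trip_s: float, processing_s: float,
                     bound_m: float) -> bool:
    """Accept the device only if physics places it inside the bound."""
    return max_distance_m(round_trip_s, processing_s) <= bound_m


# A 100 ns round trip with 20 ns of processing delay bounds the
# responding device to within about 12 meters.
d = max_distance_m(100e-9, 20e-9)
assert 11.9 < d < 12.1
```

Note that this argument only upper-bounds distance: a slow response proves nothing, but a fast one is physically impossible to fake from far away, which is what makes it usable as an authentication factor.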

This is our current focus in IoT security, and it is where I envision many new algorithms for IoT security being developed: at the intersection of the physical and logical worlds. It’s not yet clear what IoT standards will emerge. But for anyone who is interested, this is the time to have an impact.