Alex Gantman on Finding the Sweet Spot Where Security and Practicality Converge


Alex Gantman is the Vice President of Engineering at Qualcomm, where he founded the Product Security Group and the Product Security Initiative. He is also the force behind Qualcomm’s Mobile Security Summit, a conference that gathers top security engineers and researchers to discuss the industry’s challenges and work toward solutions. He oversees product security support across all of Qualcomm’s business units and market segments, including mobile computing, networking, automotive, healthcare, smart home, wearables, and the Internet of Things. Here, Gantman discusses some of the security challenges he faces as an industry leader.

Question: What are some of the major security challenges the computer hardware industry is facing today?

Gantman: There is certainly no shortage of challenges. From a technical perspective, we have to deal with the ever-increasing complexity of emerging technologies. As engineers, we address complexity by adding layers of abstraction. Sergey Bratus has this great saying: “Layers of abstraction become boundaries of competence.” Abstractions are meant to hide underlying complexity so that it is easier to reason about higher-level systems. An abstraction will necessarily be simpler than the system it models. However, the resulting gap between abstraction and reality is often where vulnerabilities emerge. An attacker who knows how the system actually works, as opposed to how it is supposed to work, can exploit this divergence. Many common attacks fall into this category. The challenge for security engineers is to understand this ever-increasing complexity without relying on the misleading simplicity of abstractions.
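To make that gap concrete, here is a minimal, hypothetical Python sketch (the directory and function names are illustrative, not drawn from any Qualcomm product). The abstraction says “join the requested name onto the public directory,” while the real path semantics let “..” segments walk back out of it:

    import os

    BASE_DIR = "/var/www/public"

    def resolve_naive(requested_name):
        # Abstraction: "the result always lives under BASE_DIR."
        return os.path.join(BASE_DIR, requested_name)

    def resolve_checked(requested_name):
        # Reality: ".." segments can escape BASE_DIR, so resolve the path
        # and confirm it still sits inside the public directory.
        base = os.path.realpath(BASE_DIR)
        candidate = os.path.realpath(os.path.join(BASE_DIR, requested_name))
        if os.path.commonpath([candidate, base]) != base:
            raise ValueError("requested path escapes the public directory")
        return candidate

    print(resolve_naive("../../etc/passwd"))    # the OS resolves this to /etc/passwd
    # resolve_checked("../../etc/passwd")       # raises ValueError instead

The naive version is a faithful implementation of the abstraction and still hands an attacker the whole filesystem; the checked version only works because it accounts for how path resolution actually behaves underneath the abstraction.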

Question: What types of nontechnical challenges does the industry face?

Gantman: Beyond the technical challenges, as a society we’re still struggling to come to terms with how to think about cybersecurity. Think about the items in your house: the furniture, art, tableware, appliances, and so on. These consumer items have certain properties: they’re priced right for their target market, they look nice, and they provide a high level of functionality. But they also need to be taken care of, and they’re not designed to withstand abuse. The things we put in public spaces, on the other hand, like a park bench or a shopping cart, are designed for heavy use and, unfortunately, for abuse. They also cost a lot more, don’t look nearly as nice, and offer a different level of functionality. Modern connected devices break these assumptions: because they’re connected to the Internet, they’re essentially in a public space all the time, and there’s an expectation that they will withstand abuse. But the consumer market is not used to making things that can withstand abuse, especially from trained and determined attackers. The market is still adjusting to this new expectation: manufacturers are still figuring out how to design and build products that meet it cost-effectively, regulators are still figuring out how to assess these products and impose rules on them, and consumers are still struggling to figure out how to select products based on their robustness.

Question: How do you test for vulnerabilities when developing new technologies, such as wearables, self-driving cars, and Internet of Things devices?

Gantman: At a technical level, the vulnerabilities are, in some sense, the same. One of the challenges of emerging technologies is that the use cases are not necessarily predictable, and if the use cases are not predictable, neither is the threat model. As we talk about emerging technologies like IoT or self-driving cars, we need to balance the new benefits they will deliver against the new risks they introduce. Self-driving cars, in particular, will force a critical reexamination of how security practitioners approach threat modeling and risk management. One of the biggest hopes for self-driving cars is that their deployment will drastically reduce accident rates. For example, let’s assume that the number of car fatalities in the US will decrease by tens of thousands when self-driving cars are rolled out. But what if a few people die in accidents caused by exploited vulnerabilities or hacks in those self-driving cars? Do we consider that a horrible failure of the technology or an amazing success?

It’s a modern-day trolley problem. As a vendor or a regulator in this space, do you rush autonomous car technology out as fast as possible to save as many lives as possible, or do you delay the launch until the technology is as hardened and as secure as possible? Either way, our actions will have consequences.
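One way to read that dilemma is as a crude expected-value comparison. The sketch below uses made-up numbers purely to show the shape of the trade-off; it is not a forecast, and the real decision involves far more than arithmetic:

    # All figures are hypothetical and only illustrate the trade-off.
    baseline_fatalities_per_year = 37_000   # rough ballpark for annual US traffic deaths
    assumed_reduction_fraction = 0.50       # assumed benefit once self-driving cars are widespread
    assumed_deaths_from_exploits = 20       # assumed deaths caused by exploited vulnerabilities

    lives_saved = baseline_fatalities_per_year * assumed_reduction_fraction
    net_change = lives_saved - assumed_deaths_from_exploits

    print(f"Lives saved by autonomy:      {lives_saved:,.0f}")
    print(f"Deaths from exploited flaws:  {assumed_deaths_from_exploits}")
    print(f"Net reduction in fatalities:  {net_change:,.0f}")

On numbers like these the technology is an overwhelming success in aggregate, yet the handful of deaths attributable to security failures is exactly what public debate would focus on, which is the tension Gantman describes.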

Question: How do you balance increased security of processors with performance and price point?

Gantman: It’s important to emphasize that the end goal is not necessarily to build the most secure system possible, but the most secure system that will actually be used. If a technology is not used, either because it’s too expensive or because it comes out too late and everyone has already bought something else, then the security of that system is irrelevant. It only matters if it’s used. That’s the sweet spot you have to find: how do you make sure the technology is attractive enough that customers buy it, use it, and secure it?