Making Security More Usable: A Pre-Conference Conversation with Lorrie Cranor


Lorrie Cranor is a professor in the School of Computer Science and the Engineering and Public Policy Department at Carnegie Mellon University; director of the Carnegie Mellon Usable Privacy and Security Laboratory; and, most recently, chief technologist at the US Federal Trade Commission. At both CMU and the FTC, Cranor focuses on making security technologies more usable and thereby increasing the protection they provide. Here, Cranor gives us a preview of her talk on usable security and privacy at the IEEE Cybersecurity Development (IEEE SecDev) 2016 conference.

Question:  How does making security more usable increase security and privacy overall?

Cranor: Security research has traditionally focused on the back end—the algorithms—rather than the human factors associated with securing a system. Over the last 15 years, we’ve begun to recognize that often the system’s weak link isn’t the algorithm; rather, it’s that the system is hard to use, so people turn it off or find ways to route around it, or they miss an important signal about something they need to do. If we have secure systems that are unusable, people won’t use them, and a secure system that you’re not using provides no security. If we want people to actually use the tools we have available—and use them properly—they have to be usable.

Question:  You noted in a blog post that requiring users to change their passwords often won’t protect a system against attackers. What is a more reliable solution?

Cranor: The theory is that having users change their passwords frequently will cause any unknown attackers in the system to be locked out. However, that’s burdensome on the user, so it probably won’t work. For one thing, the attacker may have put in a backdoor so they can get right back in. Also, when users are required to regularly change their passwords, they tend to do it in predictable ways, so it will probably be easy for an attacker to get the new password.

We’d rather have effective approaches that are more convenient for the user and prevent an attacker from getting into the system in the first place. A lot of things can be done on the back end to improve password security. Hashing algorithms, for example, make it harder for an attacker to guess a user’s password. That’s low cost in terms of the system and basically free for users. On the front end, it’s possible to help users create stronger passwords in a way that’s not overly burdensome. For example, my students are developing a password meter that gives useful, actionable feedback for creating stronger passwords.
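To make the back-end idea concrete, here is a minimal sketch of server-side password handling with a deliberately slow key-derivation function, using Python's standard-library scrypt. The parameter choices and function names are illustrative assumptions, not a description of CMU's or any specific system's implementation.

```python
# Minimal sketch of server-side password hashing with a slow key-derivation
# function (Python's built-in scrypt). Parameters are illustrative only.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage; the plaintext is never stored."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the derived key and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("password123", salt, key))                   # False
```

Because each guess forces an attacker to repeat the expensive derivation, a slow, salted hash raises the cost of cracking stolen password databases while costing users nothing.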

Question: How does your work on privacy policies factor in usability?

Cranor:  Privacy policies should inform consumers about their privacy options. In the United States, most of our privacy protections come from what we call “notice and choice”—that is, a company provides notice about the privacy protections it offers, and consumers choose whether or not to do business with that company. That only works, though, if consumers understand their choices, and the first step there is being able to read and understand the company’s privacy policy. I’ve been looking at what can be done to make these policies easier to read and more standardized, and how we might automate the process so consumers don’t have to read the whole policy.

The Usable Privacy Policy Project aims to have computers automatically read privacy policies and distill the most important information from them. A browser plug-in could then highlight the most important information in the policy. We’ve also done some work with computer-readable privacy policies; the main problem there is in getting companies to actually post their policies in computer-readable form.
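As a rough illustration of what a computer-readable policy and a plug-in-style check might look like, here is a hypothetical sketch. The JSON fields, the policy contents, and the decision rule are invented for this example; they are not the Usable Privacy Policy Project's actual format.

```python
# Hypothetical sketch: a machine-readable privacy policy plus a client-side
# check that surfaces the practices a user might care about.
import json

policy_json = """
{
  "company": "example.com",
  "practices": [
    {"data": "email", "purpose": "account management", "shared_with_third_parties": false},
    {"data": "browsing history", "purpose": "advertising", "shared_with_third_parties": true}
  ]
}
"""

def highlight_concerns(policy: dict, sensitive_purposes: set[str]) -> list[str]:
    """Return human-readable flags a browser plug-in might highlight."""
    flags = []
    for p in policy["practices"]:
        if p["shared_with_third_parties"] or p["purpose"] in sensitive_purposes:
            note = f'{p["data"]} used for {p["purpose"]}'
            if p["shared_with_third_parties"]:
                note += " and shared with third parties"
            flags.append(note)
    return flags

policy = json.loads(policy_json)
for flag in highlight_concerns(policy, {"advertising"}):
    print("Heads up:", flag)
```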

Question:  In your usability studies, how do you find out what’s working for users and what isn’t?

Cranor:  We do a wide range of user studies. At one end of the spectrum are user surveys—easiest, but all self-reported. The next step is to invite people into the lab and watch them perform some task. We often have to use a little bit of deception, because it’s important in our studies for our subjects not to know what behaviors we’re trying to observe.

For example, one study looked at whether the phishing warnings in Web browsers are effective in keeping people from going to phishing websites. We brought some people into the lab and told them we were studying online shopping. We asked them to go to a website and purchase some inexpensive items, which they’d be reimbursed for. After they made the purchases and completed a survey about online shopping, we asked them to check their email for the receipt and print it for reimbursement. When they went to get the receipt, they found a phishing email in their mailbox that we had planted. Almost all of them fell for the phish, which caused a browser warning to pop up. We got a lot of realistic data from the study.

We also do purely online studies. For example, in one study we ask people to play a set of games and tell us what they think. Somewhere along the way, one of the games asks them to download and install an update on their computer. We’re interested in whether they install the update. We vary the warning that pops up on their computers to see if changing the wording or the color or anything else about that warning will get more people to actually heed it.
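A rough sketch of that kind of between-subjects design follows: each participant is randomly assigned one warning variant, and the analysis tallies how many in each group installed the update. The variant names and the simulated log are invented for illustration, not data from the study.

```python
# Sketch of between-subjects assignment for a warning study: each participant
# sees one randomly chosen warning variant, and we record whether they
# installed the (simulated) update. All values here are made up.
import random
from collections import Counter

VARIANTS = ["original wording", "plain-language wording", "red background"]

def assign_variant(participant_id: str) -> str:
    # Deterministic per participant, so repeat visits see the same warning.
    rng = random.Random(participant_id)
    return rng.choice(VARIANTS)

# Simulated log of (participant, installed?) pairs gathered during the study.
log = [("p1", True), ("p2", False), ("p3", True), ("p4", False), ("p5", True)]

installs, totals = Counter(), Counter()
for pid, installed in log:
    variant = assign_variant(pid)
    totals[variant] += 1
    installs[variant] += int(installed)

for variant in VARIANTS:
    if totals[variant]:
        print(f"{variant}: {installs[variant]}/{totals[variant]} installed")
```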

Question:  What are some of the other projects you’re working on?

Cranor:  We have a large project looking at privacy in the Internet of Things (IoT). The notice-and-choice approach to privacy is hard to implement in an IoT world, so our thought is to have IoT devices broadcast their privacy policies in a computer-readable format. Your smartphone or smartwatch could then analyze the privacy policies and act on your behalf. We’re building a prototype system to show how that could work and are doing user studies to try to understand people’s privacy preferences in this kind of IoT environment.
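Here is a hypothetical sketch of what that broadcast-and-evaluate flow could look like on the user's side. The policy fields, the preference model, and the decision rule are invented for this example rather than taken from the prototype.

```python
# Hypothetical sketch: an IoT device broadcasts a machine-readable privacy
# policy, and a phone-side agent compares it to the user's stored preferences.
import json

# What a nearby device might broadcast (e.g., over Bluetooth LE or mDNS).
broadcast = json.dumps({
    "device": "lobby camera",
    "data_collected": ["video"],
    "retention_days": 30,
    "shared_with_third_parties": False,
})

# The user's stored preferences on their smartphone or smartwatch.
preferences = {
    "never_allow": {"audio"},
    "ask_if_retention_exceeds_days": 7,
}

def evaluate(policy: dict, prefs: dict) -> str:
    """Decide how the user agent should react to a broadcast policy."""
    if set(policy["data_collected"]) & prefs["never_allow"]:
        return "block and notify user"
    if policy["retention_days"] > prefs["ask_if_retention_exceeds_days"]:
        return "prompt user"
    return "allow silently"

policy = json.loads(broadcast)
print(f'{policy["device"]}: {evaluate(policy, preferences)}')
```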

We’re also looking to improve users’ understanding of privacy in social media. One study looked at why people post things they later regret, and we considered possible interventions. We found people sometimes post when they’re excited or angry, so we developed a 10-second countdown timer for Facebook, which gives you 10 seconds to decide whether you really want to post something. Another tool randomly shows pictures of people in your audience. If, for example, every time you go to post you see your grandmother’s picture, you might think again. These approaches have been fairly effective at helping people think about privacy; they don’t stop you from posting, but they give you a nudge.
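As a toy illustration of the countdown nudge, here is a sketch that holds a post for ten seconds and lets the user cancel; the interface and timing details are invented for this example and are not the Facebook tool itself.

```python
# Toy sketch of a "think before you post" nudge: hold the post for 10 seconds
# and give the user a chance to cancel, in the spirit of the countdown
# intervention described above.
import time

def post_with_countdown(message: str, delay_seconds: int = 10) -> None:
    print(f"About to post: {message!r}")
    for remaining in range(delay_seconds, 0, -1):
        print(f"Posting in {remaining}s... press Ctrl+C to cancel")
        time.sleep(1)
    print("Posted.")

try:
    post_with_countdown("Can't believe what my boss said today!!")
except KeyboardInterrupt:
    print("\nPost cancelled.")
```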