Dr. Gary McGraw co-founded the IEEE Center for Secure Design at its inception in 2014. He has served on the IEEE Computer Society Board of Governors, was the founding editor of the Building Security In department of IEEE Security & Privacy magazine, and produces the monthly Silver Bullet Security Podcast for that magazine. He is the Chief Technology Officer at Cigital and the author of many books, articles, and blogs on computer security.
Question: Though software security is evolving into a discipline, with a trend towards certification, your own background includes an undergraduate degree in philosophy. How did that background serve you in software security, a field you actually helped to establish?
McGraw: I started coding when I was sixteen years old, after I got an Apple ][+ in 1981. When I attended college, I decided on a liberal arts education, rather than music school, though I was a classically trained musician. At the University of Virginia I studied different subjects before focusing on philosophy as my major.
I got interested in philosophy of mind and continental philosophy and encountered the work of Doug Hofstadter on artificial intelligence. His work fascinated me, so I went to Indiana University to study computer science and cognitive science with him. He was my Ph.D. advisor.
I grew up with a computer and taught myself coding at the same time that the Internet was blossoming. I watched it grow from a science experiment to a fantastic communications medium. By the time I was in grad school it became clear that the Internet was going to evolve into a commercial and communications tool for everyone. And that meant computer security would play a fundamental role in the future of the Internet.
My background in philosophy meant that I’d learned to read and think and analyze—to write and communicate. In grad school I learned to do peer-reviewed, scientific research. It became clear to me when I got my first real job that the then-current approach to computer security overemphasized network connectivity and underemphasized root causes, which were either poorly implemented or poorly designed systems. Around 1999 I got really interested in software security and, with John Viega, wrote the first book in the world on the subject, Building Secure Software: How to Avoid Security Problems the Right Way. We knew that the developers building our systems didn’t know enough about computer security. I’ve been working on that ever since, for about twenty years.
Question: If you were addressing students or young professionals getting into the field, what would you say about the value of a diverse undergraduate education?
McGraw: The diversity that we’re experiencing in the field of computer security today is a fantastic thing. It’s akin to the Renaissance, which led to lots of specialization in particular disciplines and to the university system we know today. In some sense that’s the sort of phenomenon going on in computer security today: there are a number of specialties one can go into, such as software security, networks, policy analysis, and business management.
For all of those roles I’m a proponent of a broad and deep liberal arts education. But I will also acknowledge that deep technical capability and an understanding of how technology actually works are highly important for computer security. In particular, when it comes to software security, my personal view is that a practitioner should have deep coding experience and be a software person before plunging into software security.
So one’s background, training, and experience really matter. Now that computer security is becoming such a large discipline, with a proliferation of specialties and entire academic degrees based on the subject alone, the results may be mixed. We may see fewer thought leaders in the field who come from subjects such as biology, statistics, mathematics, or, in my case, cognitive science. That may lead to less creativity in the field. Scientific training is very important, and that’s a bit underemphasized today in the popular market, which tends to glorify hacker boys and spectacular hacks. We don’t talk enough about the science of building secure systems, security engineering, and an approach grounded in basic science.
Question: Are you saying there’s value in developing intellectual instincts that arise from a diverse education, as well as the need for technical training and experience?
McGraw: It’s a little of both. I don’t think you can discount one over the other. My own view is that computer science and security is becoming a more humanistic discipline. I’ve been involved in helping various universities “humanize” their approach to computer science. At Indiana University we intentionally developed an entire School of Informatics that includes computer science as a sub-discipline. At the University of Virginia I helped to develop a Bachelor of Arts program for students in the College of Arts and Sciences to earn a computer science degree even though the CS department is in the School of Engineering.
So my view is that people who can communicate, think widely, and take a coherent philosophical stance help the field of computer science become less geeky, a little less wrapped up in staring at its own navel, and more engaged with the rest of society. Because, like it or not, the stuff that we geeks build is everywhere now, and it’s really important that we understand the implications of what we’re designing and building for future societies.
A great example, one we have yet to grapple with in our own supposedly free society, is the notion of surveillance as a security mechanism and its impact on freedom of speech and freedom of association. If we don’t teach our technologists that the things they invent are tools that can be used for good and for evil, and to distinguish between those possibilities as they’re designing them, then we may end up accidentally painting ourselves into a corner. That’s why I have a personal bias toward fields like philosophy and ethics and religious studies, and non-technical approaches to the way society ticks.
Question: You co-founded the IEEE Center for Secure Design. What are your thoughts on its mission to promote a balance between proactive, secure design and the reactive stance of fixing bugs?
McGraw: Organizations want to strike this balance; the question is how to do it. The good news is that we actually do know how. People can consult the Building Security In Maturity Model (BSIMM), which objectively describes how more than one hundred firms are approaching software security. Today we have tools, technologies, and approaches that work for many different types of businesses, and we can all learn from measuring the effectiveness of their best practices.
Question: Can you give us a thumbnail sketch of the recently released BSIMM6?
McGraw: Sure. We released BSIMM6 in October 2015. It is the sixth iteration, hence the name. The study is an objective way of measuring the effectiveness of myriad specific software security initiatives, based on data drawn from 78 firms. BSIMM measures the effectiveness of best practices in training developers, doing code review and architecture analysis, responding to incidents, measuring and monitoring these steps, and tracking compliance and policies. Everything in software security is covered by the BSIMM.
We tried to make the report friendly to people with disparate technical backgrounds, from the über-geek to the senior executive. And we published it under a Creative Commons license; Creative Commons is a nonprofit organization that enables the sharing and use of knowledge through no-cost legal tools. Not incidentally, we also used Creative Commons to publish the IEEE Center for Secure Design’s first report, “Avoiding the Top 10 Software Security Design Flaws.” BSIMM6 is an objective, scientific set of facts that helps you figure out what you ought to be doing by revealing what everybody else is doing.