Brian David Johnson was the first futurist at Intel and is currently futurist in residence at Arizona State University’s Center for Science and the Imagination, a professor of practice in the School for the Future of Innovation in Society, and director of the Threatcasting Lab. He’s also a futurist and fellow at Frost & Sullivan and an author of both science fiction and science fact. Among these publications, his article “Science Fiction and the Coming Age of Sentient Tools” appeared in the June 2016 issue of Computer; and Wizards and Robots, a young adult novel coauthored with will.i.am about a teenage engineer who gets caught up in a centuries-spanning battle between wizards and robots, is forthcoming in January 2018. Here, Johnson talks about his work as a futurist, in both industry and academia.
Question: What is the role of a futurist?
Johnson: As a futurist, I work with organizations on making decisions today that may not pay off for five, 10, or 15 years. I use a mix of social science, technical research, cultural history, economics and trends, global interviews, and a little bit of science fiction, looking at both positive and negative futures based on all of these facts. I then work with these organizations to look backward and say, if these are the positive futures and these are the negative futures, what do we need to do today, tomorrow, and five years from now to get to the positive future and avoid the negative? I also help them to identify the externalities—that is, the things they need to keep an eye on that their organizations may have no control over but will meaningfully affect those futures in either a positive or negative way. The process is called futurecasting.
Question: What is the mission of the Threatcasting Lab?
Johnson: The Threatcasting Lab has three main goals. First, we convene threatcasting workshops twice each year—one on the West Coast and one on the East Coast. These workshops bring together people from the military, federal agencies, foundations, trade associations, and private industry. The threatcasting process involves looking 10 years out for possible threats, then, as in futurecasting, looking backward and asking how we can disrupt, mitigate, and recover from those possible threats. Second, we generate two reports a year with findings from the workshops. The first, “A Widening Attack Plane,” was published earlier this year. Third, and in parallel, we document the threatcasting process in such a way that we can create teachable texts, so people can do threatcasting themselves.
Question: What are some of the findings from the first workshop?
Johnson: In our first workshop, we focused on the proliferation of artificial intelligence 10 years out. Complex automated systems like supply chains are often designed with one goal in mind: efficiency. How can we make them run better? How can we make them run quicker? In multiple scenarios and in multiple futures that we modeled, we began to see that if you design complex automated systems with only efficiency in mind, they’re very easy to hack. So we need to start designing these systems not only with efficiency in mind, but also with security, safety, and resilience.
Also, as we look 10 years out, we begin to see the weaponization of artificial intelligence and the weaponization of data. Today, cyberattacks and digital attacks mainly take place in the digital world, but they’re going to begin to blend, to move into social or personal attacks as well as physical ones. One of the key vulnerabilities of artificial intelligence in these complex automated systems is that they use data as their input. Their understanding of the world is based on this vast amount of data, and if the data can be corrupted, these systems begin to live in a very different reality from our reality. That could be anything from spoofing GPS to changing maps and, thus, changing the world we live in. State actors or terrorists or hacktivists could take over these complex systems and change their reality.
Question: How does science fiction support futurecasting or threatcasting?
Johnson: We use science fiction prototyping as a way to move from a very high level—the social, technical, economic, and cultural-historical inputs—to the very specific—that is, a person in a place with a problem. When we do the modeling, we do it multiple times, giving us multiple futures. The power of futurecasting and threatcasting is in the aggregate of those futures. Many times we’ll come up with five, 10, 30 futures, but they’re all based on the same inputs and the same process, so you can begin to look at them in totality and you begin to see patterns. The process can also identify topics that you’re not talking about, areas you need to look into more deeply that might expose new directions for research.
Question: Has anything come from your work with children that could change your approach to futurecasting when you think 10–15 years out?
Johnson: Students in the K–12th grade range approach robots with less baggage, fewer preconceptions than even college-age students, so they come up with fundamentally different ideas. In the adult world, we have always treated robots as slaves. We have them do the three Ds: things that are dirty, dangerous, and dull. It’s always command and control. Early on, when I was working with 21st Century Robot, we began asking students, “What’s your robot’s name?” “What would your robot do that nobody else’s robot would do?” “What would your robot do when you are at school?” And they would answer, “I want my robot to play Legos with me.” “I want my robot to sing with me.” “I want my robot to dance with me.” These kids were creating far more complex social interactions with robots than were coming out of some areas at MIT. Because they’re unencumbered by the past, they can imagine robots in a radically different way. As a result, we’re taking a fundamentally different approach to looking at robots and robot–human interaction. These students are showing us a new way of acting and interacting with robots. It will have a huge effect on my futurecasting when it comes to robots.