This week we are featuring an interview with 2013-2014 Berkman Fellow, Bruce Schneier, as part of an ongoing series showcasing individuals in the Berkman community. Conducted by our 2013 summer Berkterns, the mini-series highlights the unique and multidisciplinary group of people within the Berkman community exploring the many dimensions of cyberspace. This week highlights this "security guru" and the evolution of his interest in cyber security.
Interested in joining the Berkman Center’s community in 2014-2015? We are currently accepting fellowship applications - read more here.
Q+A with Bruce Schneier
Berkman Fellow and security technologist
Interviewed in summer 2013 by Berktern Kristin Bergman
Becoming a fellow isn’t your first interaction with the Berkman Center – you spoke here in April about “IT, Security, and Power” with Jonathan Zittrain. In light of that talk and the research you intend to conduct exploring the intersection of security, technology, and people, can you tell us more about the direction your research is going in, any challenges you currently face, and what you will be focusing on as a Berkman fellow?
I’ve been thinking about several things, all centered around power in the information age. I summarized them here before my Spring Berkman visit, and perhaps it’s better to send readers there than to rewrite what I wrote then. Since then, of course, I have been thinking and writing about the Snowden documents and ubiquitous Internet surveillance. My hope is that all of this turns into a book, but it’s too early for me to announce that definitively. I only know that I need something to focus my year at Berkman; otherwise, it will be over in a flurry and I won’t have anything tangible to show for it.
How did you originally become interested in security and cryptology?
I’ve always been interested in systems and how they fail, probably related to my inherent difficulty in dealing with authority. And I’ve always been interested in secret codes: Alvin’s Secret Code and Codes and Secret Writing are two treasured books from my childhood. But really, my career has been an endless series of generalizations. I started out in cryptography: mathematical security. Then I generalized out to computer and network security. After that, I generalized again to general security technology. And then came the economics of security, the psychology of security, and -- in my latest book, Liars and Outliers -- the sociology of security. I seem to now be straying into the political science of security, and who knows where I will be when my Berkman year ends.
Right now it is nearly impossible to turn on the news or go online without coming across a story about the NSA’s surveillance efforts and related privacy and security concerns. How do you see the media shaping public perception and opinion in this area? Have you felt that representations by the press have been accurate, and is there anything in the coverage that you would want to change or clarify?
The media loves human drama, and the personal story of Edward Snowden has dwarfed any substantive reporting on the NSA’s legal overreach or the details of the programs he’s exposed. That’s unfortunate. Also, there’s not much we’re learning from the Snowden documents that we didn’t already know, at least in broad outlines. What’s genuinely surprising is the complete disregard the NSA has shown for the law, and the degree to which corporations have enabled its practices. Secret rulings of secret laws made by secret judges -- that doesn’t sound like the America I know. I’m tired of the “security vs. privacy” framing, as well as the fear rhetoric. I want debate on offensive cyberwar actions, the international hypocrisy of these surveillance programs, their uses beyond terrorism, and how to rebuild trust. Finally, I want discussion of transparency and accountability, which are the normal mechanisms we use to provide for security when we permit others to invade our privacy for legitimate purposes.
With respect to the disclosure of data vulnerabilities, you are known as a proponent of full disclosure, yet coordinated vulnerability disclosure has become the social norm. What weaknesses do you see in the coordinated vulnerability approach? Particularly in light of Google’s recently announced disclosure timeline for vulnerabilities under active attack, as well as recent cases like Auernheimer, (how) do you foresee data security and vulnerability disclosure practices changing?
The full disclosure debate is old. I first wrote about it in 2001 (although this 2007 essay is better), and not much has changed over the years. Vendors don’t want security researchers to go public with the vulnerabilities they find, because it embarrasses them and forces them to fix those vulnerabilities. But the community knows that if they don’t go public with the vulnerabilities, the vendors will take their own sweet time fixing those vulnerabilities. The “responsible disclosure” movement, what you’re now calling coordinated vulnerability disclosure, is a compromise: the researcher gives the vendor a head start at fixing the vulnerability before publishing it. This works as long as the threat of public disclosure is there. What Google is saying now is that the head-start window is shrinking: seven days instead of 30 days. That’s fine; the sooner these vulnerabilities are publicized, 1) the faster vendors will fix them, and 2) the better users will be able to defend themselves.
As Google Glass becomes accessible, many people are starting to think about a day when recording is ubiquitous, and peer privacy and surveillance are substantial concerns. With this future in mind, what are your views on privacy and security in the resulting human sensory augmentation? What would you recommend as a responsible way to consider privacy and security as society moves forward with this kind of progress?
What’s important here is information collection and recall: memory. We get so much social benefit from the fact that we don’t notice a lot of what’s going on around us, and we remember even less of it. Lifelogging, whether by Google Glass or some other technology, gives us perfect perception and eidetic memory. Combine that with automatic facial recognition, object lookup, the Internet of things, and you have a very different way of interacting with the world. Allow other people to search your lifelog, and you have what Charlie Stross refers to as the end of pre-history. Whether this is the end of privacy or not depends on the choices we make as a society: what we consider moral and proper behavior in this future. The biggest mistake we can make is to fail to think about this, and allow for-profit corporations to shape the future based on their revenue models.
Are there any other projects at Berkman about which you are particularly excited?
Almost certainly, but I don’t know what they are yet. I know I’m going to have to focus, though. There’s so much cool stuff going on at Berkman that I’m not going to be able to work on everything. I also want to work with some students -- but I haven’t picked the students or the projects yet. Any students within earshot who have an interesting project in mind, please contact me.
Lastly, I have to ask about the meme: how do you feel about Bruce Schneier Facts?
I lead a surreal life. It’s something I’ve just had to accept.
This post is part of a current series of interviews with members of the Berkman Community. Previous entries: Sara M. Watson and Yang Cao; Kate Darling, Hasit Shah, Dalia Othman, and J. Nathan Matias; Jeff Young and Sonia Livingstone; Shane Greenstein, Niva Elkin-Koren, and Amy Johnson
Last updated November 25, 2013