Current Berkman People and Projects

Keep track of Berkman-related news and conversations by subscribing to this page using your RSS feed reader. This aggregation of blogs relating to the Berkman Center does not necessarily represent the views of the Berkman Center or Harvard University but is provided as a convenient starting point for those who wish to explore the people and projects in Berkman's orbit. As this is a global exercise, times are in UTC.

The list of blogs being aggregated here can be found at the bottom of this page.
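For readers who want to automate that RSS subscription, here is a minimal sketch of pulling entry titles and UTC timestamps out of a feed like this one, using only Python's standard library. The sample XML below is hypothetical (in practice you would fetch the aggregator's actual feed URL):

```python
# Minimal sketch: parse an RSS 2.0 feed and list entries with UTC timestamps.
# The sample feed below is invented for illustration; a real reader would
# download the aggregator's feed with urllib.request before parsing.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Current Berkman People and Projects</title>
    <item>
      <title>Stagefright Vulnerability in Android Phones</title>
      <pubDate>Tue, 28 Jul 2015 11:37:00 +0000</pubDate>
    </item>
    <item>
      <title>Radio Berkman 224: Reddit - Community? Or Business?</title>
      <pubDate>Mon, 27 Jul 2015 17:07:00 +0000</pubDate>
    </item>
  </channel>
</rss>"""

def list_entries(feed_xml):
    """Return (title, UTC datetime) pairs for each <item> in the feed."""
    root = ET.fromstring(feed_xml)
    entries = []
    for item in root.iter("item"):
        title = item.findtext("title")
        published = parsedate_to_datetime(item.findtext("pubDate"))
        entries.append((title, published))
    return entries

for title, published in list_entries(SAMPLE_FEED):
    print(f"{published:%Y-%m-%d %H:%M} UTC  {title}")
```

A dedicated feed reader (or a library like feedparser) handles more edge cases, but the structure is the same: items, titles, and publication dates normalized to UTC.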

July 28, 2015

Bruce Schneier
Stagefright Vulnerability in Android Phones

The Stagefright vulnerability for Android phones is a bad one. It's exploitable via a text message (the details depend on the particular phone's auto-download behavior), it runs at an elevated privilege (again, the severity depends on the particular phone -- on some phones it's full privilege), and it's trivial to weaponize. Imagine a worm that infects a phone and then immediately sends a copy of itself to everyone on that phone's contact list.
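As a back-of-the-envelope illustration (not an exploit), a toy breadth-first simulation shows why that contact-list propagation model is so dangerous: every newly infected phone reaches its whole contact list at once. The contact graph here is invented:

```python
# Toy simulation of a self-propagating MMS worm: each newly infected phone
# immediately messages everyone in its contact list. The contact graph is
# invented purely for illustration.
from collections import deque

contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave", "erin"],
    "dave": ["bob", "carol"],
    "erin": ["carol"],
}

def simulate_worm(contacts, patient_zero):
    """Return the order in which phones are infected, starting from patient_zero."""
    infected = {patient_zero}
    order = [patient_zero]
    queue = deque([patient_zero])
    while queue:
        phone = queue.popleft()
        for contact in contacts.get(phone, []):
            if contact not in infected:  # a patched phone would stop the spread here
                infected.add(contact)
                order.append(contact)
                queue.append(contact)
    return order

print(simulate_worm(contacts, "alice"))
```

In this tiny graph every phone is reached within two hops of the first infection, which is the point: without patches, the only thing limiting such a worm is the shape of the contact graph itself.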

The worst part of this is that it's an Android exploit, so most phones won't be patched anytime soon -- if ever. (The people who discovered the bug alerted Google in April. Google has sent patches to its phone manufacturer partners, but most of them have not sent the patch to Android phone users.)

by Bruce Schneier at July 28, 2015 11:37 AM

July 27, 2015

Radio Berkman 224: Reddit – Community? Or Business?
Listen or download | …also in Ogg. Reddit is sometimes called “the frontpage of the Internet.” 170 million people a month help upload, curate, and make viral the cat photos, prank videos, and topical discussions that help fuel our neverending thirst for content. But recent moves by Reddit management to tighten up their content policy have […]

by Berkman Center for Internet & Society at Harvard Law School at July 27, 2015 05:07 PM

Berkman Center front page
Radio Berkman 224: Reddit - Community? Or Business?


Should Reddit be a home for "Free Speech?" Or just "Good Speech?" And who gets to decide? This week on the podcast.


Reddit is sometimes called "the frontpage of the Internet." 170 million people a month help upload, curate, and make viral the cat photos, prank videos, and topical discussions that help fuel our neverending thirst for content.

But recent moves by Reddit management to tighten up their content policy have threatened what is seen as the fundamentally "free speech" culture at Reddit.

David Weinberger and Adrienne Debigare recently wrote about Reddit's crossroads for the Harvard Business Review.

They joined us this week to talk about the culture of Reddit, free speech, and just who gets to make these decisions anyway?

Flickr photo courtesy of fibonacciblue
Music from Neurowaxx and Timo Timonen

Reference Section:
How Reddit the Business Lost Touch With Reddit the Culture
Reddit's community responds to the changes
Internet Monitor's roundup of highlights from the controversy

by djones at July 27, 2015 05:04 PM

Project Spotlight: Internet Monitor


A new, customizable dashboard with real-time info about the Net coming soon


Internet Monitor’s mission is to collect and share data about how people all over the world access and use the Internet. This fall the project plans to release a new platform for displaying all this data and more on one dashboard.


By Elizabeth Gillis

As you age, you go to the doctor so that he or she can measure your progress: height, weight, blood pressure, and maybe cholesterol. Through all these measurements, it's possible to gauge how your body is growing and changing in relation to your life and your environment.

At the Berkman Center, we’re collecting similar data on change and growth for a much larger body, the Internet, with a project called Internet Monitor.   

Internet Monitor’s mission is to collect and share data about how people all over the world access and use the Internet. Right now, their online data platform presents and analyzes data from about 100 countries. Data comes in from a number of sources, including Berkman projects and partners, and can be used to figure out how accessible the Internet is in these places and what different online communities are talking about.

The team behind Internet Monitor mines for information and provides analysis and context so that it can be useful to decision makers, researchers, journalists and others who might just be interested in, for example, the secret, online world of Arab atheists, or the growing list of Internet regulations in Russia.

“The Internet is not its own separate thing anymore,” said Internet Monitor’s senior project manager Rebekah Heacock Jones. “It’s infiltrating all of these different pieces of our lives. We need accurate data about Internet access, online content controls, and online activity in order to better inform all kinds of decisions that affect the future of the Internet."

Since 2014, the Internet Monitor website has been a central location for hard data about Internet access and analysis about what different communities around the world are saying. The numbers come from places like the International Telecommunication Union, Akamai, and NetIndex, which collects speed and quality tests. In fact, there are a ton of sources, and they are all listed on Internet Monitor’s Data page. The goal of the project, according to Heacock Jones, is two-fold: providing better data for better decision making, and conducting original research that analyzes this data to see what it says about the status of the Internet.

This fall Internet Monitor plans to release a new platform for displaying all this data and more on one dashboard.

“The really exciting thing is that the dashboard format lets us pull in data that we don’t currently have a way to represent or to display,” she said. The new dashboard will allow people to see information like the most recent news topics in China, rather than just fixed statistics like the Internet penetration rate for China in 2012.

(Image: The IM dashboard currently in development.) 

Heacock Jones said she hopes the dashboard format will draw in audiences who want to be able to engage more with the data. It can be useful to journalists as a monitoring and storytelling platform, and it’s customizable for someone interested in one specific location or topic. Users are not only able to arrange the data provided in a way that’s useful for them, they’re also invited to contribute data as well as widgets that can be useful for others.  As such, the new dashboard will open up new space for data sharing partnerships with a wide variety of companies and partner organizations.
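To make the dashboard idea concrete, here is a hypothetical sketch of how a customizable, widget-based dashboard might be modeled: each widget pairs a data source with a location filter, and users rearrange or contribute widgets. None of these names or structures come from Internet Monitor's actual codebase; this is purely illustrative:

```python
# Hypothetical model of a customizable dashboard: widgets pair a data source
# with a country filter, and users assemble their own layout. All names here
# are invented for illustration, not taken from Internet Monitor's code.
from dataclasses import dataclass

@dataclass
class Widget:
    title: str
    source: str   # e.g. "ITU", "Akamai", or a user-contributed feed
    country: str

class Dashboard:
    def __init__(self):
        self.widgets = []

    def add(self, widget):
        self.widgets.append(widget)

    def for_country(self, country):
        """Filter the layout down to one location of interest."""
        return [w for w in self.widgets if w.country == country]

board = Dashboard()
board.add(Widget("Recent news topics", "media analysis feed", "China"))
board.add(Widget("Internet penetration", "ITU", "China"))
board.add(Widget("Connection speed", "Akamai", "Saudi Arabia"))

for w in board.for_country("China"):
    print(w.title)
```

The design choice the article describes, separating widgets from their data sources, is what lets live feeds (like recent news topics) sit alongside fixed statistics, and what makes room for third-party contributions.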

“There's a lot of important data out there, but much of it is inaccessible to policymakers and the public—it's in corporate hands, or it's sprinkled around on lots of different websites. If you don't know where to go, it's really hard to find,” she said. “We're hoping that by putting this focus on data we can encourage people who do have good data to make it more publicly accessible. And then we can also make that data more easily available for policymakers and others who are making decisions.”

As the dashboard emerges on the scene, Internet Monitor will continue publishing reports and blog posts analyzing what’s going on within the data. The next report on the Twittersphere in China will come out this summer, followed by one on the Twittersphere in Saudi Arabia.

Twitter is one of the many sites blocked in China, so the report focuses on how users who have gotten around the firewall are choosing to interact with the site. Saudi Arabia, on the other hand, is the country with the highest percentage of active Twitter users in its online population.

“Our Chinese Twitter research looks at this community of people who have already hopped over the censorship wall,” said Heacock Jones. “It looks at what they’re talking about online and how they’re connecting to networks outside of China.”

Check out the video below to learn more about Internet Monitor.

by gweber at July 27, 2015 02:33 PM

Bruce Schneier
Hacking Team's Purchasing of Zero-Day Vulnerabilities

This is an interesting article that looks at Hacking Team's purchasing of zero-day (0day) vulnerabilities from a variety of sources:

Hacking Team's relationships with 0day vendors date back to 2009 when they were still transitioning from their information security consultancy roots to becoming a surveillance business. They excitedly purchased exploit packs from D2Sec and VUPEN, but they didn't find the high-quality client-side oriented exploits they were looking for. Their relationship with VUPEN continued to frustrate them for years. Towards the end of 2012, CitizenLab released their first report on Hacking Team's software being used to repress activists in the United Arab Emirates. However, a continuing stream of negative reports about the use of Hacking Team's software did not materially impact their relationships. In fact, by raising their profile these reports served to actually bring Hacking Team direct business. In 2013 Hacking Team's CEO stated that they had a problem finding sources of new exploits and urgently needed to find new vendors and develop in-house talent. That same year they made multiple new contacts, including Netragard, Vitaliy Toropov, Vulnerabilities Brokerage International, and Rosario Valotta. Though Hacking Team's internal capabilities did not significantly improve, they continued to develop fruitful new relationships. In 2014 they began a close partnership with Qavar Security.

Lots of details in the article. This was made possible by the organizational doxing of Hacking Team by some unknown individuals or group.

by Bruce Schneier at July 27, 2015 11:17 AM

July 26, 2015

David Weinberger
Angry Birds Pansies

Pansies are supposed to look like thoughtful faces, right? That’s where the word comes from. But something seems to have pissed them off.

Or maybe their DNA somehow got mingled with Ed Asner’s.

The post Angry Birds Pansies appeared first on Joho the Blog.

by davidw at July 26, 2015 02:01 PM

July 24, 2015

Bruce Schneier
Friday Squid Blogging: How a Squid Changes Color

The California market squid, Doryteuthis opalescens, can manipulate its color in a variety of ways:

Reflectins are aptly-named proteins unique to the light-sensing tissue of cephalopods like squid. Their skin contains specialized cells called iridocytes that produce color by reflecting light in a predictable way. When the neurotransmitter acetylcholine activates reflectin proteins, this triggers the contraction and expansion of deep pleats in the cell membrane of iridocytes. By turning enzymes on and off, this process adjusts (or tunes) the brightness and color of the light that's reflected.

Interesting details in the article and the paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at July 24, 2015 09:18 PM

How an Amazon Worker Stole iPads

A worker in Amazon's packaging department in India figured out how to deliver electronics to himself:

"Since he was employed with the packaging department, he had easy access to order numbers. Using the order numbers, he packed his order himself; but instead of putting pressure cookers in the box, he stuffed it with iPhones, iPads, watches, cameras, and other expensive electronics in the pressure cooker box. Before dispatching the order, the godown also has a mechanism to weigh the package. To dodge this, Bhamble stuffed equipment of equivalent weight," an officer from Vithalwadi police station said. Bhamble confessed to the cops that he had ordered pressure cookers thrice in the last 15 days. After he placed the order, instead of, say, packing a five-kg pressure cooker, he would stuff gadgets of equivalent weight. After receiving delivery clearance, he would then deliver the goods himself and store them at his house. Speaking to mid-day, Deputy Commissioner of Police (Zone IV) Vasant Jadhav said, "Bhamble's job profile was of goods packaging at's warehouse in Bhiwandi."

by Bruce Schneier at July 24, 2015 08:26 PM

10 Ways Not To Start A Radio Story
Let’s keep those listeners glued to their headphones. (via Shutterstock)

I listen to radio almost nonstop for my job, and the more I listen, the more I notice trends. Producers can fall back on patterns that have worked and feel good.

We all know the ingredients of a good story: characters, conflict, hooks, turns, surprise, visual or sensory details, scenes, reflection… It’s easy to start listing these as checkboxes in our minds.

The problem is that once we operate according to checkboxes, we start making boring radio. We settle for an obvious descriptive detail, or check off the “surprising box” with a structure that isn’t surprising. Great intention can lead to lazy execution.

Here are 10 openers I’ve heard again and again from public radio producers and podcasters. They’re easy. They’re appealing. They’re overused.

1. The “Not Your Typical”

The concept behind the “Not Your Typical” beginning is that a character seems average—but there’s a twist. Often I’ll hear a reporter take some time to set a scene, then drop what’s supposed to be the big reveal—that this story is different.

Other times, a story might even open with a sentence like “Jane Doe is not your typical biker.” Even if this is the best concept to begin with, there must be a more compelling way to write or illustrate it.

More fundamentally, it’s not enough for a piece’s only “surprise” to be, say, that old people are doing something young people typically do. That kind of surprise wears off. It’s a reason to start reporting, but it alone won’t justify putting the piece together. A story needs another nugget, maybe an emotional one, to sing.

2. The “I’m Standing Next To”

“I’m standing next to the oldest building in the city. It’s been here for three hundred years…”

I get the sense that this just feels like a solid way to use natural sound. But you can put that ambi under literally any words you want. Better to use the little time you have to set up an interesting observation or metaphor.

There are infinite other ways to begin, so why not come up with some truly fantastic sentences?

3. The Physical Description

Including visual detail for the sake of checking off the “visual box” isn’t useful. When used well, an image can and should knock you over, change the way you see something, unsettle you or pull you in so that it’s impossible to move on with your day until you learn everything you can about it.

Unless the physical traits of a main character are extremely unusual or central to the story, hold off—and even then, resist if you can. Most of the time this isn’t the most interesting way to begin.

4. The Directions

“To get to Joe’s house, you drive five miles west of town until you hit a dirt road winding toward the base of the mountain, then…”

An extraordinary number of stories begin with the reporter giving directions, in some cases for no apparent purpose. Even when directions do reveal something valuable, visualizing geography requires a lot of imagination on the listeners’ part. It’s too much work to require before you’ve convinced them the story is worth listening to.

Often, I zone out.

5. The Warm And Sunny

Isn’t weather what we talk about with strangers when we can’t think of anything interesting to say? Yes, radio thrives on sensory scenes. But producers need to write them vividly and with precision and purpose. If you want to stun listeners with the top of your story, don’t start with a weather report.

6. The “Okay! So…”

Starting with an off-the-cuff “Okay! So…” is huge right now. It’s colloquial, it’s personal, and it signals we’re jumping into action.

Brilliant producers use this line on brilliant shows, and it works.

But the Okay So has become such a go-to that to me, it’s starting to feel inauthentic, even cliché. When I hear it, I can feel a little manipulated, and I start focusing on the production instead of the story. Unless there’s a really compelling reason to begin with these words—and often there is!—avoid this one.

By refusing to rely on a trick, you’ll force yourself to write something new and strong.

7. The Long Intro

If you listen to PRX Remix, you know that I’m moving toward short intros—or often no host intro at all. I’m all for diving right in and letting a little mystery linger.

8. The Non-Narrated My Name Is

This one’s simple. Except in rare circumstances, start with strong tape, not a self-ID.

9. The Very Important Information

There are lots of issues I care about, but rarely will a story’s importance alone keep me listening.

Don’t start with a fact-vegetable and then assume that I’ll stay with you because I know vegetables are good for me. Start your story with an amuse bouche—a tiny appetizer that bursts with flavor when I pop it in my mouth and leaves me drooling for the main course.

And then I’ll probably eat my vegetables—er, listen to the facts.

10. Anything that isn’t stunning

A first sentence should transfix your listener. It’s competing with music, television, and all of the internet, so find the hook. Pick the detail you can’t stop thinking about and move it to the top. Challenge yourself to find new ways to write about things—which often means pushing yourself beyond the first few drafts—or to go deep right away.

So when I’m working, I repeat to myself:

Don’t start the way you think you have to.

When in doubt, write it better.

When uninspired, think Nancy Updike and her talk Die, Mediocrity, Die. (She has tips for what you should do, not just what you shouldn’t.)

When it’s worth it, break the rules. Even mine.

The post 10 Ways Not To Start A Radio Story appeared first on PRX.

by Erika Lantz at July 24, 2015 08:25 PM

Nick Grossman
Pain x Resistance = Suffering (the case for throughput)

For the past nine months or so, I’ve been seeing a therapist specializing in mindfulness. Perhaps the best decision I’ve ever made.

One of the things we spend a lot of time talking about is resistance – everyone has their own quirks and issues, and that’s one of mine.  The tendency to hit the brakes when faced with something difficult or unpleasant.  Set it to the side, avoid, wait.  Obviously, this is a bad tendency, and only serves to make things worse.

One idea that has come up is the relationship between resistance and suffering. Suffering is the ultimate mindstate we are looking to avoid.  There’s this equation which has really stuck with me :

Pain x Resistance = Suffering

In other words, it is possible (and typical) to start with a relatively painless situation and then amp it up, and multiply the ultimate suffering by resisting it.

I can’t tell you the number of things in my life that I have resisted and avoided which then ultimately ended up being no big deal. And the ultimate suffering was more a result of the resistance than the pain itself.

The mindfulness approach to resistance is to instead turn and face whatever thing you’re avoiding. Just recognize it and be with it. I’ve thought of this before as “living in the fall line”. The opposite of living in a mode of resistance.

Another way of thinking about it is as throughput. Moving items (projects, emails, bills, whatever) through, rather than letting them pile up. Resistance is like arterial plaque. Throughput is the result of keeping things healthy and flowing.

It’s a good feeling.

by Nick Grossman at July 24, 2015 03:11 PM

July 23, 2015

Ethan Zuckerman
Harnessing Mistrust for Civic Action

Yes, it’s international press day here on my old, creaky blog. Friends at Süddeutsche Zeitung asked whether I could turn my Re:publica keynote on mistrust and civics into a newspaper op-ed. Here’s what I came up with, which ran in yesterday’s newspaper.

On Monday, British comedian Simon Brodkin pelted outgoing FIFA leader Sepp Blatter with a stack of dollar bills as Blatter spoke at a press conference. Brodkin’s dollar shower expressed the boundless anger football fans feel about the corruption within football’s world governing body.

When Swiss police arrested senior leaders of FIFA at a posh hotel in Zurich in late May, football fans around the world were shocked. Unfortunately, very few were shocked to learn of corruption in the world governing body of football. Instead, they were surprised that the leaders of an institution with a long reputation for malfeasance might be held responsible for their misdeeds.

This misplaced surprise is characteristic of the current popular mood in many nations. We are so accustomed to news of institutions acting incompetently or unethically that we are less surprised by their misbehavior than that such misbehavior has consequences. Whether we consider the disastrous failures of the US and UK in Iraq from 2003 to the present, the near collapse of the global banking system in 2008 or the discovery of widespread sexual abuse within the Roman Catholic Church over the past two decades, it’s easy to understand why there is pervasive mistrust in many institutions: governments, big business, churches and the press have failed us time and again.

In the US, mistrust in government has deepened over the past 50 years, with 24% of Americans now reporting that they trust their government all or most of the time, down from 77% in 1964. But it’s not only government that Americans mistrust: polls show a steady decline in trust in corporations, banks, newspapers, universities, nonprofit organizations and churches. The only institutions that Americans trust more than they did a generation ago are the military and the police. And while specifics of mistrust differ between the US and Europe, the general pattern is similar. Public relations firm Edelman surveys a thousand citizens in 33 nations each year to build a “trust barometer”, measuring public trust in government, business, nonprofit organizations and the media. According to their survey Germany, Italy, Poland, Spain, Sweden and Ireland all have lower levels of institutional trust than the United States.

One predictable consequence of mistrust in institutions is a decrease in participation. Fewer than 37% of eligible US voters participated in the 2014 Congressional election. Participation in European parliamentary and national elections across Europe is higher than the US’s dismal rates, but has steadily declined since 1979, with turnout for the 2014 European parliamentary elections dropping below 43%. It’s a mistake to blame low turnout on distracted or disinterested voters, when a better explanation exists: why vote if you don’t believe the US congress or European Parliament is capable of making meaningful change in the world?

In his 2012 book, “Twilight of the Elites”, Christopher Hayes suggests that the political tension of our time is not between left and right, but between institutionalists and insurrectionists. Institutionalists believe we can fix the world’s problems by strengthening and revitalizing the institutions we have. Insurrectionists believe we need to abandon these broken institutions we have and replace them with new, less corrupted ones, or with nothing at all. The institutionalists show up to vote in elections, but they’re being crowded out by the insurrectionists, who take to the streets to protest, or more worryingly, disengage entirely from civic life.

Conventional wisdom suggests that insurrectionists will grow up, stop protesting and start voting. But we may have reached a tipping point where the cultural zeitgeist favors insurrection. My students at MIT don’t want to work for banks, for Google or for universities – they want to build startups that disrupt banks, Google and universities.

The future of democracy depends on finding effective ways for people who mistrust institutions to make change in their communities, their nations and the world as a whole. The real danger is not that our broken institutions are toppled by a wave of digital disruption, but that a generation disengages from politics and civics as a whole.

It’s time to stop criticizing youth for their failure to vote and time to start celebrating the ways insurrectionists are actually trying to change the world. Those who mistrust institutions aren’t just ignoring them. Some are building new systems designed to make existing institutions obsolete. Others are becoming the fiercest and most engaged critics of our institutions, while the most radical are building new systems that resist centralization and concentration of power.

Those outraged by government and corporate complicity in surveillance of the internet have the option of lobbying their governments to forbid these violations of privacy, or building and spreading tools that make it vastly harder for US and European governments to read our mail and track our online behavior. We need both better laws and better tools. But we must recognize that the programmers who build systems like Tor, PGP and Textsecure are engaged in civics as surely as anyone crafting a party’s political platform. The same goes for entrepreneurs building better electric cars, rather than fighting to legislate carbon taxes. As people lose faith in institutions, they seek change less through passing and enforcing laws, and more through building new technologies and businesses whose adoption has the same benefits as wisely crafted and enforced laws.

“Monitorial citizens” are activists whose work focuses on watching and critiquing the work conducted by institutions. The young Italians behind a project that invites citizens to visit, investigate and review projects paid for with European cohesion funds are monitorial citizens. So are the civilians who review complaints against the police, holding commanders accountable for mistreatment of citizens. The rise of new tools and techniques, including video sharing and crowdsourced reporting, is helping mitigate the power imbalances between established institutions and the citizens who want to hold them accountable.

Some of the most radical thinking about a post-institutional future comes from proponents of systems like bitcoin, a virtual currency designed to free its users from trusting in central banks and the governments that back them. Internet advocates have a long track record of supporting decentralized systems, from mesh networks that provide internet connectivity without a central internet service provider, to Eben Moglen’s “Freedom Box”, a system for serving webpages that mirrors content around the internet, rather than centralizing it on a single server. But decentralization is a difficult technical problem. Technical systems like Google and Facebook have become powerful institutions not just due to the ambitions of their founders, but from the difficulty of building search engines and social networks in a decentralized way.

Could citizen monitors of FIFA have kept Qatar from hosting the 2022 World Cup? Would decentralized social networks have resisted NSA surveillance? Maybe so, maybe not. But the citizens finding ways to challenge institutions and engage in politics through other means are the ones to watch in this age of mistrust.

by Ethan at July 23, 2015 06:25 PM

Who benefits from doubt? Online manipulation and the Russian – and US – internet

I was asked by an editor at RBC, one of Russia’s best respected independent news organizations, to offer my thoughts on the Russian/US infowar. It was a great chance to think about Adrian Chen’s provocative tale about Russia’s Internet Research Agency (a topic that Global Voices RuNet Echo has done a terrific job of covering) and broader questions about skepticism, mistrust and who benefits from doubt. The piece ran on RBC today in Russian, but my English language text follows below.

In early June, the New York Times Sunday Magazine ran a story by investigative reporter Adrian Chen about a Russian “troll factory” in St. Petersburg, linked to Evgeny Prigozhin, reported to have close ties with Vladimir Putin. In the article, Chen interviewed Lyudmila Savchuk, a whistle blower who is suing the Internet Research Agency, her former employer, in hopes of shutting down their operations of posting pro-Kremlin comments on social media sites in English and Russian.

Until Chen’s story, many American readers had never heard of paid Russian propagandists writing online. But followers of the RuNet, Russia’s online spaces, have seen the Russian internet as one of the world’s most fiercely contested online spaces. In 2011, internet researchers in the US and Canada published a book, “Access Contested”, which suggested that battles over online spaces were progressing from censorship – preventing the posting of controversial content or preventing a nation’s citizens from reading that content – to a more complex model of contestation, where governments used a wide range of methods to disrupt dialog online: harassing users with frivolous lawsuits, rendering sites unavailable via denial of service attacks, and flooding comment threads. While these tactics have become popular worldwide, anywhere governments wish to disrupt online speech, many of them were pioneered in Russian cyberspace. My coauthors and I documented some of these early attacks, including attacks on Novaya Gazeta, in a 2010 study published by the Berkman Center at Harvard University.

What was surprising about Chen’s story was not that people were producing pro-government comments in Russian, but that this same Internet Research Agency appeared to be responsible for a set of fabricated news stories, released in English and intended to mislead US audiences. These stories have fascinated and baffled American media scholars. They are complex hoaxes, involving dozens of social media accounts, fake websites and fake YouTube videos, all towards the apparent goal of making American social media users believe that a chemical plant in Louisiana had been attacked by ISIS terrorists, or that there had been an outbreak of Ebola in Atlanta. These hoaxes were not successful in fooling many people for very long – they were quickly dismissed after mainstream news reports made clear that these tragedies had not occurred.

These hoaxes suggest an interesting new chapter in the ongoing infowar between the US and Russia. The goal of the infowar may no longer be to promote or discredit either the Kremlin or the White House. The goal may be to destroy trust in the internet, in social media and in news.

For decades, nations have worked to produce news that reflects their specific point of view. The United States Broadcasting Board of Governors oversees Voice of America, Radio and TV Marti (for Cuban audiences), Al-Hurra TV and Radio Sawa (for Arabic-speaking audiences), Radio Free Asia, and Radio Liberty/Radio Free Europe, which includes Radio Svoboda, aimed at Russian audiences. Defenders of these projects see them as providing objective news reporting in countries where press freedom is constrained. Others – including some US legislators – see these stations as pro-US propaganda. Until 2013, Voice of America was banned from broadcasting in the US because Congress believed that these broadcasts, played in the US, would function as pro-government propaganda. In recent years, BBG has broadened its remit beyond broadcasting, and proposed spending $12.5 million in 2016 to support internet anti-censorship technologies, intended to allow citizens of countries that censor the internet to access blocked content.

It should not have been a surprise that Russia would take to international broadcasting to promote a national agenda, joining state-sponsored channels France24 (France), CCTV (China), and Al Jazeera (Qatar). These channels have experimented with different mixes of news reporting and public diplomacy, sometimes coming under fire for compromising journalistic standards in favor of national interests.

Russia Today (RT) has taken some unusual and surprising approaches in deploying this tool of soft power. The network promotes a view of Russia as defender of the principle of international sovereignty in the face of relentless US-led globalization, a viewpoint that turns not only protests in Armenia into a US-led grab for power, but also the arrest of FIFA officials into a plot to strip Russia of the 2018 World Cup. While Al Jazeera, in particular, has worked hard to gain respect as a journalistic outlet rather than a government mouthpiece, Russia Today seems content to take an explicitly pro-Russian, anti-US stance.

And then there’s the weird stuff. As Ilya Yablokov of the University of Leeds has observed, Russia Today seems to be trying to cultivate a US audience of conspiracy theorists. Yablokov notes that one of the first stories RT ran after launching RT America in 2010 was titled “911 Reasons Why 9/11 Was (Probably) an Inside Job”. The idea that the US government killed over 3000 of its own citizens, including 500 police officers and firefighters, as a pretext to invade Iraq, is deeply offensive to most Americans, and unlikely to win RT a broad US audience. But as Yablokov notes, that may not be the point.

There’s a long history in American politics of conspiracy theories gaining wide audiences. Historian Richard Hofstadter identified this in 1964 as “The Paranoid Style in American Politics”, a tendency for those who feel alienated and dispossessed to see America as controlled by a secret cabal. Knowing that it is unlikely to persuade the majority of Americans to see their government as a global hegemon and Russia as the tireless defender of sovereign nations, perhaps RT is appealing to those who are predisposed to “Question More”, as the network’s slogan suggests. While that approach won’t work for most Americans, it may work for the 19% of Americans who believe the government was responsible for the 9/11 attacks.

Bulgarian political scientist Ivan Krastev suggests that a Russian focus on conspiracy theories, especially about outside agitation in creating “color revolutions”, is consistent with Russia’s preferred framing of the world – sovereignty versus agitation – rather than the US’s preferred framing – democracy versus authoritarianism. Brian Whitmore, a senior correspondent for RFE/RL, argues that conspiracy theories suggest a government incapable of taking citizen movements seriously. Researchers have documented attempts by the government of Azerbaijan to portray the internet as a dangerous and lawless space, linking internet usage to sexual abuse of children, trafficking of women, breakdowns of marriages and mental illness. The campaign has been quite successful, keeping 86% of Azeri women offline and helping ensure that internet penetration in Azerbaijan has stayed far behind that of its neighbors, Georgia and Armenia. Turkish media scholar Zeynep Tufekci suggests that Erdogan’s government has deployed similar tactics in Turkey, working to demonize social media in the hopes of keeping his large support base off these networks, which are heavily used by opposition organizers.

Raising doubt in online media as a whole might help explain why a Russian firm would start easily dismissed rumors on American social networks. The net effect of these rumors has been to remind American Internet users that everything they read online should be doubted until it can be vetted and verified. And RT’s main brand message is that Americans shouldn’t trust their government or their media, as both are hiding the “other side” of the narrative, and the secrets behind far-reaching conspiracies.

But the question remains: who benefits from doubt?

Historians Naomi Oreskes and Erik M. Conway have a possible answer. Their book “Merchants of Doubt” looks at techniques used by energy industry lobbyists in the US to create uncertainty and doubt about climate change. They trace these techniques back to the tobacco industry, which used similar tactics for decades to prevent tobacco from being regulated as a drug. Their key weapon was doubt. Tobacco companies sponsored legitimate medical research on other causes for cancer and heart disease. The net result was that they kept alive the appearance of a debate about whether tobacco use was the primary cause of lung cancer for far longer than there was an actual scientific debate. Similarly, climate scientists sponsored by energy companies insist that there is a diversity of opinion about humans’ role in creating climate change, relying on the media’s tendency to tell both sides of a story and keep a “debate” alive years beyond when it would otherwise be settled.

Who benefits from doubt? Ask instead who benefits from stasis. So long as there was doubt that cigarettes caused cancer, regulators were less willing to label packages, restrict their sales or ban them altogether. So long as there is doubt about humanity’s role in climate change, governments are less likely to pass carbon taxes, ban the burning of coal or subsidize the shift to renewable energy. It’s not necessary to persuade people that cigarettes are safe to smoke or that we can burn coal indefinitely without raising global temperatures – it’s enough to raise sufficient doubt to lead to paralysis.

Stasis benefits the Russian state. People baffled by claims and counterclaims over whether Russian troops are in Ukraine or whether the US toppled the Yanukovych government are less likely to demand NATO military intervention in Crimea. Russian citizens who wonder whether Alexei Navalny is an embezzler are less likely to support his candidacy. Internet users who doubt whatever they see online are less likely to use social media to organize and topple those who are currently in power.

It’s expensive to persuade someone to believe something that isn’t true. Persuading someone that _nothing_ is true, that every “fact” represents a hidden agenda, is a far more efficient way to paralyze citizens and keep them from acting. It’s a dark art, one with a long past in Russia and in the US, and one we’re now living with online.

by Ethan at July 23, 2015 05:10 PM

Nick Grossman
Here’s the solution to the Uber and Airbnb problems — and no one will like it

It’s been a fascinating week to watch the war between Uber and the de Blasio administration play out.

Not surprisingly, Uber ended up carrying the day using a combination of its dedicated user base and its sophisticated political machine.

This is yet another very early round in what will be a long and hard war — not just between Uber and NYC, or Uber and other cities, but between every high-growth startup innovating in a regulated sector and every regulator and lawmaker overseeing those sectors.

Watching the big battles that have played out so far — in particular around Uber and Airbnb — we’ve seen the same pattern several times over: new startup delivers a creative and delightful new service which breaks the old rules, ignoring those rules until it has a critical mass of happy customers; regulators and incumbents respond by trying to shut down the new innovation; startups and their happy users rain hellfire on the regulators; questions arise about the actual impact of the new innovation; a tiny amount of data is shared to settle the dispute.  Rinse and repeat, over and over.

I am not sure there’s a near term alternative to this process — new ways of doing things will never see the light of day if step 1 is always “ask permission”.  The answer will nearly always be no, and new ideas won’t have a chance to prove themselves.

Luckily, though, we have somewhat of a model to follow for a better future.  It’s the way that these new platforms are regulating themselves.  My colleague Brad has long said that web platforms are like governments, and that’s becoming clearer by the day (just look at Reddit for the latest chapter).

The primary innovation that modern web platforms have created is, essentially, how to regulate, adaptively, at scale.  Using tons and tons of real-time data as their primary tool, they’ve inverted the regulatory model.  Rather than seek onerous up-front permission to onboard, users onboard easily, but are then held to strict accountability through the data about their actions.


Contrast this with the traditional regulatory model — the one government uses to regulate the private sector — and it’s the opposite: regulations focus on up-front permission as the primary tool.


The reason for this makes lots of sense: when today’s regulations were designed (largely at the beginning of the progressive era in the early 20th century), we didn’t have access to real-time data.  So the only feasible approach was to build high barriers to entry.

Today, things are different.  We have data, lots of it.   In the case of the relationship between web platforms (companies) and their users, we are leveraging that data to introduce a regulatory regime of data-driven accountability.  Just ask any Uber driver what their chief complaint is, and you’ll likely hear that they can get booted off the platform for poor performance, very quickly.
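The inverted model described here — onboard easily, then hold users to account through data about their actions — can be sketched as a toy rule. The window size and rating threshold below are invented for illustration; they are not Uber’s actual policy:

```python
from collections import deque

class Driver:
    """Toy platform-style regulation: permissionless onboarding,
    continuous data-driven accountability."""

    def __init__(self, window=50, min_rating=4.6):
        self.ratings = deque(maxlen=window)  # only recent trips count
        self.min_rating = min_rating
        self.active = True                   # no up-front gate

    def record_trip(self, rating):
        # Every new data point re-evaluates the driver's standing.
        self.ratings.append(rating)
        if len(self.ratings) == self.ratings.maxlen:
            average = sum(self.ratings) / len(self.ratings)
            if average < self.min_rating:
                self.active = False  # booted for poor performance
```

The contrast with the traditional model is where the gate sits: here the check runs after every trip, rather than once at onboarding.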

Now, the question is: how can we transform our public regulations to adopt this kind of model?  Here’s the part that no one will like:

1) Regulators need to accept a new model where they focus less on making it hard for people to get started.  That means things like relaxing licensing requirements (for example, all the states working on Bitcoin licensing right now) and increasing the freedom to operate.  This is critical for experimentation and innovation.

2) In exchange for that freedom to operate, companies will need to share data with regulators — un-massaged, and in real time, just like their users do with them — AND will need to accept that that data may result in new forms of accountability.  For example, we should give ourselves the opportunity to enjoy the obvious benefits of the Ubers and Airbnbs of the world, but also recognize that Uber could be making NYC traffic worse, and Airbnb could be making SF housing affordability worse.

In other words, grant companies the freedoms they grant their users, but also bring the same data-driven accountability.


That is going to be a tough pill to swallow, on both sides, so I’m not sure how we get there.  But I believe that if we’re honest with ourselves, we will recognize that the approach to regulation that web platforms have brought to their users is an innovation in its own right, and is one that we should aim to apply to the public layer.

Over at TechCrunch, Kim-Mai Cutler has been exploring this idea in depth. In her article today, she rightly points out that “Those decisions are tough if no one trusts each other” — platforms (rightly) don’t trust regulators not to instinctively clamp down on new innovations, and regulators don’t trust platforms to EITHER play by the existing rules OR provide in-depth data for the sake of accountability.

In the meantime, we’ll get to observe more battles as the war rages on.

by Nick Grossman at July 23, 2015 03:41 PM

Bruce Schneier
Remotely Hacking a Car While It's Driving

This is a big deal. Hackers can remotely hack the Uconnect system in cars just by knowing the car's IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64. After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek's full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep's brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they're working on perfecting their steering control -- for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep's GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there's a Senate bill to improve car security standards. Honestly, I'm not sure our security technology is enough to prevent this sort of thing if the car's controls are attached to the Internet.

by Bruce Schneier at July 23, 2015 11:17 AM

Organizational Doxing of Ashley Madison

The -- depending on who is doing the reporting -- cheating, affair, adultery, or infidelity site Ashley Madison has been hacked. The hackers are threatening to expose all of the company's documents, including internal e-mails and details of its 37 million customers. Brian Krebs writes about the hackers' demands.

According to the hackers, although the "full delete" feature that Ashley Madison advertises promises "removal of site usage history and personally identifiable information from the site," users' purchase details -- including real name and address -- aren't actually scrubbed.

"Full Delete netted ALM $1.7mm in revenue in 2014. It's also a complete lie," the hacking group wrote. "Users almost always pay with credit card; their purchase details are not removed as promised, and include real name and address, which is of course the most important information the users want removed."

Their demands continue:

"Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers' secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online."

Established Men is another of the company's sites; this one is designed to link wealthy men with young and pretty women.

This is yet another instance of organizational doxing:

Dumping an organization's secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

EDITED TO ADD (7/22): I don't believe they have 37 million users. This type of service will only appeal to a certain socio-economic demographic, and it's not equivalent to 10% of the US population.

This page claims that 20% of the population of Ottawa is registered. Given that 25% of the population are children, that means it's about 27% of the adult population: 189,000 people. I just don't believe it.

by Bruce Schneier at July 23, 2015 06:33 AM

Joseph Reagle
The skew of rotten-apple jerks

An interesting study is being widely commented on, but, as is often the case, the press glosses about "harassment" are a bit askew. For instance, the Washington Post (cleverly) reports that "Men Who Harass Women Online are Quite Literally Losers." The actual study is entitled "Male Status and Performance Moderates Female-directed Hostile and Amicable Behaviour". In the study, Halo 3 games were recorded by way of three experimental player accounts: control (no voice channel), male, and female voices. Interactions between these accounts and other (focal) players were transcribed and coded (N=126) as positive, negative, or neutral. Skill level of focal players correlated with the valence of their comments; that is, higher-skill male players made more positive and fewer negative statements towards women.

The authors don't mention "harassment." Also, because of the small sample size and the fact that only 13% of it (11 individuals) uttered hostile sexist statements, "We found that the presence of sexist statements was not determined by differences in maximum skill achieved." The paper is really about the extent to which lower-status male players are bigger jerks to women players. They did find this with respect to negative and positive statements -- and we could (rightfully) call this sexist itself -- but they didn't have the statistical power to conclude anything about hostile sexist statements.

What I found interesting, methodologically, is that for the analysis they had to exclude two mega-jerks as outliers. "For the examination of negative statements, there were two focal players in the female-voiced manipulation that made 10 more negative statements than the next highest individuals (greater than 5 standard deviations from the mean). As a result, we removed them from our analysis to ensure they did not skew our results towards significance." Given the "rotten apple" thesis (a minority of jerks can spoil the barrel), what they did for the purposes of statistical analysis doesn't correspond to the experience women players may have. A minority of awful people can send the majority of awfulness. That is, I believe that if we excluded 5% of the most awful people online as outliers, the Net would be a lovely place!
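The exclusion rule the authors describe -- dropping players more than 5 standard deviations above the mean -- is easy to replicate. The counts below are made up for illustration; they are not the study's data:

```python
from statistics import mean, stdev

def exclude_outliers(counts, cutoff_sd=5):
    """Drop values more than cutoff_sd standard deviations above the
    mean, mirroring the paper's treatment of its two extreme players."""
    m, s = mean(counts), stdev(counts)
    return [c for c in counts if c <= m + cutoff_sd * s]

# Fifty ordinary players plus one mega-jerk with 60 negative statements.
negatives = [0, 1, 2, 1, 0, 3, 2, 1, 0, 1] * 5 + [60]
cleaned = exclude_outliers(negatives)  # the 60 is dropped
```

Note how the point above survives the cleanup: the trimmed list is statistically well-behaved, but the targets of those 60 statements still experienced every one of them.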

by Joseph Reagle at July 23, 2015 04:00 AM

Bruce Schneier
Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The "cost" of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

by Bruce Schneier at July 23, 2015 12:26 AM

July 22, 2015

Willow Brugh
Accountability in Response

I’ve started writing about response over on the Aspiration blog, but this one still has cursewords in it, and is very much in my own language, so I figured I’d post it here first.

The problems our planet is facing are becoming more extreme. People and politics mean there are larger populations more densely packed in cities. Nomadic populations traveling along their historical routes are now often crossing over arbitrary (have you *seen* some of the country lines people in Western countries have drawn in places they might never have even been!?) political boundaries, making them refugees or illegal immigrants. Climate change means more and more extreme events are impacting those populations. We have *got* to get our shit together.

In all this, the people who have been historically marginalized often become even more so as those in power see scarcity encroaching on their livelihoods. But the ability to hold people accountable in new ways (through things like social media), as well as (I hope) a real awareness and effort in the long arc towards equality, means there are groups of people seeking new ways to better allocate resources to those most affected by these events. Often, these groups are also in a post-scarcity mentality — that, when we work together, wisely, we can do a whole lot more with a whole lot less. These are folk who think we *can* reach zero poverty and zero emissions (within a generation). These are the folk who see joy in the world, and possibility.

The resource allocation and accountability necessary for these transitory steps towards a world that can survive and even thrive won’t happen in a vacuum. In the organizations, governments, and grassroots efforts there are entire supply chains, and ways of listening (and to whom), and self-reflexive mechanisms to consider. In these are embedded corruption, and paternalism, and colonialism. In these, too, are embedded individuals who have been Fighting The Good Fight for decades. Who have added useful checks and amplifiers and questions. It’s into this environment we step. It is, at its core, like any other environment. It has History.

It’s in this context that I’m so excited about Dialling Up Resilience. It taps into questions of efficacy in programming by using and contributing to metrics for success in building resilience. It assumes good faith in policy makers and implementers by offering up data for them to do their jobs better. It protects against bad actors by providing granular, speedy data aggregated enough to protect data providers but transparent enough to be clear when a program is working (or not, if those we’re assuming good faith in don’t actually deserve that). And, my favorite part — instead of contorting and posturing about what makes people able to bounce back faster after a climate-related shock… we just ask them. Of course, it’s a bit more complicated than that. But the core is there.

We’ll be working with a few different groups in Kenya, including the National Drought Management Authority (and their Ending Drought Emergencies program) and UNDP on their existing surveying initiatives, as well as groups like GeoPoll (SMS), Twaweza (call center), and Kobo (household) on stand-alone surveys about how communities establish and track their own resilience. If we get the grant extension, we’ll work more directly with communities using tools like Promise Tracker and Landscape (a digitized version of Dividers & Connectors) to be closer to their own data, and to subsequently be able to have more agency over their own improvement as well as accountability.

What’s also exciting is that our means and our ends match. I was recently in Nairobi for a stakeholder workshop with not only the project partners, but also with the organizations which would eventually make use of the data. We’ve been conducting community workshops to test our basic assumptions and methods against reality, as well as to be sure community voice is at the core of each component we consider. We’ve thrown a lot out… and added some amazing new things in. We’re hoping to break down the gatekeeper dynamic of accessing communities in the Horn of Africa, and we want to be coextensive with existing programs (rather than supplanting them). It’s feminist and it’s development and I’m kind of fucking thrilled.

by bl00 at July 22, 2015 06:52 PM

Justin Reich
Practical Guidance from MOOC Research: Students Learn by Doing
MOOC developers should focus on the creation of activities and interactives, and less on high quality video.

by Justin Reich at July 22, 2015 02:40 PM

July 21, 2015

Justin Reich
Practical Guidance from MOOC Research: Helping Busy Students Stick to Plans
Students leave MOOCs because they get too busy; can we help them stick to their plans better?

by Justin Reich at July 21, 2015 11:48 PM

Welcome Gina James!


If someone were to draw a picture of me when I was a little girl: I’d be standing in the middle of a tomato garden with seeds and juice dripping off my chin.

If someone were to draw a picture of me last year: I’d be on stage, at a Moth StorySLAM, baring my soul to complete strangers.

If someone were to draw a picture of me last month: I’d be in a hot & sweaty muscle confusion class, Eye of the Tiger blasting in the background.

If someone were to draw a picture of me last week: I’d be sitting on a porch swing next to a 95 year old farmer in West Virginia, recording his life story.

If someone were to draw a picture of me today: I’d be literally jumping up & down with excitement – I’m an official part of the PRX Crew as the newest Manager of Development and Operations!!!

My name is Gina James. After studying Cultural Anthropology + Business Administration at BU, I’ve journeyed through various industries (education, travel, hospitality, tech) … to finally land where my heart has always been: public radio.

For the last 12 months I have focused mostly on the craft of gathering oral history. It was this past year when I had an ‘aha’ moment of a lifetime. I love listening to stories via audio because unlike other forms of media, you must rely on your heart to truly assess the content. You are able to experience an individual’s true voice instead of the masks that can be created through literary tricks.

Radio is Real.

I can’t wait to bring more of it to every pocket of our world.

The post Welcome Gina James! appeared first on PRX.

by Gina James at July 21, 2015 06:36 PM

metaLAB (at) Harvard
Beautiful Data II

Collecting inherently involves choices — what to acquire or not acquire, preserve or not preserve, and what to exhibit or not to exhibit, whether that collecting occurs in the physical or virtual realm.


We sift through what is available and sort out materials, media, and metadata based on subjective relevances, and so when collections get established, reconfigured, appropriated, or integrated with others, they all surface themselves as problem collections. This was an underlying theme of the second edition of Beautiful Data, a 9-day workshop sponsored by the Getty Foundation and this year hosted in a combination of spaces at the Harvard Art Museums and the Carpenter Center for the Visual Arts.


Participants were able to dialogue with content in the Harvard Art Museums’ galleries and conservation center, and look at materials behind-the-scenes in the Arts Study Center, while also having a space back at the Carpenter Center to engage on a journey of rapidly prototyping ideas within a cross-pollinated and interdisciplinary environment.



What unfolded was a dynamic mix of specific micro-examinations of particular collections and enlarged perspectives at the system level, using the macro view that metadata lends to help us question how we label and describe the world around us. But there were also many conceptual forays into qualitative and quantitative art scholarship, exhibition, and even law, with a look at how museums convey terms of service.

One moment brought discourse on the decontextualization of King Tutankhamun’s loin cloths outside the original tomb, while the next would examine how art is catalogued in ways that exhibit absences and excesses of race. Cat photos lent themselves to discussion about web privacy and the Carpenter Center’s architectural features were used to create an algorithm for displaying media. As some proposed modes to reintroduce the use of senses forbidden to the museum visitor, others sought ways to capture asynchronous emotional engagement with art through a generative projection exercise.

Through paper, code, video, animation, wireframes, and other means, participants experimented and expressed curiosities as both individuals and teams with both personal and institutional goals. Much can be done to test ideas with simple arts and crafts materials, basic technology, or, even just some imagination. It was a thrill to see the community of Beautiful Data II test the limits with what was at their disposal, expanding the field of possibility and then working to isolate a few ideas to test from an abundance of concept.

by Cristoforo Magliozzi at July 21, 2015 06:04 PM

Berkman Center front page
Student Case Study Writing Competition: Innovative Multistakeholder Governance Groups


Deadline for Submissions: September 15th, 2015


The Berkman Center for Internet & Society is excited to announce a writing competition to identify innovative multistakeholder governance groups and help us understand the conditions under which they are most effective. 

We are seeking original papers (8 to 12 pages, single spaced) that help us better understand innovative, globally diverse governance groups.  The top submissions will receive cash stipends.

Although “multistakeholder governance” has many meanings, at its core it encompasses a variety of decision-making approaches that incorporate representatives from multiple groups in discussions and the formation of outcomes.  When we think of multistakeholder governance, ICANN and NETmundial are some of the most prominent examples that come to mind.  But multistakeholder governance has a rich and complex history, with many diverse and interesting examples within but also far beyond the realm of Internet governance.  This competition is an opportunity for students and post-doctoral scholars to help expand our understanding of multistakeholder governance groups and what we can learn from such groups with respect to Internet governance.

The Berkman Center, in collaboration with the Global Network of Interdisciplinary Internet & Society Research Centers (NoC), recently examined twelve diverse examples of multistakeholder governance groups. Those included:  the coordination of slotting guidelines for busy airports, Bitcoin code development processes, Creative Commons, anti-spam efforts in Brazil, the German Enquete Commission on Internet and Digital Society, water resource management in the White Volta River Basin, Internet exchange points, Israel’s National Cyber Bureau, Marco Civil, NETmundial, Switzerland’s coordinated deployment of fiber optic cable, and Turkey’s Internet Improvement Board.  

Through this writing competition, we are seeking submissions that will add to this list and help us explore other globally diverse and unique real-world examples.  A sample case study (based on the NoC case study about Switzerland’s fiber optic cable deployment) is available for reference here [PDF]. From the submitted case studies, we will select the top three.  First place will receive a cash stipend of $4000; second place will receive $3000; and third place will receive $2000.  Additional awards for honorable mention may be given at the discretion of the Berkman Center.  These cash stipends are made possible by a generous Google Research Grant awarded to the Berkman Center. In addition, the top case studies will be published as part of a forthcoming Berkman Center report on multistakeholder governance groups.

Content Requirements:
Case studies must identify a single unique governance group and address the following questions:

  • How and why was the governance group formed?  

  • How does it operate?  

  • Who participated in the group and how?  

  • How did participation change over time?  

  • What challenges did the group face and how did it overcome them?

  • What makes this group unique?  

  • What makes this group successful?

The governance group does not have to relate to the Internet.  In fact, the more topically and geographically unique the case study, the more helpful it will be. To this end, case studies should not overlap with the existing case study research series.

  • Case studies may use interviews with participants.  They also should rely on peer-reviewed research sources or equivalent scholarship (e.g., scholarly articles) or reputable news stories. The papers must reflect that research accurately, and must appropriately attribute and cite that research.  

  • Case studies must be new work, written by the student specifically for this competition.

  • Case studies must be in English, well written, and accessible to both scholars and policy makers, in terms of language, style, and length.

  • The research can be from any one of a number of fields and disciplines, including social science, natural science, health, or law.

Length and Formatting requirements:

  • Case studies should be an appropriate length for the given audience and goal. Most papers should be 8 to 12 pages, single spaced.  A sample case study is available here (PDF) as an example of expectations with regard to style, structure, length, and format.

  • Each case study must be submitted in Microsoft Word, single-spaced 12-point Times New Roman font. Margins should be 1 inch at the top and bottom of the page and 1 ¼ inches from the left- and right-hand sides of the page.

  • Papers should include the author’s name, university affiliation, and contact information.

What to Submit and Deadline:  
Students should submit the following no later than September 15, 2015:

  • The case study

  • The student’s CV

Review Criteria:
Case studies will be assessed on the following criteria:

  • How the case study would contribute to a growing catalog of innovative multistakeholder governance groups

  • How complementary it is to the previous NoC case studies

  • Whether it represents a geographically unique example

  • The quality of the analysis, writing, and research

  • The completeness of the case study and its research

  • How it advances our understanding about the role of multistakeholder governance groups

Case studies will be reviewed by experts in the field from the Berkman Center.

Acceptance Process: The Berkman Center will inform students in October 2015 whether or not their case study has won a prize.  If a paper is selected for inclusion with the Berkman Center’s research, the Center may provide the student with editing suggestions or requirements.

Prize Payment:  The Berkman Center will provide a stipend to the students selected for the best submissions, subject to applicable law.

Publication: Papers selected for inclusion with the Berkman Center’s research will be published on the Berkman Center’s website and the Network of Centers’ Publixphere page, and uploaded to the Working Paper Series on SSRN.  

Eligibility:  Case studies must be written by students, over the age of 18, currently pursuing a post-secondary degree (e.g., bachelor’s, master’s, PhD, or post-doctoral research).  Students from outside the United States are strongly encouraged to apply.

Copyright: Winning submissions will be licensed under a Creative Commons Attribution 4.0 International License.  Authors retain ownership of their submissions, and Berkman’s use of the works will be fully attributed to the authors.

Reservation of Rights: At all steps during this process and at all times, the Berkman Center retains the sole right to decide whether or not to accept papers, publish papers, and/or remove papers from publication.  

Additional Information About The Berkman Center’s Internet Governance Research:
Internet governance is an increasingly complex concept that operates at multiple levels and in different dimensions, making it necessary to have a better understanding of both how multistakeholder governance groups operate and how they best achieve their goals. With this need in mind, at a point where the future of Internet governance is being re-envisioned, the Berkman Center worked with colleagues from several NoC institutions around the world to research and write twelve case studies examining a geographically and topically diverse set of local, national, and international governance models, components, and mechanisms from within and outside of the sphere of Internet governance. Key findings from these cases were summarized in a synthesis paper published in January 2015, which aims to deepen our understanding of the formation, operation, and critical success factors of governance groups and even challenge conventional thinking.

Attachment: Swiss ComCom_abridged.pdf (227.64 KB)

by gweber at July 21, 2015 02:10 PM

Ethan Zuckerman
Instaserfs: Precarious Employment in the New – and Old – Economy

(This summer, I’m going to publish some of my work on FOLD, the beautiful platform my student Alexis Hope is building. There’s a graphically enhanced version of this story there.)

Benjamen Walker makes some of the best radio around. (Okay, it’s mostly digitally-delivered audio storytelling these days, but who’s counting?) His finest work tends to come out in series of podcasts, exploring a complex issue through interviews and stories that unfold over two or more sequential weekly episodes.

The most recently concluded series is called “Instaserfs” and it focuses on the “sharing economy”, aka the “1099 economy”, the “gig economy” or, as Ben offers, “the demand economy” or “the exploitation economy”. Struck by the ability to outsource virtually any task, Benjamen hires San Francisco native Andrew Callaway to make three episodes of his podcast as an “Instapodder”. The working method? Andrew’s task is to take on as many sharing economy jobs as he can and to report back to Benjamen about the experience, and whether he can pay his San Francisco rent with the money he earns. (Spoiler alert: he can’t.)

There’s no shortage of articles out there with titles like “I spent a week as a Lyft driver/ Taskrabbit/ Instacart shopper“, so this experiment is hardly original. But following along as a listener as Andrew goes through the process of deciding where to work, becoming a contractor, trying out the work arrangement and hearing the frustrations and small joys makes for some excellent listening. We get a taste of the horribly repetitive onboarding sessions, where the main point is to ensure the contractor knows that the company absolutely, positively won’t be responsible for anything bad that happens. We learn about the unpredictability of earning on the platforms, the radical difference between a good day and a bad day as a Lyft driver. We get a sense that some of these platforms treat their workers well – Taskrabbit and are ones Andrew expresses particular fondness for – though even good platforms change the rules of the game, and these changes always make things harder for the contractor. We learn that an alarming number of San Franciscans pay a sharp premium to have Chipotle burritos delivered to them.

Ben and Andrew identify the ways that these services create a conceptual gap between the haves and have nots, those who can afford a $9 delivery charge for a burrito, and those who wait in line to earn their share of the delivery fee. Losing that collective experience of waiting in line, the leveling effect of shared inconvenience, Andrew speculates, is making the wealthy into nastier people… and the behavior of some of the oafish tech bros he encounters as a Lyft driver makes the case that these services are somehow unhealthy for society as a whole.

There’s utility in this insight, and in the shame that Andrew sees in the users, who seem embarrassed that they’re paying people to do their laundry. Outsourcing your routine tasks to a poorly-paid contractor is good for efficiency, but likely bad for something else. And some of the services Andrew works for seem designed to create class warfare. In the third episode, Andrew begins working for ManServants, a company whose core premise is so uncomfortable, I spent an enjoyable hour trying to determine whether the company is real or a splendid art piece. (Yes, it’s a service to let women rent well-dressed men, at $125 per hour, to act as “personal photographer, bartender, bodyguard, and butler all in one.” Yes, it appears to be real – Lane Moore tried it out and wrote about it for Cosmo – and doesn’t appear to be stripper rental in disguise.) But the main point of Instaserfs, for me at least, was not that rising inequity is turning America into Downton Abbey, but how badly the service economy is stacked against its participants.

Near the end of the second episode, as Andrew settles into his new lifestyle, he begins interviewing other 1099 workers. Andrew confesses to a driver for Luxe, a company that provides valet parking services, that he’s terrified to try working for the company out of fear of damaging a client’s car. The Luxe driver tells him that he’s right to be worried – he dinged a client’s truck the other day and is now on the hook for the damages. Luxe insures customers’ vehicles, but contractors are liable to pay the $500 deductible if they damage the car. The Luxe contractor explains that the company will deduct the deductible from his paycheck automatically and break it up over the course of months, if need be.

Given the modest amounts these jobs pay, a $500 payment is a major, potentially crippling, setback (something that wouldn’t have been clear to me, had I not listened to two episodes of Andrew figuring out whether his jobs had paid enough to cover gasoline for his car.) This practice of limiting liability and transferring it to the “contractor” is routine for this emerging industry, and seems like the core sin of this business model. Yes, the work and pay are unpredictable, the workplace rules arbitrary and sometimes demeaning. But a job where it’s common to end up owing the employer more than when you started working sounds like something out of the days of the company store.

Benjamen and Andrew have fun exploring this question of capital and of risk. Andrew can’t get a job as an Uber driver because of a dent in his bumper, which will cost thousands to fix, and Benjamen is unwilling (and probably unable as a podcast producer) to invest that capital in Andrew’s “business”. Later, ManServants cuts Andrew off until he can upgrade his shoes, which don’t meet their high standards – in this case, Benjamen is willing to dip into his own funds in the hopes of obtaining tape of Andrew on the job. Benjamen interviewed Mansur Nurullah, a San Francisco grad student and cabbie, who became an Uber driver when the startup disrupted the taxi business to the point where he could no longer profitably drive a cab. Nurullah needed a car to become an Uber driver, but balked when the company steered him towards a 27% interest auto loan. (Uber’s lending partner, Santander, is under investigation for predatory lending. And Uber loans explicitly prohibit the vehicles purchased this way from being used for personal use… or for a competing service.)

The capital’s all yours to provide, and the risk is all yours to assume. Benjamen and Andrew never discuss whether the podcast will pay legal fees if Andrew’s arrested for solicitation while working his Manservants gig. But the rules within the 1099 economy are well established: if you park illegally while making a delivery for Postmates, the fine is yours to pay. Andrew shares a great exchange he has with his Postmates dispatcher as they try to calculate the smallest parking ticket he could risk to make an order. (Dispatch suggests he park in a driveway, because it will take longer for the homeowner to call the police or a tow truck, but makes clear that he can’t offer advice, as it’s the driver’s problem, not the company’s.)

Contractors provide the capital and assume the risk, while the companies collect the profits and the investments. But that’s not the core insight of Instaserfs – it’s that this blatantly unfair arrangement isn’t news to most working people.

Andrew interviews Brooklyn, a Taskrabbit worker and advocate for the sharing economy, who tells him she left a six figure job to have more control, freedom and flexibility. He’s hired Brooklyn to help him make a viral video protesting the 1099 economy. Instead, she sets him straight. As they talk, Andrew realizes, “What I find horrible about the sharing economy is what most Americans have been dealing with in the workplace for decades.” And Brooklyn replies, “Welcome to life. As a black, gay female, I have been dealing with this since I was born.”

Uncertain work hours, unpredictable income, onerous workplace rules, no benefits and zero job security? That’s a reality of the American workplace that Barbara Ehrenreich documented in Nickel and Dimed, which Benjamen evokes in Instaserfs, hoping to extend her critiques to this proposed future. But if the working conditions and uncertainty of the 1099 economy aren’t new, the aspirational tone is. For the most part, low wage jobs don’t ask you to consider yourself an entrepreneur. They have their own ways of transferring cost and risk to you, but at least they don’t transfer blame. When you fail as a low wage worker, you fail because you’re living in a country that doesn’t mandate a living wage, and until recently, didn’t provide basic universal healthcare. Slowly, all too slowly, Americans are waking up to the reality that the deck is stacked against the working poor, that paying rent would require 80-120 hours a week of minimum wage work in most states.

But in the 1099 economy, you’re an entrepreneur. Your success or failure depends on your skill, your hustle and your drive. That company offering predatory loans and flooding the streets with drivers competing for your passengers is valued at $50 billion (larger than 80% of the top 500 S&P companies) and will be the hottest IPO in years when it inevitably goes public.

Instaserfs is the tale of two well-educated white guys discovering what people with fewer advantages have known for decades: the game is rigged. Fortunately, Andrew is not going to be a delivery man for much longer – he’s a talented video producer whose skills should lead him to a less precarious freelance existence. The question is whether listeners to this excellent series will see the connections between the new exploitation economy and the old exploitation economy, and work towards a future of work where fewer people can rent manservants at $125 an hour, and fewer people need new shoes to work those servant jobs.

by Ethan at July 21, 2015 12:56 AM

July 20, 2015

Center for Research on Computation and Society (Harvard SEAS)
Postdoc Babis Tsourakakis to present poster at ICERM
July 20, 2015

Postdoc Babis Tsourakakis will present his work at the Institute for Computational and Experimental Research in Mathematics (ICERM), July 28-30, 2015. In particular, Babis will present his poster in the  "Mathematics in Data Science" workshop, which explores the role of mathematical sciences in the evolving data science discipline. 

by kmavon at July 20, 2015 02:27 PM

Berkman Center front page
Berkman Buzz: July 20, 2015


Teens and screens; digital disintermediation; Hey, Internet Monitor!

READ: Community voices

Don't blame the screens. It's not that kids are obsessed with technology, argues danah boyd in her piece for The New York Times. It's that they're addicted to interaction with their friends. And rather than letting them roam, she says, we're raising them in captivity. 

Dissing digital disintermediation. The Internet in many ways has cut out the middle man, allowing creatives to reach huge audiences without relying on old-school institutions. Or has it? Leora Kornfeld and a dissatisfied student discuss.    

Deciding the future of the Internet. The online experience is changing rapidly, and not necessarily for the better, explains Jonathan Zittrain in this Big Think video. Your classic web surfing session where "there's lots of baskets with eggs all over the place and by just clicking on a link you can visit that new basket and not even feel the burdens of the journey," likely won't be the norm in a less open, app-driven future. 

Learning from Ebola. William Fisher and Quentin Palfrey assert in their post for IP Watch that there are steps we can take to better prepare for the next outbreak and save lives, including improving incentives for R&D, greater coordination and collaboration among agencies, and stockpiling effective drugs and vaccines. 

Tend your bridges. When Reddit fired a popular moderator -- one of the few employees who interacted with the online community regularly -- "her departure was seen not just as a loss but as a betrayal, as if RedditCo thinks it needs no bridges," explains David Weinberger in this article he co-wrote for the Harvard Business Review.
WATCH: An intern explains Internet Monitor

In this new video, intern Elizabeth Gillis explores Berkman's Internet Monitor project and describes why you should check it out, too. 
LISTEN: Fiber City

Why are over 450 towns in the US building their own high speed Internet networks?

On this week's show we talk about municipal fiber: what it is, why it matters, who's doing it and how. And we learn what happens when municipal utilities and companies compete to provide local Internet service.

In our orbit

Subscribe to the Buzz

Browse past editions


by gweber at July 20, 2015 01:51 PM

David Weinberger
How far wrong has the Net gone? A podcast with Mitch Joel

My friend Mitch Joel and I talk for about an hour (sorry) about whether our hopes for the Net have proven to be forlorn. You can listen here.

The spur for this conversation was my recent article in The Atlantic, “The Net that Was (and Still Could Be).”

The post How far wrong has the Net gone? A podcast with Mitch Joel appeared first on Joho the Blog.

by davidw at July 20, 2015 12:38 PM

Bruce Schneier
Google's Unguessable URLs

Google secures photos using public but unguessable URLs:

So why is that public URL more secure than it looks? The short answer is that the URL is working as a password. Photos URLs are typically around 40 characters long, so if you wanted to scan all the possible combinations, you'd have to work through 10^70 different combinations to get the right one, a problem on an astronomical scale. "There are enough combinations that it's considered unguessable," says Aravind Krishnaswamy, an engineering lead on Google Photos. "It's much harder to guess than your password."

It's a perfectly valid security measure, although unsettling to some.
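To put the quoted scale in context, here's a minimal sketch of the brute-force math and of how a service might mint a token like this. It assumes a 64-symbol URL-safe alphabet, which is my illustrative assumption, not Google's published design:

```python
import math
import secrets

# Brute-force search space for a 40-character identifier drawn from a
# 64-symbol URL-safe alphabet (letters, digits, '-', '_').
combinations = 64 ** 40
print(f"about 10^{int(math.log10(combinations))} possibilities")

# Minting an unguessable token of that length: 30 random bytes encode
# to exactly 40 base64url characters, with no padding.
token = secrets.token_urlsafe(30)
print(len(token))  # 40
```

With this alphabet the space is closer to 10^72 than the article's 10^70; the exact exponent depends on which characters the URL scheme actually uses, but either way exhaustive guessing is hopeless.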

by Bruce Schneier at July 20, 2015 10:25 AM

July 19, 2015

David Weinberger
Wikipedia is too hard: A suggestion

Frequently we consult encyclopedias because a concept came up in conversation or in something we’re reading, and we need to know just enough about it to be able to move on. But it seems to me that more and more frequently Wikipedia’s explanations are too hard and too detailed for this.

For example, if Planck’s Constant came up in something I was reading and I needed to know just enough to make sense of it, here’s how Wikipedia begins its explanation:

Wikipedia first paragraph about Planck's Constant

That may be fine for a physics student, but I need something more like this:

Simple Wikipedia's Planck Constant explanation

Much better.

That happens to come from the Simple Wikipedia. If the article you’re looking at has an en.wikipedia.org address, replace the “en” with “simple” (simple.wikipedia.org) and often you’ll get a far more intelligible answer. Well, “often” means 113,937 articles in English so far.

One of the reasons Simple Wikipedia’s opening paragraphs are clearer than Regular Old Wikipedia’s is that Regular’s explanations often think that links replace explanations: you don’t have to explain “proportionality constant” if you link it to its Wikipedia article. That’s great for browsing on a quiet Sunday afternoon, but not great if you’re looking up something in service of understanding something else. Linking instead of explaining seems to me to be lazy.

So here’s a request for someone to write a browser extension that, when you hover over a link in a WP page, pops up the first paragraph of the linked article. If there’s a Simple WP version, it should pop up that first paragraph. Getting an explanation without leaving the page is not just a convenience. It would help preserve the reading experience and improve comprehension.
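The plumbing such an extension would need is small. Here's a sketch in Python of the two pieces of URL logic: the address rewriting is exactly the "en" → "simple" swap described above, while the `page/summary` endpoint for fetching just a lead paragraph is my assumption based on Wikimedia's public REST API, not something specified here:

```python
def to_simple(url: str) -> str:
    """Point an English Wikipedia article link at its Simple English twin."""
    return url.replace("://en.wikipedia.org/", "://simple.wikipedia.org/", 1)

def summary_endpoint(url: str) -> str:
    """Build the URL that returns only an article's lead section --
    the text the extension would show in a hover popup."""
    host, _, title = url.partition("/wiki/")
    return f"{host}/api/rest_v1/page/summary/{title}"

link = "https://en.wikipedia.org/wiki/Planck_constant"
print(summary_endpoint(to_simple(link)))
# https://simple.wikipedia.org/api/rest_v1/page/summary/Planck_constant
```

A real extension would attach this to a hover listener and render the returned extract in a tooltip, falling back to the regular English article when no Simple version exists.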

If this also encouraged writing first paragraphs that are clear enough that they let us get a quick hit of understanding and then move on, so much the better.

In fact, if I were King of Wikipedia, I’d take the first paragraphs of all 113,937 Simple Wikipedia articles and make them the first paragraph of the articles of which they are the simplifications. And then I would retire to my Wiki Castle and drink some wiki mead.

As I was poking around for a bad example of a first paragraph, I came across many good examples. Here’s just one:

In the late 19th century, luminiferous aether, æther or ether, meaning light-bearing aether, was the postulated medium for the propagation of light. It was invoked to explain the ability of the apparently wave-based light to propagate through empty space, something that waves should not be able to do.

Got it! Thank you, Wikipedia!

The post Wikipedia is too hard: A suggestion appeared first on Joho the Blog.

by davidw at July 19, 2015 02:42 PM

July 17, 2015

Bruce Schneier
Friday Squid Blogging: Squid Giving Birth

I may have posted this short video before, but if I did, I can't find it. It's four years old, but still pretty to watch.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at July 17, 2015 09:09 PM

Using Secure Chat

Micah Lee has a good tutorial on installing and using secure chat.

To recap: We have installed Orbot and connected to the Tor network on Android, and we have installed ChatSecure and created an anonymous secret identity Jabber account. We have added a contact to this account, started an encrypted session, and verified that their OTR fingerprint is correct. And now we can start chatting with them with an extraordinarily high degree of privacy.

FBI Director James Comey, UK Prime Minister David Cameron, and totalitarian governments around the world all don't want you to be able to do this.

by Bruce Schneier at July 17, 2015 11:35 AM

July 16, 2015

Bruce Schneier
ProxyHam Canceled

The ProxyHam project (and associated Def Con talk) has been canceled under mysterious circumstances. No one seems to know anything, and conspiracy theories abound.

by Bruce Schneier at July 16, 2015 04:00 PM

Kate Krontiris
Immigration System, Meet the Digital Age

For the past few months, I’ve had the honor of conducting user research on behalf of the United States Digital Service.  Born from the recognition that solid policy implementation in today’s age requires stellar digital service delivery (ahem), the White House has assembled a crack team of digital experts to work on priority policy issues in collaboration with agencies across the federal government.

Last November, President Obama enacted a set of Executive Actions meant to improve the immigration system. In one, he requested that the federal agencies that administer our legal immigration system explore ways to modernize and streamline the activities, processes, systems, and service delivery. The goal was to “develop recommendations to bring the system into the 21st century.”


As an ethnographer, I was asked to help illuminate what it is actually like to apply for an immigrant visa to the United States.  People who seek to reunite with family, start a new career, or simply pursue a crazy dream need to secure this kind of visa – and depending on where you live, it can be a very challenging process.

So what did USDS do?  They – and the State Department – sent me to the places where people are actually applying for these visas to learn directly from the source what their experiences are.  In essence, our government conducted user-centered policy making – the first time in my career that I’ve seen the U.S. Government use this approach.

Now, this is a process that has a lot of different “users,” including all of the dedicated public servants whose responsibility it is to keep the trains on the track.  In addition to talking to people who are applying for visas, I spent time with the people who adjudicate their requests, examining their technical systems, service delivery mechanisms, and organizational structures.  As the report describes, I got to spend an equal amount of time shadowing visa seekers during their consular interviews, and interviewing them at their homes about their experiences.

The recommendations represent what the key agencies have committed to addressing moving forward, but I am particularly proud of the principles that informed these recommendations, highlighted on page 21 of the full report.  While they may seem obvious to the private sector, these ideas have only recently taken root in the public sector and it is significant that the Executive Office has chosen an approach that embodies a core human lens.

First, understand what people need. USDS began an assessment by exploring and pinpointing the needs of the people who use immigrant visa services, and the ways the service fits into their lives. Additionally, USDS looked at government collaborators as “users” of their own processes, and asked for insights about what could be improved. The needs of these users will inform technical and design decisions. 

Second, address the whole experience, from start to finish. Applicants are often overwhelmed by the multiple agencies that play a role in their immigration process. Integrating the activities of all the relevant agencies serves to minimize confusion for the user while streamlining the adjudication process to eliminate redundancy and increase efficiency. 

Third, make the process clear, simple, and intuitive, so that users succeed the first time, unaided. It is necessary to make our process as clear and simple as possible so that individuals understand the process, are fully prepared when they make their request, and can apply for the immigration benefit for which they qualify. 

Finally, be consistent by using the same language and design patterns when building digital services whenever possible. By creating consistency within design patterns, users become familiar with the services offered and can make reasonable assumptions and guesses regarding their next steps in the process. This is a principle that our peers in the United Kingdom Government Digital Service have emphasized and that is particularly relevant for the global nature of the responsibilities of State and DHS. Setting consistent goals as a government will empower agencies and consular posts to customize their process to meet local circumstances.

A blog post about the work from the White House is available here, and the full report can be read on a commute home from work.  Finally, here’s an easy-to-read overview from Wired (making the immigration process “suck less” seems like an accurate description of the goal).

I look forward to the phase in which we see these recommendations implemented, so that we might truly realize an immigration system that fully enables human potential and builds a better United States of America. 

July 16, 2015 01:58 AM

July 15, 2015

Meet The Sarahs: A New Audio Fiction Competition

It’s time audio fiction had its own red carpet

Introducing The Sarah Lawrence College International Audio Fiction Award

The Sarah Awards will celebrate and reward the best audio fiction works from around the world with $3,500 worth of prize money and an awards ceremony in New York in Spring 2016.

Get the guidelines, then get creative!

The early bird submission timeline is Nov. 23 – Dec. 21, so you have plenty of time to dig in and put your best fiction forward.

The Sarahs also includes:

BONUS: Winners of The Sarahs and the Very, Very, Short, Short Stories Contest will be featured on PRX Remix — PRX’s 24/7 stream of the best independently created audio stories — airing online, on SiriusXM 123, and on broadcast stations around the country.

The post Meet The Sarahs: A New Audio Fiction Competition appeared first on PRX.

by Genevieve at July 15, 2015 09:01 PM

Justin Reich
Digital Portfolios: The Art of Reflection
Beyond curating and publishing, digital portfolios play a critical role in helping students to determine not only what they learned but also how and why.

by Beth Holland at July 15, 2015 05:04 PM

Radio Berkman 223: Fiber City
Why are over 450 towns in the US building their own high speed Internet networks? Let’s look at the example of the small town of Holyoke, Massachusetts. A few years back the town’s mayor asked if the local cable or telephone companies wanted to build a fiber network to serve […]

by Berkman Center for Internet & Society at Harvard Law School at July 15, 2015 04:07 PM

Berkman Center front page
Radio Berkman 223: Fiber City


What happens when municipal utilities and companies compete to provide local Internet services? Find out on this week's podcast.


Why are over 450 towns in the US building their own high speed Internet networks?

Let's look at the example of the small town of Holyoke, Massachusetts.

A few years back the town's mayor asked if the local cable or telephone companies wanted to build a fiber network to serve local schools and municipal buildings. The companies declined. The project was turned over to the local gas and electric utility, HG&E. Eighteen years later, HG&E have expanded this network to serve local businesses, and even other towns in the area. And it turns out this investment has more than paid for itself.

On this week's episode we talk about what happens when municipal utilities and companies compete to provide local Internet services.

Read the report: Holyoke: A Massachusetts Municipal Light Plant Seizes Internet Access Business Opportunities.

by djones at July 15, 2015 03:36 PM

Olivier Sylvain on Network Equality [AUDIO]
One of the few clear priorities of the federal Communications Act is to ensure that all Americans have reasonably comparable access to the Internet without respect to whom or where they are. Yet, in spite of this, the main focus of policymakers and legal scholars in Internet policy today has been on promoting innovation, a […]

by Berkman Center for Internet & Society at Harvard Law School at July 15, 2015 02:18 PM

Bruce Schneier
Crypto-Gram Is Moving

If you subscribe to my monthly e-mail newsletter, Crypto-Gram, you need to read this.

Sometime between now and the August issue, the Crypto-Gram mailing list will be moving to a new host. When the move happens, you'll get an e-mail asking you to confirm your subscription. In the e-mail will be a link that you will have to click in order to join the new list. The link will go to -- that's the new host -- not to It's just the one click, and you won't be asked for any additional information.

(Yes, I am asking you all to click on a link you've received in e-mail. The fact that I'm writing about this in Crypto-Gram and posting about it on this blog is the best confirmation I can provide.)

If for any reason you don't want to receive Crypto-Gram anymore, just don't click the confirmation link, and you'll automatically drop off the list.

I'll post updates on the status of the move on the main list page.

by Bruce Schneier at July 15, 2015 07:15 AM

July 14, 2015

Bruce Schneier
More about the NSA's XKEYSCORE

I've been reading through the 48 classified documents about the NSA's XKEYSCORE system released by the Intercept last week. From the article:

The NSA's XKEYSCORE program, first revealed by The Guardian, sweeps up countless people's Internet searches, emails, documents, usernames and passwords, and other private communications. XKEYSCORE is fed a constant flow of Internet traffic from fiber optic cables that make up the backbone of the world's communication network, among other sources, for processing. As of 2008, the surveillance system boasted approximately 150 field sites in the United States, Mexico, Brazil, United Kingdom, Spain, Russia, Nigeria, Somalia, Pakistan, Japan, Australia, as well as many other countries, consisting of over 700 servers.

These servers store "full-take data" at the collection sites -- meaning that they captured all of the traffic collected -- and, as of 2009, stored content for 3 to 5 days and metadata for 30 to 45 days. NSA documents indicate that tens of billions of records are stored in its database. "It is a fully distributed processing and query system that runs on machines around the world," an NSA briefing on XKEYSCORE says. "At field sites, XKEYSCORE can run on multiple computers that gives it the ability to scale in both processing power and storage."

There seem to be no access controls at all restricting how analysts can use XKEYSCORE. Standing queries -- called "workflows" -- and new fingerprints have an approval process, presumably for load issues, but individual queries are not approved beforehand and may only be audited after the fact. These are things which are supposed to be low latency, and you can't have an approval process for low latency analyst queries. Since a query can get at the recorded raw data, a single query is effectively a retrospective wiretap.

All this means that the Intercept is correct when it writes:

These facts bolster one of Snowden's most controversial statements, made in his first video interview published by The Guardian on June 9, 2013. "I, sitting at my desk," said Snowden, could "wiretap anyone, from you or your accountant, to a federal judge to even the president, if I had a personal email."

You'll only get the data if it's in the NSA's databases, but if it is there you'll get it.

Honestly, there's not much in these documents that's a surprise to anyone who studied the 2013 XKEYSCORE leaks and knows what can be done with a highly customizable Intrusion Detection System. But it's always interesting to read the details.

One document -- "Intro to Context Sensitive Scanning with X-KEYSCORE Fingerprints" (2010) -- talks about some of the queries an analyst can run. A sample scenario: "I want to look for people using Mojahedeen Secrets encryption from an iPhone" (page 6).

Mujahedeen Secrets is an encryption program written by al Qaeda supporters. It has been around since 2007. Last year, Stuart Baker cited its increased use as evidence that Snowden harmed America. I thought the opposite, that the NSA benefits from al Qaeda using this program. I wrote: "There's nothing that screams 'hack me' more than using specially designed al Qaeda encryption software."

And now we see how it's done. In the document, we read about the specific XKEYSCORE queries an analyst can use to search for traffic encrypted by Mujahedeen Secrets. Here are some of the program's fingerprints (page 10):


So if you want to search for all iPhone users of Mujahedeen Secrets (page 33):


fingerprint('encryption/mojahdeen2') and fingerprint('browser/cellphone/iphone')

Or you can search for the program's use in the encrypted text, because (page 37): "...many of the CT Targets are now smart enough not to leave the Mojahedeen Secrets header in the E-mails they send. How can we detect that the E-mail (which looks like junk) is in fact Mojahedeen Secrets encrypted text." Summary of the answer: there are lots of ways to detect the use of this program that users can't detect. And you can combine the use of Mujahedeen Secrets with other identifiers to find targets. For example, you can specifically search for the program's use in extremist forums (page 9). (Note that the NSA wrote that comment about Mujahedeen Secrets users increasing their opsec in 2010, two years before Snowden supposedly told them that the NSA was listening in on their communications. Honestly, I would not be surprised if the program turned out to have been a US operation to get Islamic radicals to make their traffic stand out more easily.)

It's not just Mujahedeen Secrets. Nicholas Weaver explains how you can use XKEYSCORE to identify co-conspirators who are all using PGP.
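Weaver's point is easy to illustrate: OpenPGP's ASCII-armored messages carry fixed marker lines (defined in RFC 4880), so a passive monitor needs nothing fancier than a pattern match to flag everyone using the format. A short Python sketch of that kind of fingerprint -- the generic technique, not XKEYSCORE's actual rule syntax:

```python
import re

# OpenPGP ASCII armor (RFC 4880) wraps messages in fixed marker lines,
# which makes the format trivially fingerprintable in passing traffic.
PGP_ARMOR = re.compile(r"-----BEGIN PGP (MESSAGE|PUBLIC KEY BLOCK)-----")

def fingerprint_pgp(payload: str) -> bool:
    """Return True if the payload looks like ASCII-armored PGP."""
    return PGP_ARMOR.search(payload) is not None

mail_body = "hi,\n-----BEGIN PGP MESSAGE-----\nhQEMA...\n-----END PGP MESSAGE-----\n"
plain_body = "see you at 5"
```

Once every PGP user in a traffic stream is flagged this way, correlating who exchanges armored messages with whom is an ordinary database join -- which is how co-conspirators fall out of the data.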

And these searches are just one example. Other examples from the documents include:

  • "Targets using from behind a large Iranian proxy" (here, page 7).

  • Usernames and passwords of people visiting (here, page 26 and following).

  • People in Pakistan visiting certain German-language message boards (here, page 1).

  • HTTP POST traffic from Russia in the middle of the night -- useful for finding people trying to steal our data (here, page 16).

  • People doing web searches on jihadist topics from Kabul (here).

E-mails, chats, web-browsing traffic, pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, file uploads to online services, Skype sessions and more: if you can figure out how to form the query, you can ask XKEYSCORE for it. For an example of how complex the searches can be, look at this XKEYSCORE query published in March, showing how New Zealand used the system to spy on the World Trade Organization: automatically track any email body with any particular WTO-related content for the upcoming election. (Good new documents to read include this, this, and this.)

I always read these NSA documents with an assumption that other countries are doing the same thing. The NSA is not made of magic, and XKEYSCORE is not some super-advanced NSA-only technology. It is the same sort of thing that every other country would use with its surveillance data. For example, Russia explicitly requires ISPs to install similar monitors as part of its SORM Internet surveillance system. As a home user, you can build your own XKEYSCORE using the public-domain Bro Security Monitor and the related Network Time Machine attached to a back-end data-storage system. (Lawrence Berkeley National Laboratory uses this system to store three months' worth of Internet traffic for retrospective surveillance -- it used the data to study Heartbleed.) The primary advantage the NSA has is that it sees more of the Internet than anyone else, and spends more money to store the data it intercepts for longer than anyone else. And if these documents explain XKEYSCORE in 2009 and 2010, expect that it's much more powerful now.

Back to encryption and Mujahedeen Secrets. If you want to stay secure, whether you're trying to evade surveillance by Russia, China, the NSA, criminals intercepting large amounts of traffic, or anyone else, try not to stand out. Don't use some homemade specialized cryptography that can be easily identified by a system like this. Use reasonably strong encryption software on a reasonably secure device. If you trust Apple's claims (pages 35-6), use iMessage and FaceTime on your iPhone. I really like Moxie Marlinspike's Signal for both text and voice, but worry that it's too obvious because it's still rare. Ubiquitous encryption is the bane of listeners worldwide, and it's the best thing we can deploy to make the world safer.

by Bruce Schneier at July 14, 2015 03:45 PM

The Risks of Mandating Backdoors in Encryption Products

Tuesday, a group of cryptographers and security experts released a major paper outlining the risks of government-mandated back-doors in encryption products: Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, by Hal Abelson, Ross Anderson, Steve Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter Neumann, Ron Rivest, Jeff Schiller, Bruce Schneier, Michael Specter, and Danny Weitzner.

Abstract: Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today's Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

It's already had a big impact on the debate. It was mentioned several times during yesterday's Senate hearing on the issue (see here).

Three blog posts by authors. Four different news articles, and this analysis of how the New York Times article changed. Also, a New York Times editorial.

EDITED TO ADD (7/9): Peter Swire's Senate testimony is worth reading.

EDITED TO ADD (7/10): Good article on these new crypto wars.

EDITED TO ADD (7/14): Two rebuttals, neither very convincing.

by Bruce Schneier at July 14, 2015 03:31 PM

Human and Technology Failures in Nuclear Facilities

This is interesting:

We can learn a lot about the potential for safety failures at US nuclear plants from the July 29, 2012, incident in which three religious activists broke into the supposedly impregnable Y-12 facility at Oak Ridge, Tennessee, the Fort Knox of uranium. Once there, they spilled blood and spray painted "work for peace not war" on the walls of a building housing enough uranium to build thousands of nuclear weapons. They began hammering on the building with a sledgehammer, and waited half an hour to be arrested. If an 82-year-old nun with a heart condition and two confederates old enough to be AARP members could do this, imagine what a team of determined terrorists could do.


Where some other countries often rely more on guards with guns, the United States likes to protect its nuclear facilities with a high-tech web of cameras and sensors. Under the Nunn-Lugar program, Washington has insisted that Russia adopt a similar approach to security at its own nuclear sites­ -- claiming that an American cultural preference is objectively superior. The Y-12 incident shows the problem with the American approach of automating security. At the Y-12 facility, in addition to the three fences the protestors had to cut through with wire-cutters, there were cameras and motion detectors. But we too easily forget that technology has to be maintained and watched to be effective. According to Munger, 20 percent of the Y-12 cameras were not working on the night the activists broke in. Cameras and motion detectors that had been broken for months had gone unrepaired. A security guard was chatting rather than watching the feed from a camera that did work. And guards ignored the motion detectors, which were so often set off by local wildlife that they assumed all alarms were false positives....

Instead of having government forces guard the site, the Department of Energy had hired two contractors: Wackenhut and Babcock and Wilcox. Wackenhut is now owned by the British company G4S, which also botched security for the 2012 London Olympics, forcing the British government to send 3,500 troops to provide security that the company had promised but proved unable to deliver. Private companies are, of course, driven primarily by the need to make a profit, but there are surely some operations for which profit should not be the primary consideration.

Babcock and Wilcox was supposed to maintain the security equipment at the Y-12 site, while Wackenhut provided the guards. Poor communication between the two companies was one reason sensors and cameras were not repaired. Furthermore, Babcock and Wilcox had changed the design of the plant's Highly Enriched Uranium Materials Facility, making it a more vulnerable aboveground building, in order to cut costs. And Wackenhut was planning to lay off 70 guards at Y-12, also to cut costs.

There's an important lesson here. Security is a combination of people, process, and technology. All three have to be working in order for security to work.

Slashdot thread.

by Bruce Schneier at July 14, 2015 10:53 AM

Hacking Team Is Hacked

Someone hacked the cyberweapons arms manufacturer Hacking Team and posted 400 GB of internal company data.

Hacking Team is a pretty sleazy company, selling surveillance software to all sorts of authoritarian governments around the world. Reporters Without Borders calls it one of the enemies of the Internet. Citizen Lab has published many reports about their activities.

It's a huge trove of data, including a spreadsheet listing every government client, when they first bought the surveillance software, and how much money they have paid the company to date. Not surprisingly, the company has been lying about who its customers are. Chris Soghoian has been going through the data and tweeting about it. More Twitter comments on the data here. Here are articles from Wired and The Guardian.

Here's the torrent, if you want to look at the data yourself. (Here's another mirror.) The source code is up on Github.

I expect we'll be sifting through all the data for a while.

Slashdot thread. Hacker News thread.

EDITED TO ADD: The Hacking Team CEO, David Vincenzetti, doesn't like me:

In another [e-mail], the Hacking Team CEO on 15 May claimed renowned cryptographer Bruce Schneier was "exploiting the Big Brother is Watching You FUD (Fear, Uncertainty and Doubt) phenomenon in order to sell his books, write quite self-promoting essays, give interviews, do consulting etc. and earn his hefty money."

Meanwhile, Hacking Team has told all of its customers to shut down all uses of its software. They are in "full on emergency mode," which is perfectly understandable.

EDITED TO ADD: Hacking Team had no exploits for an un-jail-broken iPhone. Seems like the platform of choice if you want to stay secure.

EDITED TO ADD (7/14): WikiLeaks has published a huge trove of e-mails.

Hacking Team had a signed iOS certificate, which has been revoked.

by Bruce Schneier at July 14, 2015 10:47 AM

July 13, 2015

Bruce Schneier
NSA Antennas

Interesting article on the NSA's use of multi-beam antennas for surveillance. Certainly smart technology; it can eavesdrop on multiple targets per antenna.

I'm surprised by how behind the NSA was on this technology. It's from at least 1973, and there was some commercialization as far back as 1981. Why did it take the NSA/GCHQ until 2010 to install this?

Here's a modern supplier.

by Bruce Schneier at July 13, 2015 08:01 PM

High-tech Cheating on Exams

India is cracking down on people who use technology to cheat on exams:

Candidates have been told to wear light clothes with half-sleeves, and shirts that do not have big buttons.

They cannot wear earrings and carry calculators, pens, handbags and wallets.

Shoes have also been discarded in favour of open slippers.

In India students cheating in exams have been often found concealing Bluetooth devices and mobile SIM cards that have been stitched to their shirts.

I haven't heard much about this sort of thing in the US or Europe, but I assume it's happening there too.

by Bruce Schneier at July 13, 2015 07:48 PM

Organizational Doxing

Recently, WikiLeaks began publishing over half a million previously secret cables and other documents from the Foreign Ministry of Saudi Arabia. It's a huge trove, and already reporters are writing stories about the highly secretive government.

What Saudi Arabia is experiencing isn't common but part of a growing trend.

Just last week, unknown hackers broke into the network of the cyber-weapons arms manufacturer Hacking Team and published 400 gigabytes of internal data, describing, among other things, its sale of Internet surveillance software to totalitarian regimes around the world.

Last year, hundreds of gigabytes of Sony's sensitive data was published on the Internet, including executive salaries, corporate emails and contract negotiations. The attacker in this case was the government of North Korea, which was punishing Sony for producing a movie that made fun of its leader. In 2010, the U.S. cyberweapons arms manufacturer HBGary Federal was a victim, and its attackers were members of a loose hacker collective called LulzSec.

Edward Snowden stole a still-unknown number of documents from the National Security Agency in 2013 and gave them to reporters to publish. Chelsea Manning stole three-quarters of a million documents from the U.S. State Department and gave them to WikiLeaks to publish. The person who stole the Saudi Arabian documents might also be a whistleblower and insider but is more likely a hacker who wanted to punish the kingdom.

Organizations are increasingly getting hacked, and not by criminals wanting to steal credit card numbers or account information in order to commit fraud, but by people intent on stealing as much data as they can and publishing it. Law professor and privacy expert Peter Swire refers to "the declining half-life of secrets." Secrets are simply harder to keep in the information age. This is bad news for all of us who value our privacy, but there's a hidden benefit when it comes to organizations.

The decline of secrecy means the rise of transparency. Organizational transparency is vital to any open and free society.

Open government laws and freedom of information laws let citizens know what the government is doing, and enable them to carry out their democratic duty to oversee its activities. Corporate disclosure laws perform similar functions in the private sphere. Of course, both corporations and governments have some need for secrecy, but the more they can be open, the more we can knowledgeably decide whether to trust them.

This makes the debate more complicated than simple personal privacy. Publishing someone's private writings and communications is bad, because in a free and diverse society people should have private space to think and act in ways that would embarrass them if public.

But organizations are not people and, while there are legitimate trade secrets, their information should otherwise be transparent. Holding government and corporate private behavior to public scrutiny is good.

Most organizational secrets are only valuable for a short term: negotiations, new product designs, earnings numbers before they're released, patents before filing, and so on.

Forever secrets, like the formula for Coca-Cola, are few and far between. The one exception is embarrassments. If an organization had to assume that anything it did would become public in a few years, people within that organization would behave differently.

The NSA would have had to weigh its collection programs against the possibility of public scrutiny. Sony would have had to think about how it would look to the world if it paid its female executives significantly less than its male executives. HBGary would have thought twice before launching an intimidation campaign against a journalist it didn't like, and Hacking Team wouldn't have lied to the UN about selling surveillance software to Sudan. Even the government of Saudi Arabia would have behaved differently. Such embarrassment might be the first significant downside of hiring a psychopath as CEO.

I don't want to imply that this forced transparency is a good thing, though. The threat of disclosure chills all speech, not just illegal, embarrassing, or objectionable speech. There will be less honest and candid discourse. People in organizations need the freedom to write and say things that they wouldn't want to be made public.

State Department officials need to be able to describe foreign leaders, even if their descriptions are unflattering. Movie executives need to be able to say unkind things about their movie stars. If they can't, their organizations will suffer.

With few exceptions, our secrets are stored on computers and networks vulnerable to hacking. It's much easier to break into networks than it is to secure them, and large organizational networks are very complicated and full of security holes. Bottom line: If someone sufficiently skilled, funded and motivated wants to steal an organization's secrets, they will succeed. This includes hacktivists (HBGary Federal, Hacking Team), foreign governments (Sony), and trusted insiders (State Department and NSA).

It's not likely that your organization's secrets will be posted on the Internet for everyone to see, but it's always a possibility.

Dumping an organization's secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

Both governments and corporations need to assume that their secrets are more likely to be exposed, and exposed sooner, than ever. They should do all they can to protect their data and networks, but have to realize that their best defense might be to refrain from doing things that don't look good on the front pages of the world's newspapers.

This essay previously appeared on I didn't use the term "organizational doxing," though, because it would be too unfamiliar to that audience.

by Bruce Schneier at July 13, 2015 07:44 PM

Amazon Is Analyzing the Personal Relationships of Its Reviewers

This is an interesting story of a reviewer who had her review deleted because Amazon believed she knew the author personally.

Leaving completely aside the ethics of friends reviewing friends' books, what is Amazon doing conducting this kind of investigative surveillance? Do reviewers know that Amazon is keeping tabs on who their friends are?

by Bruce Schneier at July 13, 2015 07:12 PM

David Weinberger
What open APIs could do for the news

In 2008-9, NPR, the NY Times, and The Guardian opened up public APIs, hoping that it would spur developers around the world to create wonderful and weird apps that would make use of their metadata and spread the availability of news.

Very little happened. By any normal measure, the experiment would have to be deemed a failure.

These three news organizations are nevertheless fervid evangelists for the same APIs—for internal use. They provide an abstraction layer that makes the news media’s back ends far easier to maintain without disrupting their availability to users, they enable these organizations to adapt to new devices and workflows insanely quickly, they facilitate strategic partnerships, they lower the risk of experimentation, and more.
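The abstraction-layer benefit is concrete: one internal content API can feed any number of front ends, so a redesign or a new device touches only a renderer, never the back end. A minimal sketch in Python of that structure -- purely illustrative, with names of my own; nothing here comes from the actual NPR, Times, or Guardian systems:

```python
# One canonical content store behind a single internal API;
# every presentation layer consumes the same shape.
ARTICLES = [
    {"id": 1, "headline": "Open APIs and the news", "tags": ["apis"]},
]

def get_article(article_id):
    """The single internal API that every front end talks to."""
    return next(a for a in ARTICLES if a["id"] == article_id)

# Two independent renderers. Adding a third (a watch app, a voice
# assistant) requires no change to the API or the underlying store.
def render_web(article_id):
    a = get_article(article_id)
    return f"<h1>{a['headline']}</h1>"

def render_mobile(article_id):
    a = get_article(article_id)
    return {"title": a["headline"], "tags": a["tags"]}
```

The design choice being illustrated: the renderers know nothing about how articles are stored, which is exactly what lets a newsroom re-platform "insanely quickly."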

This was the topic of the paper I wrote during my fellowship at The Shorenstein Center. The paper then looks at ways we might still get to the open ecosystem for news that was first envisioned.

The full paper is available freely at the Shorenstein site.

There’s an op-ed length version at Nieman Reports.

The post What open APIs could do for the news appeared first on Joho the Blog.

by davidw at July 13, 2015 06:33 PM

Berkman Center front page
Berkman Buzz: July 13, 2015


Worries about the "great glitch," reasons for stealing data, big ideas about the future and the Internet, and more... in this week's Buzz.

Berkman Center News
We're excited about the release of the new report, "Holyoke: A Massachusetts Municipal Light Plant Seizes Internet Access Business Opportunities." As part of this initiative, we also hosted a successful symposium last week that brought together dozens of the state's municipal electric utilities.
READ: Community voices

"We are building skyscraper favelas in code — in earthquake zones." Zeynep Tufekci explains that while it's nice to know that the "great glitch" of July 8th (which included a downed United fleet, a dark NYSE, and an offline WSJ site) wasn't due to cyberterrorism, we should still be worried because, well, "software sucks."

The growing trend of stealing data... to share it. More often organizations are being hacked not by criminals seeking credit card numbers, "but by people intent on stealing as much data as they can and publishing it," for whistleblowing or revenge, explains Bruce Schneier in an essay for (For example, the recent hacking of the Saudi Arabian Foreign Ministry's internal communications, which you can read more about on the Internet Monitor blog.)

What APIs can do for news. David Weinberger has published a new paper that explores the successes, challenges, and opportunities for news organizations using APIs. In an article for Nieman Reports, he explains how the newsrooms at NPR, The Guardian, and The New York Times were transformed by APIs in unexpected ways.

In case you missed it... Prof. Jonathan Zittrain was all over the Aspen Ideas Festival. Check out his talk from the "Is the Internet taking us where we want to go?" session, a panel discussion on "Data Ethics in the Age of the Quantified Society," and his debate with Andrew Keen titled "Smart Technology -- Future Employer or Job Destroyer?"
How Internet Censorship Works
WATCH: The Web We Want & The Ed We Want
Justin Reich at the Berkman Center on July 7, 2015

Reich is an educational researcher broadly interested in the future of learning in a networked world. In this talk, he highlights some of the exciting innovations within education that seek to put students and learners in charge of their online lives. 
LISTEN: Going Public
A new episode from Radio Berkman

We speak with Nieman Fellow Melody Kramer who's researching what it means to be a member of a public or community radio station. Kramer pulls from examples at stations all over the country of people supporting their public radio stations in non-financial ways, including code and story ideas.

In our orbit

Subscribe to the Buzz



by gweber at July 13, 2015 01:58 PM

Justin Reich
Practical Guidance from MOOC Research: Persistence and Activity
MOOC research has a mixed record on helping us understand which students are likely to persist through a course.

by Justin Reich at July 13, 2015 02:56 AM

July 11, 2015

Eszter Hargittai
Travel blogging: Zürichberg

I was in Zurich last week where my hosts kindly took me to a very nice restaurant on Zürichberg, a hill that offers pretty views and a peaceful environment of fields and forests. In addition to the garden restaurant of the Hotel Zurichberg, there is a terrace as well with even better views. It turns out, Zürichberg is host to all sorts of attractions: FIFA’s headquarters, the Zurich Zoo and a beautiful cemetery where James Joyce is buried.

FIFA’s headquarters greet you with three flags, the middle one proudly proclaiming “My Game is Fair Play.” It’s good that they cleared that up. I was curious to see a sculpture peeking out from behind some trees, but as we tried to enter the FIFA grounds, a security guard stopped us explaining that unless we were children playing in the soccer match nearby or their parents, we could not proceed. Nearby was a guard with a weapon as well, not a common occurrence in Zurich.

The highlight of this area for me was Friedhof Fluntern, a most charming cemetery, if that word is appropriate given the context (as aptly noted by a reviewer on TripAdvisor, "how do you rate a cemetery?"). Given the Swiss context, it is not a huge surprise that the grounds are very orderly. But there is more to it. It feels more like a garden than a cemetery. You can imagine spending time there to get away from the hustle and bustle of the city. A colleague even noted that he sometimes goes there to read. The headstones move past the usual, venturing into the whimsical and artistic. The cemetery is on a hill, which adds to its character. I enjoyed going from row to row trying to peek into the lives of the people buried there through their names, the dates and notes on the stones, and the little sculptures honoring them. See more of my Friedhof Fluntern photos here.

It was too hot to proceed to the Zoo, but having later read that they have Galapagos giant tortoises, I was bummed by my decision to skip it and will be sure to visit next time I am in town.

To get to Zürichberg, take Tram 6 from Central to Zoo, which is a 2-minute walk heading east from the main train station, which is ten minutes from the airport by train. Zurich offers day tickets for its entire public transportation system. The 24 or 72-hour ZürichCARD can also be very beneficial if you plan to visit numerous attractions.

by eszter at July 11, 2015 02:07 PM

July 10, 2015

Bruce Schneier
Friday Squid Blogging: My Little Cephalopod

A cute series of knitted plushies.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at July 10, 2015 09:29 PM

Center for Research on Computation and Society (Harvard SEAS)
Postdoc Jasper Snoek joins Twitter
July 9, 2015

Congratulations to Jasper Snoek! Jasper has joined Twitter, alongside his Whetlab colleagues, to enhance their Machine Learning efforts.

by kmavon at July 10, 2015 08:02 PM

Postdoc Rafael M. Frongillo appointed Assistant Professor of Computer Science at the University of Colorado Boulder
July 10, 2015

Congratulations to postdoc Rafael M. Frongillo! Raf has been appointed to Assistant Professor of Computer Science at the University of Colorado, Boulder.

by kmavon at July 10, 2015 07:49 PM

Postdoc Rafael M. Frongillo presents at COLT 2015
July 7, 2015

Postdoc Rafael M. Frongillo presented "Vector-Valued Property Elicitation" at COLT 2015 in Paris, France.

by kmavon at July 10, 2015 07:47 PM

David Weinberger
A solution to the Greek crisis

The post A solution to the Greek crisis appeared first on Joho the Blog.

by davidw at July 10, 2015 07:11 PM

Berkman Center front page
Berkman Initiative Spotlights Lessons from Ebola outbreak


Optimizing drug R&D incentives, increasing access to medicines, and improving coordination can help save lives in the next global public health emergency.

July 10, 2015 (Cambridge, Massachusetts) - Global Access in Action (GAiA), an initiative of the Berkman Center for Internet & Society at Harvard, is hosting a workshop today to explore lessons from the recent Ebola outbreak for improving future preparedness for public health crises. Forty leaders from civil society, academia, international procurement and donor agencies, government, and the pharmaceutical industry will review the Ebola drug development landscape and explore ways to alter policies to strengthen ongoing research and reduce the incidence and severity of future outbreaks.

Participants at today’s workshop will grapple with difficult questions generated by the recent Ebola outbreak, such as:

  • Why is there still no cure or vaccine for Ebola, almost forty years after the first outbreak and ten years after published reports that a vaccine candidate showed promise in non-human subjects?
  • What structural and/or policy changes could incentivize R&D into treatments for neglected diseases, including improved international coordination and non-patent incentives such as prizes, challenges, and advance market commitments?
  • What strategies should be adopted to improve future preparedness for similar public health emergencies, such as conditions on grant funding, and stockpiles or strike forces to improve humanitarian responses?

“The Ebola outbreak that killed more than 11,000 people in West Africa between 2013 and 2015 offers a painful illustration of how important it is for the world community to get drug development policy right,” said William Fisher, Global Access in Action co-director and WilmerHale Professor of Intellectual Property Law at Harvard Law School.

“Practical steps can save lives and improve responses to global health crises, such as the recent Ebola outbreak,” said Martha Minow, Dean of Harvard Law School. "Before the next crisis, it is vital to bring attention and political will to such practical steps, including nimble incentives for research and development of responsive drugs and coordination and commitment to enhance access to medicines.”

“Stakeholders across the world need to come together to develop systematic incentives for increasing R&D into diseases that disproportionately affect the global poor, and for which there are insufficient commercial incentives,” said Quentin Palfrey, Global Access in Action co-director and Special Counsel at the law firm WilmerHale.

“As the Ebola outbreak demonstrates, international coordination to respond to a burgeoning public health crisis is very difficult -- and the stakes are enormous,” added Mark Wu, Global Access in Action co-director, Berkman Center faculty co-director, and an Assistant Professor of Law at Harvard Law School. 

Global Access in Action explores laws and policies that govern innovation and commercialization of technologies for the poor. The project aims to develop pragmatic solutions to difficult problems that have tangible impact on the lives of the world’s poorest populations.

The workshop cuts across two Global Access in Action thematic topics: access to lifesaving medicines for the poor, and incentives for research into neglected diseases that disproportionately affect the poor. In 2014, the project hosted a workshop on ways to increase access to pharmaceutical products in the developing world, with particular emphasis on intra-country price discrimination and humanitarian licensing strategies.

For news and developments about Global Access in Action, please visit the project's website. If you wish to get in touch with the GAiA team, please email Quentin Palfrey.

About the Berkman Center for Internet & Society
Founded in 1997, the Berkman Center for Internet & Society at Harvard University is dedicated to exploring, understanding, and shaping the development of the digitally-networked environment. A diverse, interdisciplinary community of scholars, practitioners, technologists, policy experts, and advocates, we seek to tackle the most important challenges of the digital age while keeping a focus on tangible real-world impact in the public interest. Our faculty, fellows, staff, and affiliates conduct research, build tools and platforms, educate others, form bridges, and facilitate dialogue across and among diverse communities. More information is available on the Berkman Center website.

by gweber at July 10, 2015 01:21 PM

Feeds In This Planet