Current Berkman People and Projects

Keep track of Berkman-related news and conversations by subscribing to this page using your RSS feed reader. This aggregation of blogs relating to the Berkman Center does not necessarily represent the views of the Berkman Center or Harvard University but is provided as a convenient starting point for those who wish to explore the people and projects in Berkman's orbit. As this is a global exercise, times are in UTC.

The list of blogs being aggregated here can be found at the bottom of this page.

December 22, 2014

Bruce Schneier
The Limits of Police Subterfuge

"The next time you call for assistance because the Internet service in your home is not working, the 'technician' who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and -- when he shows up at your door, impersonating a technician -- let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have 'consented' to an intrusive search of your home."

This chilling scenario is the first paragraph of a motion to suppress evidence gathered by the police in exactly this manner, from a hotel room. Unbelievably, this isn't a story from some totalitarian government on the other side of an ocean. This happened in the United States, and by the FBI. Eventually -- I'm sure there will be appeals -- higher U.S. courts will decide whether this sort of practice is legal. If it is, the country will slide even further into a society where the police have even more unchecked power than they already possess.

The facts are these. In June, two wealthy Macau residents stayed at Caesar's Palace in Las Vegas. The hotel suspected that they were running an illegal gambling operation out of their room. It enlisted the police and the FBI, but could not provide enough evidence for them to get a warrant. So instead they repeatedly cut the guests' Internet connection. When the guests complained to the hotel, FBI agents wearing hidden cameras and recorders pretended to be Internet repair technicians and convinced the guests to let them in. They filmed and recorded everything under the pretense of fixing the Internet, and then used the information collected from that to get an actual search warrant. To make matters even worse, they lied to the judge about how they got their evidence.

The FBI claims that their actions are no different from any conventional sting operation. For example, an undercover policeman can legitimately look around and report on what he sees when he is invited into a suspect's home under the pretext of trying to buy drugs. But there are two very important differences: one of consent, and the other of trust. The former is easier to see in this specific instance, but the latter is much more important for society.

You can't give consent to something you don't know and understand. The FBI agents did not enter the hotel room under the pretext of making an illegal bet. They entered under a false pretext, and relied on that pretext for consent to their true mission. That makes things different. The occupants of the hotel room didn't realize who they were giving access to, and they didn't know the agents' intentions. The FBI knew this would be a problem. According to the New York Times, "a federal prosecutor had initially warned the agents not to use trickery because of the 'consent issue.' In fact, a previous ruse by agents had failed when a person in one of the rooms refused to let them in." Claiming that a person granting an Internet technician access is consenting to a police search makes no sense, and is no different from one of those "click through" Internet license agreements that you didn't read saying one thing while meaning another. It's not consent in any meaningful sense of the term.

Far more important is the matter of trust. Trust is central to how a society functions. No one, not even the most hardened survivalists who live in backwoods log cabins, can do everything by themselves. Humans need help from each other, and most of us need a lot of help from each other. And that requires trust. Many Americans' homes, for example, are filled with systems that require outside technical expertise when they break: phone, cable, Internet, power, heat, water. Citizens need to trust one another enough to grant access to their hotel rooms, their homes, their cars, their person. Americans simply can't live any other way.

It cannot be that every time we allow one of those technicians into our homes we are consenting to a police search. Again from the motion to suppress: "Our lives cannot be private -- and our personal relationships intimate -- if each physical connection that links our homes to the outside world doubles as a ready-made excuse for the government to conduct a secret, suspicionless, warrantless search." The resultant breakdown in trust would be catastrophic. People would not be able to get the assistance they need. Legitimate servicemen would find it much harder to do their job. Everyone would suffer.

It all comes back to the warrant. Through warrants, Americans legitimately grant the police an incredible level of access into their personal lives. This is a reasonable choice because the police need this access in order to solve crimes. But to protect ordinary citizens, the law requires the police to go before a neutral third party and convince them that they have a legitimate reason to demand that access. That neutral third party, a judge, then issues the warrant when he or she is convinced. This check on the police's power is for Americans' security, and is an important part of the Constitution.

In recent years, the FBI has been pushing the boundaries of its warrantless investigative powers in disturbing and dangerous ways. It collects phone-call records of millions of innocent people. It uses hacking tools against unknown individuals without warrants. It impersonates legitimate news sites. If the lower court sanctions this particular FBI subterfuge, the matter needs to be taken up -- and reversed -- by the Supreme Court.

This essay previously appeared in The Atlantic.

by Bruce Schneier at December 22, 2014 03:39 AM

Friday Squid Blogging: Squid Beard


As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at December 22, 2014 03:31 AM

Friday Squid Blogging: Recreational Squid Fishing in Washington State

There is year-round recreational squid fishing from the Strait of Juan de Fuca to south Puget Sound.

A nighttime sport that requires simple, inexpensive fishing tackle, squid fishing -- or jigging -- typically takes place on the many piers and docks throughout the Puget Sound region.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at December 22, 2014 03:29 AM

Lessons from the Sony Hack

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment's computer systems and began revealing many of the Hollywood studio's best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama's presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of "The Interview," a satire targeting that country's dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you're fluent in information-technology security. If you're not, you're probably wondering how in the world this could happen. If you are, you're aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I've spent decades dealing with Internet hackers (as I do now at my current firm), and I've learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus -- people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered "zero-day" vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you've heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks -- the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We're mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple's iCloud and posted them. If you've heard the IT-security buzz phrase "advanced persistent threat," this is it.

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot's networks didn't seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot's systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company's security is superior to the attacker's skills, not just to the security measures of other companies. Often, it isn't. We're much better at such relative security than we are at absolute security.

That is why security experts aren't surprised by the Sony story. We know people who do penetration testing for a living -- real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker -- and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren't sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won't end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. It didn't have to leave so much information exposed, and it didn't have to be so slow in detecting the breach, giving the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn't made racist jokes about Mr. Obama or insulted its stars -- or if their response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn't happen to the executives or the stars; it happened to the blameless random employees who were just using their company's email system. Because of that, they've had their most personal conversations -- gossip, medical conditions, love lives -- exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn't something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

by Bruce Schneier at December 22, 2014 01:20 AM

December 21, 2014

Bruce Schneier
SS7 Vulnerabilities

There are security vulnerabilities in the phone-call routing protocol called SS7 (Signaling System 7).

The flaws discovered by the German researchers are actually functions built into SS7 for other purposes -- such as keeping calls connected as users speed down highways, switching from cell tower to cell tower -- that hackers can repurpose for surveillance because of the lax security on the network.

Those skilled at the myriad functions built into SS7 can locate callers anywhere in the world, listen to calls as they happen or record hundreds of encrypted calls and texts at a time for later decryption. There also is potential to defraud users and cellular carriers by using SS7 functions, the researchers say.

Some details:

The German researchers found two distinct ways to eavesdrop on calls using SS7 technology. In the first, commands sent over SS7 could be used to hijack a cell phone's "forwarding" function -- a service offered by many carriers. Hackers would redirect calls to themselves, for listening or recording, and then onward to the intended recipient of a call. Once that system was in place, the hackers could eavesdrop on all incoming and outgoing calls indefinitely, from anywhere in the world.

The second technique requires physical proximity but could be deployed on a much wider scale. Hackers would use radio antennas to collect all the calls and texts passing through the airwaves in an area. For calls or texts transmitted using strong encryption, such as is commonly used for advanced 3G connections, hackers could request through SS7 that each caller's carrier release a temporary encryption key to unlock the communication after it has been recorded.

We'll learn more when the researchers present their results.

by Bruce Schneier at December 21, 2014 08:03 PM

December 20, 2014

Bruce Schneier
How the FBI Unmasked Tor Users

Kevin Poulsen has a good article up on Wired about how the FBI used a Metasploit variant to identify Tor users.

by Bruce Schneier at December 20, 2014 11:11 PM

Over 700 Million People Taking Steps to Avoid NSA Surveillance

There's a new international survey on Internet security and trust, of "23,376 Internet users in 24 countries," including "Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States." Amongst the findings, 60% of Internet users have heard of Edward Snowden, and 39% of those "have taken steps to protect their online privacy and security as a result of his revelations."

The press is mostly spinning this as evidence that Snowden has not had an effect: "merely 39%," "only 39%," and so on. (Note that these articles are completely misunderstanding the data. It's not 39% of people who are taking steps to protect their privacy post-Snowden, it's 39% of the 60% of Internet users -- which is not everybody -- who have heard of him. So it's much less than 39%.)

Even so, I disagree with the "Edward Snowden Revelations Not Having Much Impact on Internet Users" headline. He's having an enormous impact. I ran the actual numbers country by country, combining data on Internet penetration with data from this survey. Multiplying everything out, I calculate that 706 million people have changed their behavior on the Internet because of what the NSA and GCHQ are doing. (For example, 17% of Indonesians use the Internet, 64% of them have heard of Snowden and 62% of those have taken steps to protect their privacy, which equals 17 million people out of Indonesia's total population of 250 million.)

Note that the countries in this survey only cover 4.7 billion out of a total 7 billion world population. Taking the conservative estimates that 20% of the remaining population uses the Internet, 40% of them have heard of Snowden, and 25% of those have done something about it, that's an additional 46 million people around the world.
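The back-of-the-envelope arithmetic above is easy to reproduce. A minimal sketch in Python, using only the figures quoted in the essay (the Indonesia example and the conservative rest-of-world assumptions); the full calculation would repeat the same multiplication for each of the 24 surveyed countries:

```python
def behavior_changers(population, internet_rate, heard_rate, acted_rate):
    """People who changed their behavior: population, times the share
    online, times the share of those who heard of Snowden, times the
    share of those who then took steps."""
    return population * internet_rate * heard_rate * acted_rate

# Indonesia, per the essay: 250M people, 17% online, 64% heard, 62% acted.
indonesia = behavior_changers(250e6, 0.17, 0.64, 0.62)
print(round(indonesia / 1e6))  # about 17 million

# Rest of the world (7B minus the 4.7B covered by the survey), using the
# essay's conservative assumptions: 20% online, 40% heard, 25% acted.
rest = behavior_changers(7e9 - 4.7e9, 0.20, 0.40, 0.25)
print(round(rest / 1e6))  # about 46 million
```

Summing the 24 per-country figures (706 million) with this rest-of-world estimate (46 million) gives the roughly 750 million total cited below.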

It's probably true that most of those people took steps that didn't make any appreciable difference against an NSA level of surveillance, and probably not even against the even more pervasive corporate variety of surveillance. It's probably even true that some of those people didn't take steps at all, and just wish they did or wish they knew what to do. But it is absolutely extraordinary that 750 million people are disturbed enough about their online privacy that they will represent to a survey taker that they did something about it.

Name another news story that has caused over ten percent of the world's population to change their behavior in the past year? Cory Doctorow is right: we have reached "peak indifference to surveillance." From now on, this issue is going to matter more and more, and policymakers around the world need to start paying attention.

Related: a recent Pew Research Internet Project survey on Americans' perceptions of privacy, commented on by Ben Wittes.

This essay previously appeared on Lawfare.

EDITED TO ADD (12/15): Reddit thread.

EDITED TO ADD (12/16): SlashDot thread.

by Bruce Schneier at December 20, 2014 09:55 PM

Amanda Palmer
the hospital threads.

a story.

a few nights ago i was visiting anthony in the hospital, and i left on the late side. i didn’t feel great about the visit. anthony’s been having a hard enough time facing this bone marrow transplant, and it’s been even harder to field the human energy and all the people who are trying to help by visiting, but it’s not really helping, it’s irritating him. he’s hopped up on steroids. he’s facing things i can’t imagine. i feel so fucking powerless in the face of all this. i want to help, but there’s nothing, literally nothing i can do to help. not if he doesn’t want it. just wait, i guess. just be open to whatever’s going to need doing.

i was distracted when i left. as i’d already walked out the building and towards the hospital garage, i realized i’d left my keys in his room. i went back to the hospital. it was locked. as i stood there, facing my fate, a guy and gal walked up to the doors. they were probably in their early twenties. students, maybe. friendly-looking folk.

the temperature had plummeted and the wind had started in.

“are you trying to get in?” i asked. “…it’s locked. i’m trying to get in, too. i left my keys up there.”

i asked if they were visiting someone. they said yes.
who? i asked. their friend. she was just in a car accident.

“you’re amanda palmer, aren’t you?” said the guy.

“yeah” i said. “who are you guys?”

“i’m porsha”, said the gal. she had long hair, tucked into a hat. they looked distracted. i wondered why they were there.

“and i’m erik. your husband is my favorite writer. i saw you guys at porter square books a little while ago….and your friend….anthony? he was reading from his book.”

“ha. anthony’s the one upstairs.” i said. “he’s about to get radiated. what happened to your friend?”

“we don’t know. she was in a car accident. we’re the godparents of her baby.”

“oof. i hope she’s okay.”

we wandered around now, as a gang, trying to find an entrance to the hospital. these doors were locked. those doors were locked.

we finally wound up circling to the emergency room entrance. there was a guy there, with a huge cut on his head, blood still drying. he needed a light for his cigarette. i didn’t have one.

“did you hear?” he asked them.

this was her boyfriend brett. the babydaddy.

“no….” they said.

and in one gruesome mime-motion, he sliced his finger across his throat. i thought, in that moment, that she’d died. but no, her neck had broken, and she was paralyzed, from the neck down. erik and porsha both sort of went into shock.

i hugged erik and porsha, felt useless, said something to the effect of “please take care of yourselves and get out of the cold” and went up to find my keys. a nurse was in his room, changing one of the liquid bags above him. anthony wasn’t mad. just tired. he waved. i made some comment about being an idiot. my keys. i told him i loved him.
i took the elevator back downstairs. i’m getting to know this hospital so well.

when i got back down to the emergency room entrance, they were still there. porsha was on the phone. erik was staring into space.

“what are you doing right now, do you have a plan? where did you come from? where are you going?”

“we live about 45 minutes away. we drove straight here when we heard.”

“you shouldn’t drive. not now. you’re in shock.” i said.

her name, they said, was alexandria. they were waiting for alexandria’s mom to come out of the hospital, too. she was banged up, they said, but not badly. i felt like i had to do something. brett, the babydaddy, still hadn’t found a light. i know this feeling.

so i did the only thing i could do: i went to find brett a light for his cigarette.

nobody in that whole fucking hospital had a cigarette. i asked 15 people. i finally asked a doctor. i told him it was for a guy outside whose girlfriend had just been paralyzed in an accident.

“aie aie.” he said. “yes, i know. i’m…that doctor. so sad…” he shook his head. “let me see if i can find you a light. i have an idea….” and he headed back into the ER.

i checked my twitter feed while i waited, to cheer me up.

there was a hostage situation in sydney. my oz friends started tweeting and texting me: some had friends and family blocks away from the store. i started retweeting the news, sharing information. watching people yell and scramble. “DON’T TWEET ABOUT THIS” some said: “we know you’re trying to help, but the police have requested….”

i put my phone down. i was not going to be able to help. whatever. fuck it.

the doctor came back.

“i’m sorry to say….but, my friend, we are living in different time. i couldn’t find a light anywhere back there.”

i laughed. “well, maybe that’s good….right? not so many people dying of cigarette poisoning?”

he laughed back.

i went back outside.

the sidewalk was empty. they were all gone.

i sighed. i’d tried. i walked through the cold over to the garage, loading my twitter feed and reading about the hostage crisis at the lindt chocolate shop in sydney.

and there they were. erik and porsha, and brett the babydaddy with the bleeding head, and they’d been joined by a fourth: brett’s brother.

“what are you guys….doing?” i asked.

they didn’t really have a plan.

so i did the only thing i could do, and i got to feel useful for the first time that day. i walked them to the hotel across the street, helped them get three rooms, made sure they had toothbrushes, gave them my contact info, and i left.

erik sent me an email today.
they’re all fine.
they’re really sad.

and they’ve set up a crowdfund for alexandria. it’s already made $11k in just a few days. you may know her, or you may just feel like chipping in a symbolic $5 because…i dunno, because it’s that time of year and you want to do something token-sized to feel connected to this sea of humanity.

i get asked a dozen times a day to help with various crowdfunds, and i don’t help most of them because…you can’t help everybody.

mostly i really only help the people i know. you can’t help everybody. it can drive a person crazy.

but now i know these people and, because i forgot my keys, i shared their tragedy. and something became real, in that instant.

so now i’m sharing it with you. just click on this picture to go to their page…

here’s the thing.
erik came with me to the hotel desk when i was checking them in and asked why i was doing this, why i was helping them.

and here’s the answer: because i could. you help your crowd. you help your friends. even new friends.

the threads can be made of any material.

i love you.

p.s. i strung some lights in his room.

by admin at December 20, 2014 08:51 PM

December 19, 2014

Bruce Schneier
ISIS Cyberattacks

Citizen Lab has a new report on a probable ISIS-launched cyberattack:

This report describes a malware attack with circumstantial links to the Islamic State in Iraq and Syria. In the interest of highlighting a developing threat, this post analyzes the attack and provides a list of Indicators of Compromise.

A Syrian citizen media group critical of Islamic State of Iraq and Syria (ISIS) was recently targeted in a customized digital attack designed to unmask their location. The Syrian group, Raqqah is being Slaughtered Silently (RSS), focuses its advocacy on documenting human rights abuses by ISIS elements occupying the city of Ar-Raqah. In response, ISIS forces in the city have reportedly targeted the group with house raids, kidnappings, and an alleged assassination. The group also faces online threats from ISIS and its supporters, including taunts that ISIS is spying on the group.

Though we are unable to conclusively attribute the attack to ISIS or its supporters, a link to ISIS is plausible. The malware used in the attack differs substantially from campaigns linked to the Syrian regime, and the attack is focused against a group that is an active target of ISIS forces.

News article.

by Bruce Schneier at December 19, 2014 06:24 PM

Fake Cell Towers Found in Norway

In yet another example of what happens when you build an insecure communications infrastructure, fake cell phone towers have been found in Oslo. No one knows who has been using them to eavesdrop.

This is happening in the US, too. Remember the rule: we're all using the same infrastructure, so we can either keep it insecure so we -- and everyone else -- can use it to spy, or we can secure it so that no one can use it to spy.

by Bruce Schneier at December 19, 2014 09:46 AM

Kate Krontiris
Duty Free is really #winning on package design and product...

Duty Free is really #winning on package design and product display. Creative, weird, colorful.

Istanbul, Winter 2014

December 19, 2014 12:48 AM

December 18, 2014

OpenNet Initiative
Looking Forward: A Note of Appreciation and Closure on a Decade of Research

After a decade of collaboration in the study and documentation of Internet filtering and control mechanisms around the world, the OpenNet Initiative partners will no longer carry out research under the ONI banner. The website, including all reports and data, will be maintained indefinitely to allow continued public access to our entire archive of published work and data.


by ONI Team at December 18, 2014 06:38 PM

Nick Grossman
Increasing trust, safety and security using a Regulation 2.0 approach

This is the latest post in a series on Regulation 2.0 that I’m developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government.

Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger.  First, my heart goes out to the passenger, her friends and her family.  And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web.

This example shows us that these systems are far from perfect. This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent.  As I’ve written about previously, trust, safety and security are pillars of successful and healthy web platforms:

  • Safety is putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
  • Trust, a bit more nuanced in how it’s created, is creating the explicit and implicit contracts between the company, customers and employees.
  • Security protects the company, customers, and employees from breach, digital or physical, while abiding by local, national and international law.

An event like this has compromised all three.  The question, then, is how to improve these systems, and then whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could do before.

The idea I’ve been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing “regulatory” systems that make it possible and safe(r) to transact with strangers.

The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren’t — building on the “trust, then verify” model — but can actually be more effective than traditional “permission-based” licensing and permitting regimes.  In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict).  Compare that to traditional systems that don’t have access to vast and granular data, which can only rely on strict up-front vetting followed by limited, infrequent oversight.  You might describe it like this:


This model has worked well in situations where the risk of personal harm is relatively low.  If I buy something on eBay and the seller never ships, I'll live.  When we start connecting real people in the real world, things get riskier and more dangerous.  There are many important questions that we as entrepreneurs, investors and regulators should consider:

  • How much risk is acceptable in an “open access / high accountability” model and then how could regulators mitigate known risks by extending and building on regulation 2.0 techniques?
  • How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?
  • How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc)?

Here are a few ideas to address these questions:

With all of this, the key is in the information.  Looking at the diagram above, “high accountability” is another way of saying “built on information”.  The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data.  One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register.  The idea of high-information regulation only works if there is an exchange of information!  So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data?  Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector.  Entrepreneurs are largely skeptical of this approach, given how often the “build an audience, then ask for forgiveness” model has played out.  But this model is risky and expensive, and now, having seen it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency.  One of the ways web platforms are able to convince users to participate in the “open access for accountability through data” trade is that many of the outputs of this data exchange are visible.  This is part of the trade.  I can see my eBay seller score; Uber drivers can see their driver score; etc.  A major concern that many companies and individuals have is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make that clearer.

Think about how to involve third-party stakeholders in the accountability process.  For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector.   Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process?  One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems.  Such a right would allow / require actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above linked post.  Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings.  Such disclosures could be made as part of the trade for the freedom to operate.

Related, consider ways to use encryption and aggregate data for analysis to avoid some of the privacy issues inherent in this approach.  While users trust web platforms with very specific data about their activities, how that data is shared with the government is not typically part of that agreement, and it needs to be handled carefully.  For example, even though Apple may know how fast I’m driving at any time, I would be surprised and upset if it reported me to the authorities for speeding.  Of course, this is completely different in emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement.
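To make the aggregate-data idea concrete, here is a toy sketch (all names, fields, and numbers are invented for illustration, not any platform's actual practice) of sharing only population-level statistics with a regulator, never per-user records:

```python
# Toy illustration: a platform computes an aggregate report for a regulator,
# stripping out user identifiers entirely. All data here is made up.
trips = [
    {"user": "u1", "speed_mph": 61},
    {"user": "u2", "speed_mph": 58},
    {"user": "u3", "speed_mph": 73},
]

def aggregate_report(trips, limit_mph=65):
    """Return only counts and rates -- no user identifiers."""
    over = sum(1 for t in trips if t["speed_mph"] > limit_mph)
    return {"trips": len(trips), "share_over_limit": over / len(trips)}

print(aggregate_report(trips))
```

The regulator learns the rate of speeding on the platform without learning who was speeding, which is the trade the paragraph above describes.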

While it is not clear that any of these techniques would have prevented this incident, or that it might have been possible to prevent this at all, my idealistic viewpoint is that by working to collaborate on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches.

// thanks to Brittany Laughlin and Aaron Wright for their input on this post

by Nick Grossman at December 18, 2014 01:15 PM

Bruce Schneier
Not Enough CISOs to Go Around

This article is reporting that the demand for Chief Information Security Officers far exceeds supply:

Sony and every other company that realizes the need for a strong, senior-level security officer are scrambling to find talent, said Kris Lovejoy, general manager of IBM's security service and former IBM chief security officer.

CISOs are "almost impossible to find these days," she said. "It's a bit like musical chairs; there's a finite number of CISOs and they tend to go from job to job in similar industries."

I'm not surprised, really. This is a tough job: never enough budget, and you're the one blamed when the inevitable attacks occur. And it's a tough skill set: enough technical ability to understand cybersecurity, and sufficient management skill to navigate senior management. I would never want a job like that in a million years.

Here's a tip: if you want to make your CISO happy, here's her holiday wish list.

"My first wish is for companies to thoroughly test software releases before release to customers...."

Can we get that gift wrapped?

by Bruce Schneier at December 18, 2014 01:17 AM

December 17, 2014

Berkman Center front page
Upcoming Events: The Great Firewall Inverts (1/13); Disconnected: Youth, New Media, and the Ethics Gap (1/20)
Berkman Events Newsletter Template
berkman luncheon series

The Great Firewall Inverts

Tuesday, January 13, 12:30pm ET, Berkman Center for Internet & Society, 23 Everett St, 2nd Floor. This event will be webcast live.


In the last few years, usage of the mobile messaging app WeChat (微信 Weixin) has skyrocketed not only inside China but outside as well. For mainland Chinese, WeChat is one of the only options available, due to frequent blockage of apps like Viber, Line, Twitter and Facebook. However, outside of China, fueled by a massive marketing campaign and the promise of "free calls and texts", overseas Chinese students and families, Tibetan exiles, and Bollywood celebrities also use the app as their primary mobile communications service. It is this phenomenon that might be called an inversion of the Great Firewall. Instead of Chinese users scaling the wall to get out, people around the world are walking up to the front gate and asking to be let in.

Combined with the rise of attractive, low-cost mobile handsets from Huawei and Xiaomi that include China-based cloud services, now being sold in India and elsewhere, the world is witnessing a massive expansion of Chinese telecommunications reach and influence, powered entirely by users choosing to participate in it. Because these systems are built on proprietary protocols and software, their inner workings are largely opaque and mostly insecure. Like most social media apps, WeChat has full permission to activate microphones and cameras, track GPS, access user contacts and photos, and copy all of this data at any time to its servers. Recently, it was discovered that Xiaomi MIUI phones sent all text messages through the company's cloud servers in China without asking the user (though once this gained broad coverage in the news, the feature was turned off by default).

The fundamental question is: do the Chinese companies behind these services have any market incentive or legal obligation to protect the privacy of their non-Chinese global userbase? Do they willingly or automatically turn over all data to the Ministry of Public Security or State Internet Information Office? Will we soon see foreign users targeted or prosecuted due to "private" data shared on WeChat? Finally, from the Glass Houses Department, is there any fundamental difference in the impact on privacy and freedom for an American citizen using WeChat versus a Chinese citizen using WhatsApp or Google?

Nathan Freitas leads the Guardian Project, an open-source mobile security software project, and directs technology strategy and training at the Tibet Action Institute. His work at the Berkman Center focuses on tracking the legality and prosecution risks for mobile security apps users worldwide. RSVP Required. more information on our website>

berkman luncheon series

Disconnected: Youth, New Media, and the Ethics Gap

Tuesday, January 20, 12:30pm ET, Berkman Center for Internet & Society, 23 Everett St, 2nd Floor. This event will be webcast live.


Fresh from a party, a teen posts a photo on Facebook of a friend drinking a beer. A college student repurposes an article from Wikipedia for a paper. A group of players in a multiplayer online game routinely cheat new players by selling them worthless virtual accessories for high prices. In her book, Disconnected, Carrie James explores how young people approach situations such as these as well as more dramatic ethical dilemmas that arise in digital contexts. Based on qualitative research carried out as part of the Good Play Project, Disconnected is an account of how youth, and the adults in their lives, think about — and often don’t think about — the moral and ethical dimensions of their participation in online communities. In this talk, James will share key insights from the book and related work on supporting meaningful and civil dialogue online.

Carrie James is a Research Director and Principal Investigator at Project Zero, and Lecturer on Education at the Harvard Graduate School of Education. Her research explores young people’s digital, moral, and civic lives. She co-directs the Good Play Project, a research and educational initiative focused on youth, ethics, and the new digital media, and the Good Participation project, a study of how youth “do civics” in the digital age. RSVP Required. more information on our website>


Jessica Silbey on The Eureka Myth: Creators, Innovators and Everyday Intellectual Property


Why do people create and innovate? And how does intellectual property law encourage, or discourage, the process? In this talk Jessica Silbey -- Professor at Suffolk University Law School -- discusses her recent book The Eureka Myth: Creators, Innovators, and Everyday Intellectual Property, which investigates the motivations and mechanisms of creative and innovative activity in everyday professional life. Based on over fifty face-to-face interviews, the book centers on the stories told by interviewees describing how and why they create and innovate and whether or how IP law plays a role in their activities. The goal of the empirical project was to figure out how IP actually works in creative and innovative fields, as opposed to how we think or say it works (through formal law or legislative debate). Breaking new ground in its qualitative method examining the economic and cultural system of creative and innovative production, The Eureka Myth draws out new and surprising conclusions about the sometimes misinterpreted relationships between creativity, invention and intellectual property protections. video/audio on YouTube>

Other Events of Note

Local, national, international, and online events that may be of interest to the Berkman community:

You are receiving this email because you subscribed to the Berkman Center's Weekly Events Newsletter. Sign up to receive this newsletter if this email was forwarded to you. To manage your subscription preferences, please click here.

Connect & get involved: Jobs, internships, and more iTunes Facebook Twitter Flickr YouTube RSS

See our events calendar if you're curious about future luncheons, discussions, lectures, and conferences not listed in this email. Our events are free and open to the public, unless otherwise noted.

by ashar at December 17, 2014 07:08 PM

Bruce Schneier
Comments on the Sony Hack

I don't have a lot to say about the Sony hack, which seems to still be ongoing. I want to highlight a few points, though.

  1. At this point, the attacks seem to be a few hackers and not the North Korean government. (My guess is that it's not an insider, either.) That we live in the world where we aren't sure if any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.

  2. Sony is a company that hackers have loved to hate for years now. (Remember their rootkit from 2005?) We've learned previously that putting yourself in this position can be disastrous. (Remember HBGary.) We're learning that again.

  3. I don't see how Sony launching a DDoS attack against the attackers is going to help at all.

  4. The most sensitive information that's being leaked as a result of this attack isn't the unreleased movies, the executive emails, or the celebrity gossip. It's the minutiae from random employees:

    The most painful stuff in the Sony cache is a doctor shopping for Ritalin. It's an email about trying to get pregnant. It's shit-talking coworkers behind their backs, and people's credit card log-ins. It's literally thousands of Social Security numbers laid bare. It's even the harmless, mundane, trivial stuff that makes up any day's email load that suddenly feels ugly and raw out in the open, a digital Babadook brought to life by a scorched earth cyberattack.

    These people didn't have anything to hide. They aren't public figures. Their details aren't going to be news anywhere in the world. But their privacy has been violated, and there are literally thousands of personal tragedies unfolding right now as these people deal with their friends and relatives who have searched and read this stuff.

    These are people who did nothing wrong. They didn't click on phishing links, or use dumb passwords (or even if they did, they didn't cause this). They just showed up. They sent the same banal workplace emails you send every day, some personal, some not, some thoughtful, some dumb. Even if they didn't have the expectation of full privacy, at most they may have assumed that an IT creeper might flip through their inbox, or that it was being crunched in an NSA server somewhere. For better or worse, we've become inured to small, anonymous violations. What happened to Sony Pictures employees, though, is public. And it is total.

Gizmodo got this 100% correct. And this is why privacy is so important for everyone.

I'm sure there'll be more information as this continues to unfold.

EDITED TO ADD (12/12): There are two comment threads on this post: Reddit and Hacker News.

by Bruce Schneier at December 17, 2014 06:00 PM

Making “customer experience” a first person thing

“Customer experience” (abbreviated CX) is a hot topic in business. Which makes sense. Business needs customers, and should care about customers’ experiences with business. Problem is, all this concern, so far, is kinda one-sided.

According to Wikipedia (as of today), “Customer experience is the sum of all experiences a customer has with a supplier of goods and/or services, over the duration of their relationship with that supplier.”

Note that frame of reference: a supplier.

It continues, “This can include awareness, discovery, attraction, interaction, purchase, use, cultivation and advocacy.”

Three of those are experiences customers know and care about: interaction, purchase and use. The others — awareness, discovery, attraction, cultivation and advocacy — might be things customers experience, but are mostly marketing jive.

Two paragraphs later it says “Analysts and commentators who write about customer experience and customer relationship management have increasingly recognized the importance of managing the customer’s experience.” The italics are mine.

Who wants their experience of anything managed by somebody else?

Stop here and think about how you function independently as a customer, and the tools you use to manage your own customer experiences, across every company you deal with. Chances are you use some combination of these:

  • Wallet and/or purse
  • Cash
  • Credit or debit cards
  • Car
  • Mobile phone or tablet
  • Computer
  • Apps (not just for commercial interactions, but for managing budgets and expenses, paying bills and filling out tax forms)

Your list may be different, but what matters is that those tools are yours. Yes, your car may be a rental, and your credit cards belong to a bank; but they are your tools, and — here’s the key: you use them to deal with many different companies in identical or similar ways. They each express your agency: the power to act with full effect in the world, as an independent human being.

Your experience with those tools is also personal, meaning yours alone.  You can tell they are yours because you speak of them, and think about them, using the first person singular possessive voice: my car, my cash, my credit card, my phone. They are first person technologies that enlarge and enhance what you can do with your body.

Here’s another way to look at them: they give you scale.

What we need from CX is scale for us, not just for companies wanting to give us a better experience of them. That scale is what VRM is about, and it can only work if it’s good for both sides.

We can’t get there if we start on the company’s side. We can only get there by starting with the individual customer, and working toward scale for him or her.

This can be scary and alien to companies used to thinking that the customer needs to be “owned,” “managed” or “locked in” somehow. What companies need to think about are the benefits both sides get from first person technologies.

I think there’s a good place to start working on new first person technologies that work better for everybody, and I’ll lay that out in the next post.

by Doc Searls at December 17, 2014 02:53 PM

Nick Grossman
Regulation and the peer economy: a 2.0 framework

As part of my series on Regulation 2.0, which I’m putting together for the Project on Municipal Innovation at the Harvard Kennedy School, today I am going to employ a bit of a cop-out tactic and rather than publish my next section (which I haven’t finished yet, largely because my whole family has the flu right now), I will publish a report written earlier this year by my friend Max Pomeranc.

Max is a former congressional chief of staff, who did his masters at the Kennedy School last year.  For his “policy analysis exercise” (essentially a thesis paper) Max looked at regulation and the peer economy, exploring the idea of a “2.0” approach.  I was Max’s advisor for the paper, and he has since gone on to a policy job at Airbnb.

Max did a great job of looking at two recent examples of peer economy meets regulation: the California ridesharing rules, and the JOBS act for equity crowdfunding, and exploring some concepts which could be part of a “2.0” approach to regulation.  His full report is here. Relatively quick read, a good starting place for thinking about these ideas.

I am off to meet Max for breakfast as we speak!

More tomorrow.

by Nick Grossman at December 17, 2014 02:50 PM

Joseph Reagle
Measure, manage, manipulate

The aphorism “If you can’t measure it, you can’t manage it” is common in contemporary life. It is often attributed to business guru Peter Drucker, and, even if he did not say it, the notion has become a slogan for the quantified, big-data world in which we live. In boardrooms, non-profits, and universities, we are fixated on quantifiable measures. Otherwise, how do you know what to improve? Another aphorism I find equally compelling is Goodhart’s law, which, in Marilyn Strathern’s words, states “When a measure becomes a target it ceases to be a good measure” (Strathern, 1997: 308). Why? Because measures which become targets are soon subject to manipulation. I refer to this as the 3-M’s paradox (measure/manage/manipulate). I first thought about this in research about ratings and rankings at an online photography-sharing community. I concluded that evaluation in the digital age is characterized by the following.

  1. It’s hard to quantify the qualitative: there was much experimentation with rating and ranking systems.
  2. Quantitative mechanisms beget their manipulation: people “mate” rated friends, “revenge” rated enemies, and inflated their own standing.
  3. “Fixes” to manipulation have their own, often unintended, consequences and are also susceptible to manipulation: non-anonymous ratings led to rating inflation.
  4. Quantification (and how one implements it) privileges some things over others: nudes were highly rated, more so when measured by number of comments, not so with photos of flowers.
  5. Any “fixes” often take the form of more elaborate, automated, and meta quantification: such as making some users “curators” or labeling them as “helpful.”

Of course, this extends beyond online ratings communities. When politicians sought to manage primary schools on the basis of measures of student achievement, cheating soon followed. My favorite example of this is in Texas, where administrators “disappeared” poorly performing students so that they could not take the standardized tests. Colleges can be measured with respect to class size and selectivity; this too can be “gamed.”

What is most interesting about ranking systems that reduce multiple variables into a single index is how arbitrary they often are. In a classic paper, Richard Becker and his colleagues looked at how they could manipulate the outcomes of “best places to live” rankings. While the methods used to construct the rankings show fairly good agreement at the top and bottom ends, the choice of ranking method and how the variables were weighted did make significant differences in the ordering (Becker et al., 1987). Malcolm Gladwell described this problem: “A ranking can be heterogeneous … as long as it doesn’t try to be too comprehensive. And it can be comprehensive as long as it doesn’t try to measure things that are heterogeneous” (Gladwell, 2011). Yet many schemes try to do both, including U.S. News’ college rankings. (To get a feel for this, you can play Jeffrey Stake’s ranking game of law schools.)
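Becker et al.’s finding is easy to reproduce in miniature. The sketch below uses invented places and scores (not their data) to show that the choice of weights alone can reverse a composite ranking:

```python
# Illustrative only: hypothetical places and criterion scores, showing how
# the weighting of a composite index can reorder a ranking.
scores = {
    "Place A": {"climate": 9, "economy": 3},
    "Place B": {"climate": 5, "economy": 6},
    "Place C": {"climate": 2, "economy": 9},
}

def rank(weights):
    """Rank places by a weighted sum of their criterion scores."""
    index = {
        name: sum(weights[c] * v for c, v in crit.items())
        for name, crit in scores.items()
    }
    return sorted(index, key=index.get, reverse=True)

# Weighting climate heavily puts Place A first...
print(rank({"climate": 0.8, "economy": 0.2}))  # ['Place A', 'Place B', 'Place C']
# ...while weighting the economy heavily puts Place C first.
print(rank({"climate": 0.2, "economy": 0.8}))  # ['Place C', 'Place B', 'Place A']
```

The underlying data never changes; only the weights do, which is exactly why single-index rankings can feel so arbitrary.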

Honestly, I’m confused by all of this. Clearly, we need to measure some things, but we also need to be highly skeptical of what we choose to measure, how we do so, and what we do with the resulting data.

Becker RA, Denby L, Mcgill R, et al. (1987) Analysis of data from the places rated almanac. American Statistician, 41(3), 169–186, Available from: (accessed 19 August 2011).

Gladwell M (2011) The order of things. The New Yorker, Available from: (accessed 18 December 2014).

Strathern M (1997) 'Improving ratings’: audit in the British University system. European Review, 5(3), 305–321, Available from:

by Joseph Reagle at December 17, 2014 05:00 AM

Sonya Song
Q&A on Censorship with the Oxford Internet Institute
After presenting my study on China's censorship of online news at the Oxford Internet Institute (OII), I had a great talk with David Sutcliffe, the editor of the OII Policy and Internet Blog, and went through the following questions. The full conversation is published in the blog post titled Uncovering the patterns and practice of censorship in Chinese news sites.
  1. How much work has been done on censorship of online news in China? What are the methodological challenges and important questions associated with this line of enquiry?
  2. You found that party organs, ie news organizations tightly affiliated with the Chinese Communist Party, published a considerable amount of deleted news. Was this surprising?
  3. How sensitive are citizens to the fact that some topics are actively avoided in the news media? And how easy is it for people to keep abreast of these topics (eg the “three Ts” of Tibet, Taiwan, and Tiananmen) from other information sources?
  4. Is censorship of domestic news (such as food scares) more geared towards “avoiding panics and maintaining social order”, or just avoiding political embarrassment? For example, do you see censorship of environmental issues and (avoidable) disasters?
  5. You plotted a map to show the geographic distribution of news deletion: what does the pattern show?
  6. What do you think explains the much higher levels of censorship reported by others for social media than for news media? How does geographic distribution of deletion differ between the two?
  7. Can you tell if the censorship process mostly relies on searching for sensitive keywords, or on more semantic analysis of the actual content? ie can you (or the censors..) distinguish sensitive “opinions” as well as sensitive topics?
  8. It must be a cause of considerable anxiety for journalists and editors to have their material removed. Does censorship lead to sanctions? Or is the censorship more of an annoyance that must be negotiated?
  9. What do you think explains the lack of censorship in the overseas portal? (Could there be a certain value for the government in having some news items accessible to an external audience, but unavailable to the internal one?)

by Sonya Song at December 17, 2014 04:17 AM

Talk on News Censorship

I'm fortunate to be funded by the Knight-Mozilla OpenNews Fellowship program to attend a conference on China and the New Internet World organized by the Oxford Internet Institute.  There I will give a presentation on China's news censorship.  I've uploaded the full paper and the slides online; please feel free to download them for more information.  I also have more data and preliminary findings unpublished, and I'd love to share and discuss them.  My email address is songyan at msu dot edu

Prior and Ongoing Research on Internet Censorship

Internet censorship has been attracting much attention from various academics and institutes.  For example, the Open Net Initiative (ONI) has been constantly testing the availability of websites in 74 countries and rating government control of content related to politics, social issues, Internet tools, and conflict/security (Palfrey, 2010).  The Open Internet Tool Project (OpenITP) surveyed circumvention tool users living in China to understand how they bypass the Great Firewall in hopes of building better tools to serve the needs of internet users in China and other censored regimes (Robinson et al., 2013).

Among the empirical studies focused on online media, Bamman et al.’s (2012) work claimed to be “the first large–scale analysis of political content censorship,” investigating messages deleted from Sina Weibo, a Chinese equivalent to Twitter.  They found 16.25% of posts were deleted after their publication time and recognized some characteristics related to post deletions, including 295 sensitive keywords and outlying provinces such as Tibet and Qinghai.  Beyond Sina Weibo and on an even larger scale, King et al. (2013) collected data from nearly 1,400 Chinese social media platforms and analyzed the deleted messages with the aid of linguistic software.  In contrast to the previous presumption that harsh criticism of the government is the target of censors, King et al. found that it is in fact ongoing and potential collective action that the state aims to prevent and suppress.

Research Methods in a Nutshell

To the best of our knowledge, however, censorial practices in online news media have never been studied, let alone investigated extensively through computational approaches.  Therefore, our study may be the first empirical attempt to systematically examine news articles deleted from the Chinese cyberspace.

We developed scripts to collect news articles published on NetEase and Sina, two major news aggregators headquartered in China.  Meanwhile, we continuously checked whether these articles remained available and marked a news article as deleted once its link was found broken.  To make sure that a story was deleted because of its content rather than for editorial or technical reasons, we searched across the websites for articles with the same title under a different link.  Only when no duplicate was available did we claim that a particular story had been deleted.
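The deletion check described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' actual code; the function names and the fetch/search callbacks are assumptions standing in for real HTTP requests and a site search:

```python
def is_deleted(article, fetch_ok, find_by_title):
    """Two-step deletion check: mark an article deleted only if (1) its
    original link is broken AND (2) no copy with the same title exists
    elsewhere on the site under a different URL."""
    if fetch_ok(article["url"]):
        return False  # original link still resolves: not deleted
    # A broken link alone could be editorial or technical; require that no
    # duplicate of the story survives before calling it a deletion.
    duplicates = [u for u in find_by_title(article["title"])
                  if u != article["url"]]
    return not duplicates

# Usage with stand-in callbacks (a real crawler would issue HTTP requests
# and query the site's search here):
alive = {"http://news.example/1"}
index = {"Story A": ["http://news.example/1"],
         "Story B": ["http://news.example/2-mirror"]}

def check(article):
    return is_deleted(article, lambda u: u in alive,
                      lambda t: index.get(t, []))

print(check({"url": "http://news.example/1", "title": "Story A"}))  # False
print(check({"url": "http://news.example/2", "title": "Story B"}))  # False: a copy survives
print(check({"url": "http://news.example/3", "title": "Story C"}))  # True
```

The second case is the important one: the original link is broken, but because the same story is still live under another URL, the method declines to count it as censored.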

After collecting thousands of deleted news stories, we ran a regression over these data to detect patterns associated with deletion.  The technique we adopted is ReLogit (King and Zeng, 2001a and 2001b), a logistic regression for handling rare-events data.  This tool was developed by political scientists to analyze rare events, such as wars and coups.  It is an appropriate tool for our study because the overall deletion rates across the two websites were under 1%, as summarized below.

Findings and Conclusions

During the course of our study, on each website, about two articles were deleted per day and the overall deletion rate was 0.05% on NetEase and 0.13% on Sina Beijing.

Several similar patterns have been found across the two news portals: 
  • Domestic news had a significantly higher chance of being deleted than international news: twice as likely for NetEase, and about six times for Sina Beijing.
  • News covering Beijing had twice the chance for deletion compared to news covering other places in China.
  • Tibet as a subject matter had little relation with deletion. 
  • National, compared to local, news was significantly associated with deletion on both websites: for NetEase, national news was one and a half times as likely to be deleted; for Sina Beijing, one third as likely.
  • The nature of the event was another strong indicator. Compared to neutral stories, positive news on NetEase had one third the chance of being deleted while negative news had nearly four times the chance; on Sina Beijing, negative news was three times as likely to be deleted.
  • Five out of 13 coded news topics were strongly associated with news deletions, including politics, business, foreign affairs, food and drugs, and military, although the strengths varied across the categories and the websites.
From this evidence, we reached the following conclusions: 
  • The two Chinese news portals deleted news with similar patterns.
  • These similarities translate into a practice of systematic control, the quintessential component of the definition of censorship (Peleg, 1993). 
  • Hence, for the first time, we have confirmed and quantified online news censorship in China. 

Taboo Words

Beyond news deletion, I've been examining comment deletions as well.  I've created some word clouds with the help of Wordle and highlighted the keywords most commonly found in deleted comments.  They're not included in the paper or the slides. 

These keywords are aligned with our general understanding of taboo topics, such as land acquisition, death toll, social unrest, food safety, pollution, and lamentable work environment. 


Comments Prohibited and Suppressed

A second research topic of mine is how comments are manipulated and what patterns are associated with the manipulation.  Various types of manipulation have been observed, including disabling the commenting function, screening and filtering submitted comments before publication (i.e., pre-censorship), and deleting published comments after publication (i.e., post-censorship).  This topic isn't included in the paper or the slides. 

To make this research topic more understandable, I'll first elaborate on the general practice of Chinese news portals.  Most of the time, news portals welcome and encourage comments because interactions boost web traffic.  However, a small portion of news stories have their commenting feature disabled.  There are two ways to implement this.  On NetEase, a notification reading "commenting is disabled" is placed under the story and the button for commenting is unavailable.  Sina takes a more subtle approach: no such notification appears, and users can submit comments as usual, but the comments are never displayed on the website.  These are pre-censorship techniques.  As to post-censorship, both websites simply remove comments quietly after publication.  A third type of manipulation is not passive pre- or post-censoring of comments but the proactive hiring of Internet commentators, the so-called 50 Cent Party, to propagate orthodox ideas endorsed by the government. 

The following time-series chart demonstrates the first type of comment manipulation: prohibiting comments.  In this way, party organs attempt to impose official opinions through one-way communication on issues such as North Korea, outlying provinces, controversial territories, and major criminal cases. 
More subtly, Sina "allows" comments but never shows some of them on the website.  I've figured out how to send parameters to the API to request the numbers of pre-censored comments, and I've drawn the following chart, which shows news stories with no comments at all even though their commenting function is nominally "available". 
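In code, that detection heuristic might look something like the following minimal sketch. The field names and waiting-period threshold are invented for illustration; this is not the schema or pipeline actually used in the study:

```python
# Sketch: flag stories whose commenting looks "available" yet which
# display zero comments after a waiting period, suggesting that
# submitted comments are being silently withheld (pre-censorship).
# All field names are illustrative, not a portal's real API schema.

def flag_precensored(stories, min_age_hours=24):
    """Return IDs of stories that accept comments but display none."""
    return [
        s["id"]
        for s in stories
        if s["commenting_enabled"]
        and s["displayed_comments"] == 0
        and s["age_hours"] >= min_age_hours
    ]

stories = [
    {"id": "a", "commenting_enabled": True,  "displayed_comments": 0,  "age_hours": 48},
    {"id": "b", "commenting_enabled": True,  "displayed_comments": 37, "age_hours": 48},
    {"id": "c", "commenting_enabled": False, "displayed_comments": 0,  "age_hours": 48},
]
print(flag_precensored(stories))  # ['a']
```

Story "c" is excluded because its disabled commenting is the overt (NetEase-style) case; the heuristic targets only the covert Sina-style pattern.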


The third time-series chart shows the volume of comment deletions on a weekly basis.  The topics found in the deleted comments align fairly well with those of the deleted news stories. 


This study was funded by the Google Policy Fellowship 2012 and conducted in collaboration between the Quello Center for Telecom Management and Law at MSU and the Center for Communication Research at the City University of Hong Kong.  Please send your comments and questions to songyan at msu dot edu.  Thank you for reading this post.  

by 2014 Berkman Fellow at Harvard; 2013 Knight-Mozilla OpenNews Fellow; 2012 Google Policy Fellow; PhD Candidate in Media and Information at MSU; @sonya2song at December 17, 2014 04:17 AM

December 16, 2014

Bruce Schneier
Effects of Terrorism Fears

Interesting article: "How terrorism fears are transforming America's public space."

I am reminded of my essay from four years ago: "Close the Washington Monument."

by Bruce Schneier at December 16, 2014 10:50 PM

Sara M. Watson
Dada Data and the Internet of Paternalistic Things

This piece of speculative fiction exploring a possible data-driven future first appeared in Internet Monitor project's second annual report, Internet Monitor 2014: Reflections on the Digital World. Check it out for more from my Berkman colleagues on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.



My stupid refrigerator thinks I’m pregnant.

I reached for my favorite IPA, but the refrigerator wouldn’t let me take one from the biometrically authenticated alcohol bin. 

Our latest auto-delivery from peaPod included pickles, orange juice, and prenatal vitamins. We never have orange juice in the house because I find it too acidic. What machine-learning magic produced this produce? 

And I noticed the other day that my water target had changed on my Vessyl, and I wasn’t sure why. I figured I must have just been particularly dehydrated. 

I guess I should have seen it coming. Our Fountain tracking toilet noticed when I got off hormonal birth control and got an IUD instead. But I thought our toilet data was only shared between Nest and our doctors? What tipped off our Samsung fridge? 

I got a Now notification that I was ovulating a few weeks ago. I didn’t even know it had been tracking my cycle, let alone by basal body temperature through my wearable iRing. I certainly hadn’t turned that feature on. We’re not even trying to have a baby right now. Or maybe my Aria scale picked up on some subtle change in my body fat? 

Or maybe it was ComWarner? All our appliances are hooked up through one @HomeHub. I didn’t think twice about it because it just worked—every time we upgraded the dishwasher, the thermostat. Could it be that the @HomeHub is sharing data between the toilet and our refrigerator? 

I went into our @HomeHub interface. It showed a bunch of usage graphs (we’ve been watching a “below average” amount of TV lately), but I couldn’t find anything that looked like a pregnancy notification. Where was this bogus conception data coming from? 

My iWatch pinged me. The lights in the room dimmed, and a connected aromatherapy candle lit up. The heart monitor on my bra alerted me that my heart rate and breathing were irregular, and that I should stop for some meditative breathing. I sat down on my posture-tracking floor pillow, and tried to sink in.

But I couldn’t keep my mind from wandering. Was it something in the water? Something in my Snap-Texts with Kathryn? If it was true, why hadn’t my doctor called yet? Could I actually be pregnant? 

I turned on the TVTab to distract me, but I was bombarded with sponsored ads for “What to Expect When You’re Expecting 9.0” and domain squatter sites that search for a unique baby name. 

I searched for similar incidents on the Quorums: “pregnancy Samsung refrigerator,” “pregnancy Fountain toilet.” Nothing. I really wanted to talk to someone, but I couldn’t call Google because they don’t have customer service for @HomeHub products. I tried ComWarner. After waiting for 37 minutes to speak with a representative, I was told that he couldn’t give out any personal data correlations over the phone. What bureaucratic bullshit! 

It can’t be true. Russell has been away in Addis Ababa on business for the past three weeks. And I’ve still got the IUD. We aren’t even trying yet. This would have to be a bio-correlative immaculate conception. 

I tapped Russell on his iWatch three times, our signal to call me when he is done with his meeting. I was freaking out. 

I could have really used that beer. But the fridge still wouldn’t let me take it. What if I am really pregnant? I opened up Taskr to see if I could get an old-fashioned pregnancy test delivered, but the price was three times what it normally would be. I considered CVS, but I thought better of it since you can’t go in there anymore without a loyalty card. It was far, but I skipped the self-driving Uber shuttle and walked the extra mile to the place that accepts crypto, where I wouldn’t be tracked. I think. And that’s when I got the notification that my funding interview for my new project the following morning had been canceled. 


Read more in the Berkman Center’s Internet Monitor 2014: Reflections on the Digital World.

by Sara M. Watson at December 16, 2014 05:22 PM

Mapping the Data Ecosystem

This first appeared in Internet Monitor project's second annual report, Internet Monitor 2014: Reflections on the Digital World. Check it out for more from my Berkman colleagues on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.


What would it take to map the Internet? Not just the links connecting the web of sites to each other, or some map of the network of networks. That’s hard enough in itself. 

What if we were to map the flows of data around the Internet? Not just delivering packets, but what those packets contain, where they propagate, how they are passed on, and to what ends they are used. 

Between our browser history, cookies, social platforms, sensors, brokers, and beyond, there are myriad parties with economic interests in our data. How those parties interconnect and trade in our data is, for the most part, opaque to us. 

The data ecosystem mirrors the structure of the Internet. No single body has dominion or a totalizing view over the flows of information. That also means that no one body is accountable for quality or keeping track of data as it changes hands and contexts. 

Data-driven companies like Facebook, Google, Acxiom, and others are building out their proprietary walled gardens of data. They are doing everything they can to control for privacy and security while also keeping control over their greatest assets. Still, they aren’t held accountable for the ads individuals purchase and target on their platforms, or for tertiary uses of data once it leaves their kingdom. 

Complexity obscures causality. So many variables are fed into the algorithm and spit back out on a personalized, transient platform that no one can tell you exactly why you saw one post over another one in the feed or that retargeted ad over this one. We conjure up plausible explanations and grasp at folk theories that engineers offer up to explain their outputs. 

We have given data so much authority without any of the accountability we need to have confidence in its legitimacy to govern our lives. 

As everything, refrigerators and crockpots included, expands the Internet and the ecosystem of data that runs on top of it, everything will leave a data trail. Going forward we have to assume that what can be codified and digitized will become data. What matters is how that data will be used, now and in the future. 

The potential harms are hard to pin down, primarily because we won’t know when they are happening. We can’t investigate discrimination that replaces pre-digital prejudice markers like race and sex with proxies correlated from behavioral data. And we run into invisible walls based on statistical assumptions that anticipate our needs but get us wrong if we fall outside the curve. It’s nearly impossible to catch these slights and even harder to develop normative stances on grounds we cannot see. 

Before we can start to discuss normative judgments about the appropriate uses of data, we have to understand the extent of what is technically possible. We cannot hope to regulate the misuse of data without means to hold all interconnected parties accountable for the uses and flows of data.

We need to map these relationships and data patterns. Who are the parties involved? How are they collecting, cleansing, inferring and interpreting data? To what ends is the data being used? 

Linked Data is one technical solution to this problem. Standards make data flows both machine readable and human legible. Policies that travel as metadata are another approach to distributed accountability. We can also hold some of the largest brokers and users of data to higher standards of ethics. But markets of users won’t move against these systems until we have a better map of the ecosystem. 


Read more in the Berkman Center’s Internet Monitor 2014: Reflections on the Digital World.

by Sara M. Watson at December 16, 2014 05:21 PM

Tim Davies
Internet Monitor 2014 chapter on Data Revolutions: Bottom-Up Participation or Top-Down Control?

[Summary: cross-posting an article from the 2014 Internet Monitor]

The 2014 Internet Monitor Report has just been launched. It’s packed with over 35 quick reads on the landscape of contemporary Internet & Society issues, from platforms and policy, to public discourse. This year's edition also includes a whole section on ‘Data and privacy’. My article in the collection, written earlier this year, is reproduced below for the archive. I encourage you to explore the whole collection – including some great inputs from Sara Watson and Malavika Jayaram exploring how development agencies are engaging with data, and making the case for building better maps of the data landscape to inform regulation and action.

Data Revolutions: Bottom-Up Participation or Top-Down Control?

In September 2015, through the United Nations, governments will agree upon a set of new Sustainable Development Goals (SDGs) replacing the expired Millennium Development Goals and setting new globally agreed targets on issues such as ending poverty, promoting healthy lives, and securing gender equality.[1] Within debates over what the goals should be, discussions of online information and data have played an increasingly important role.

Firstly, there have been calls for a “Data Revolution” to establish better monitoring of progress towards the goals: both strengthening national statistical systems and exploring how “big data” digital traces from across the Internet could enable real-time monitoring.[2] Secondly, the massive United Nations-run MyWorld survey, which has used online, mobile, and offline data collection to canvass over 4 million people across the globe on their priorities for future development goals, consistently found “An honest and accountable government” amongst people’s top five priorities for the SDGs.[3] This has fueled advocacy calls for explicit open government goals requiring online disclosure of key public information such as budgets and spending in order to support greater public oversight and participation.

These two aspects of “data revolution” point to a tension in the evolving landscape of governments and data. In the last five years, open data movements have made rapid progress spreading the idea that government data (from data on school and hospital locations to budget datasets and environmental statistics) should be “open by default”: published online in machine-readable formats for scrutiny and re-use. However, in parallel, cash-strapped governments are exploring the greater use of private sector data as inputs to the policy process, experimenting with data from mobile networks, social media sites, and credit reference agencies amongst others (sometimes shared by those providers under the banner of “data philanthropy”). As both highly personal and commercially sensitive data, these datasets are unlikely to ever be shared en masse in the public domain, although this proprietary data may increasingly drive important policy making and implementation.

In practice, the evidence so far suggests that the “open by default” idea is struggling to translate into widespread and sustainable access to the kinds of open data citizens and civil society need to hold powerful institutions to account. The multi-country Open Data Barometer study found that key accountability datasets such as company registers, budgets, spending, and land registries are often unavailable, even where countries have adopted open data policies.[4] And qualitative work in Brazil has found substantial variation in how the legally mandated publication of spending data operates across different states, frustrating efforts to build up a clear picture of where public money flows.[5] Furthermore, studies regularly emphasize the need not only to have data online, but also the need for data literacy and civil society capacity to absorb and work with the data that is made available, as well as calling for the creation of intermediary ecosystems that provide a bridge between “raw” data and its civic use.

Over the last year, open data efforts have also had to increasingly grapple with privacy questions.[6] Concerns have been raised that even “non-personal” datasets released online for re-use could be combined with other public and private data and used to undermine privacy.[7] In Europe, questions over what constitutes adequate anonymization for opening public data derived from personally identifying information have been hotly debated.[8]
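One concrete criterion that comes up in these anonymization debates is k-anonymity. The following is a minimal sketch of the check, with invented field names; passing it alone does not make a release safe, for exactly the linkage concerns raised above:

```python
from collections import Counter

# Sketch of a k-anonymity check: every combination of quasi-identifier
# values (e.g. postcode + birth year) must be shared by at least k
# records, so no individual is singled out by those attributes alone.
# Field names are invented for illustration.
def is_k_anonymous(records, quasi_ids, k):
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {"postcode": "NW1", "birth_year": 1980, "diagnosis": "flu"},
    {"postcode": "NW1", "birth_year": 1980, "diagnosis": "asthma"},
    {"postcode": "SE5", "birth_year": 1975, "diagnosis": "flu"},
]
# The lone SE5 record is unique on its quasi-identifiers, so k=2 fails.
print(is_k_anonymous(records, ["postcode", "birth_year"], 2))  # False
```

Even a dataset that passes this test can be re-identified by joining it against other public or private datasets, which is why the check is necessary but not sufficient.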

The web has clearly evolved from a platform centered on documents to become a data-rich platform. Yet, it is public policy that will shape whether it is ultimately a platform that shares data openly about powerful institutions, enabling bottom up participation and accountability, or whether data traces left online become increasingly important, yet opaque, tools of governance and control. Both open data campaigners and privacy advocates have a key role in securing data revolutions that will ultimately bring about a better balance of power in our world.


  • 1: UN High-Level Panel of Eminent Persons on the Post-2015 Development Agenda, “A New Global Partnership: Eradicate poverty and transform economies through sustainable development,” 2013, HLP_P2015_Report.pdf.
  • 2: Independent Expert Advisory Group on the Data Revolution,
  • 3: MyWorld Survey,
  • 4: World Wide Web Foundation, “Open Data Barometer,” 2013, http://www.opendatabarometer.org.
  • 5: N. Beghin and C. Zigoni, “Measuring open data’s impact of Brazilian national and sub-national budget transparency websites and its impacts on people’s rights,” 2014,
  • 6: Open Data Research Network, “Privacy Discussion Notes,” 2013, open-data-privacy-discussion-notes.
  • 7: Steve Song, “The Open Data Cart and Twin Horses of Accountability and Innovation,” June 19, 2013, https://
  • 8: See the work of the UK Anonymisation Network,

(Article under Creative Commons Attribution 3.0 Unported)

by Tim at December 16, 2014 03:31 PM

Berkman Center front page
2014 Internet Monitor Annual Report: “Reflections on the Digital World”

Internet Monitor is delighted to announce the publication of Internet Monitor 2014: Reflections on the Digital World, the project's second annual report. The report is a collection of roughly three dozen short contributions that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

The publication, intended for a general interest audience, covers a broad range of issues and regions, including an examination of Europe’s “right to be forgotten," a review of the current state of mobile security, an exploration of a new wave of movements attempting to counter hate speech online, and a speculative fiction story exploring what our increasingly data-driven world might bring. The report focuses on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.

This year we are especially excited to share our "Year in Review" interactive timeline, which highlights the year's most fascinating Internet-related news stories, from censorship to Heartbleed to the Pirate Bay raid just last week. We've also included a “By the Numbers” section that is slightly tongue-in-cheek and offers a look at the year’s important digital statistics such as the number of tweets per minute in 2014 (up 155,000 from last year) and the number of the top 100 accounts on Twitter that belong to Bollywood stars.

The full report, individual chapters, and interactive timeline are available at the Internet Monitor website.

About Internet Monitor
Internet Monitor, based at the Berkman Center for Internet & Society, is a research project to evaluate, describe, and summarize the means, mechanisms, and extent of Internet content controls and Internet activity around the world. The project compiles and curates data from multiple sources, including primary data collected by the Berkman Center and our partners, as well as relevant secondary data. The Internet Monitor platform is a freely available online fact base that gives policy makers, digital activists, researchers, and user communities an authoritative, independent, and multi-faceted set of quantitative data on the state of the global Internet. Internet Monitor also provides expert analysis on the state of the global Internet via our special report series and our annual reports on notable events and trends in the digital space.


by rheacock at December 16, 2014 03:00 PM

RB214: CopyrightXXX

From the Radio Berkman podcast:

Not long ago, illegally downloading a movie could land you in court facing millions of dollars in fines and jailtime. But Hollywood has begun to weather the storm by offering alternatives to piracy — same day digital releases, better streaming, higher quality in-theater experiences — that help meet some of the consumer demand that piracy captured.

But the porn industry is not Hollywood.

While the web has created incredible new economic opportunities for adult entertainers — independent production has flourished, as well as new types of production, which we won’t go into here simply to preserve our G-rating — few other industries on the web face the glut of competition from services that offer similar content for free or in violation of copyright.

Simply put, there’s so much free porn on the net that honest pornographers can’t keep up.

It’s hard to get accurate numbers on how much revenue is generated from online porn. It’s believed to be in the billions, at least in the United States. But it’s even more difficult to get a picture of how much revenue is lost in the adult entertainment industry due to copyright violation.

Surprisingly though, the porn industry doesn’t seem that interested in pursuing copyright violators. Intellectual property scholar Kate Darling studied how the industry was responding to piracy, and it turned out that — by and large — adult entertainment creators ran the numbers and found that it simply cost them more to fight copyright violators than it was worth.

For today’s episode, Berkman alum and journalist Leora Kornfeld sat down with Kate Darling to talk to her about how porn producers are losing the copyright battle, and why many don’t care.


by djones at December 16, 2014 01:47 PM

Berkman Community Newcomers: Nathanial Freitas

This post is part of a series featuring interviews with some of the fascinating individuals who joined our community for the 2014-2015 year. Conducted by our 2014 summer interns (affectionately known as "Berkterns"), these snapshots aim to showcase the diverse backgrounds, interests, and accomplishments of our dynamic 2014-2015 community.

Profile of Nathanial Freitas

Berkman fellow and director of the Guardian Project
Interviewed in summer 2014 by Berkterns Anna Myers and Brett Weinstein

In what is possibly Nathanial Freitas’s earliest public talk on technology, seven-year-old Nathanial discusses string variables and demonstrates how to program on an Apple II computer.  He also shares with the viewers of the local public access show his aspirations to become a computer programmer. Technology was just part of life for Freitas, who always had a computer while growing up in Northern California.

Freitas achieved his early career goal to become a computer programmer and now uses technology to further human rights through the Guardian Project and the Tibet Action Institute. At the Berkman Center, Freitas will focus on giving “liberation tech” tool developers insight into the legal risks their users face by developing an online resource to assist in mapping the intersection between cryptography, communications law, and actual enforcement.

Nathanial Freitas uses technology to further human rights because technology is integral to humanity and culture.

Freitas’s passion for human rights oriented technology stems from the “do no harm” ideology. Freitas designs his technology with the legal and cultural barriers his users may face in mind. Specifically, his work with the Tibet Action Institute promotes the use of technology to support the free flow of information and ideas within the Tibet movement. The Tibet Action Institute also provides online security and safety education to the next generation of Tibetan leaders.

Nathanial Freitas designs technology with the legal landscape in mind to promote accountability in the global community.

In the United States there is legal protection under the First Amendment for freedom of speech, expression, assembly, and the press. In countries without these freedoms, technology users are subject to government monitoring and surveillance to enforce such legal restrictions. Freitas developed a secure smart camera, called ObscuraCam, to circumvent some of these challenges. ObscuraCam can automatically pixelate or black out faces it detects in images to protect the identity of, for example, a human rights protester. ObscuraCam also can upload footage slowly so that it appears like normal Internet traffic and doesn’t raise any red flags. 
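The pixelation step can be illustrated with a short sketch. This is not ObscuraCam's actual code, and it omits face detection entirely; it only shows the core idea of averaging away identifying detail inside a detected bounding box:

```python
# Illustrative sketch of pixelation (not ObscuraCam's implementation):
# each small block inside a bounding box is replaced with its average
# value, destroying fine detail such as facial features. The "image"
# here is simply a list of rows of grayscale pixel values.
def pixelate_region(img, x, y, w, h, block=4):
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            # Collect the pixels of one block, clipped to the box edge.
            cells = [(r, c)
                     for r in range(by, min(by + block, y + h))
                     for c in range(bx, min(bx + block, x + w))]
            avg = sum(img[r][c] for r, c in cells) // len(cells)
            for r, c in cells:
                img[r][c] = avg
    return img

# Toy 4x8 "image": pixelating the left 4x4 region flattens it to one value.
img = [[col for col in range(8)] for _ in range(4)]
pixelate_region(img, 0, 0, 4, 4)
print(img[0][:4])  # [1, 1, 1, 1]
```

The pixels outside the bounding box are untouched, which matches the selective nature of the tool: only the detected face region is obscured.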

Nathanial Freitas provides open source security tools free of cost because it allows him to help the greatest number of people possible.

Open source code allows for more trust because your users can see the code. In contrast, companies without open source code rely on the reputation of the individuals involved to gain the trust of their users. When companies acquire legal protections for projects they later abandon, it can prevent advancement. The code from those projects could be useful to others in ways that cannot be predicted and those benefits remain unrealized because the code is buried under corporate intellectual property protections.

In his own words from his Berkman Fellow application, Freitas seeks “to understand better the different global, legal, and cultural contexts in which tools for privacy, security and expression are utilized for social change… While the tool builder’s goal is to develop and provide a tangible tool for someone to fight back against oppression and corruption with, they are often unwittingly turning those they want to help into practitioners of a type of civil disobedience without explaining to them what the risks of that are.”

Freitas looks forward to finding allies, wisdom, and guidance among others at the Berkman Center during his fellowship.

by ctian at December 16, 2014 11:57 AM

Nick Grossman
Web platforms as regulatory systems

This is part 3 in a series of posts I’m developing into a white paper on “Regulation 2.0″ for the Program on Municipal Innovation at the Harvard Kennedy School of Government.  For many tech industry readers of this blog, these ideas may seem obvious, but they are not intended for you!  They are meant to help bring a fresh perspective to public policy makers who may not be familiar with the trust and safety systems underpinning today’s social/collaborative web platforms.

Twice a year, a group of regulators and policymakers convenes to discuss their approaches to ensuring trust, safety and security in their large and diverse communities. Topics on the agenda range from financial fraud, to bullying, to free speech, to transportation, to child predation, to healthcare, to the relationship between the community and law enforcement.

Each is experimenting with new ways to address these community issues. As their communities grow (very quickly in some cases), and become more diverse, it’s increasingly important that whatever approaches they implement can both scale to accommodate large volumes and rapid growth, and adapt to new situations. There is a lot of discussion about how data and analytics are used to help guide decisionmaking and policy development. And of course, they are all working within the constraints of relatively tiny staffs and relatively tiny budgets.

As you may have guessed, this group of regulators and policymakers doesn’t represent cities, states or countries. Rather, they represent web and mobile platforms: social networks, e-commerce sites, crowdfunding platforms, education platforms, audio & video platforms, transportation networks, lending, banking and money-transfer platforms, security services, and more. Many of them are managing communities of tens or hundreds of millions of users, and are seeing growth rates upwards of 20% per month. The event is Union Square Ventures’ semiannual “Trust, Safety and Security” summit, where each company’s trust & safety, security and legal officers and teams convene to learn from one another.

In 2010, my colleague Brad Burnham wrote a post suggesting that web platforms are in many ways more like governments than traditional businesses. This is perhaps a controversial idea, but one thing is unequivocally true: like governments, each platform is in the business of developing policies which enable social and economic activity that is vibrant and safe.

The past 15 or so years have been a period of profound and rapid “regulatory” innovation on the internet. In 2000, most people were afraid to use a credit card on the internet, let alone send money to a complete stranger in exchange for some used item. Today, we’re comfortable getting into cars driven by strangers, inviting strangers to spend an evening in our apartments (and vice versa), giving direct financial support to individuals and projects of all kinds, sharing live video of ourselves, taking lessons from unaccredited strangers, etc. In other words, the new economy being built in the internet model is being regulated with a high degree of success.

Of course, that does not mean that everything is perfect and there are no risks. On the contrary, every new situation introduces new risks. And every platform addresses these risks differently, and with varying degrees of success. Indeed, it is precisely the threat of bad outcomes that motivates web platforms to invest so heavily in their “trust and safety” (i.e., regulatory) systems & teams. If they are not ultimately able to make their platforms safe and comfortable places to socialize & transact, the party is over.

As with the startup world in general, the internet approach to regulation is about trying new things, seeing what works and what doesn’t work, and making rapid (and sometimes profound) adjustments. And in fact, that approach (watch what’s happening, then correct for bad behavior) is the central idea.

So: what characterizes these “regulatory” systems? There are a few common characteristics that run through nearly all of them:

Built on information: The foundational characteristic of these “internet regulatory systems” is that they wouldn’t be possible without large volumes of real-time data describing nearly all activity on the platform (when we think about applying this model to the public sector this raises additional concerns, which we’ll discuss later). This characteristic is what enables everything that follows, and is the key idea distinguishing these new regulatory systems from the “industrial model” regulatory systems of the 20th century.

Trust by default (but verify): Once we have real-time and relatively complete information about platform/community activity, we can radically shift our operating model. We can then, and only then, move from an “up front permission” model to a “trust but verify” model. Following from this shift are two critical capabilities: a) the ability to operate at a very large scale, at low cost, and b) the ability to explicitly promote “innovation” by not prescribing outcomes from the get-go.

Busier is better: It’s fascinating to think about systems that work better the busier they are. Subways, for instance, can run higher-frequency service during rush hour due to steady demand, thereby speeding up travel times when things are busiest. Contrast that to streets which perform the worst when they are needed most (rush hour). Internet regulatory systems — and eventually all regulatory systems that are built on software and data — work better the more people use them: they are not only able to scale to handle large volumes, but they learn more the more use they see.

Responsive policy development: Now, given that we have high quality, relatively comprehensive information, we’ve adopted a “trust but verify” model that allows for many actors to begin participating, and we’ve invited as much use as we can, we’re able to approach policy development from a very different perspective. Rather than looking at a situation and debating hypothetical “what-ifs”, we can see very concretely where good and bad activity is happening, and can begin experimenting with policies and procedures to encourage the good activity and limit the bad.
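The “trust by default, verify after the fact” loop described above can be reduced to a toy model. All names and thresholds below are invented for illustration; real trust and safety systems are far more elaborate:

```python
from collections import defaultdict

# Toy model of "trust by default (but verify)": every action proceeds
# immediately and is logged; verification happens after the fact, and
# only verified repeat offenders are blocked from acting again. The
# flag threshold and the looks_bad signal (standing in for an anomaly
# detector over the activity log) are invented for illustration.
class TrustAndSafety:
    def __init__(self, flag_threshold=3):
        self.flag_threshold = flag_threshold
        self.flags = defaultdict(int)
        self.suspended = set()

    def act(self, user, looks_bad=False):
        if user in self.suspended:
            return "blocked"        # verified bad actor: permission revoked
        if looks_bad:               # post-hoc review of logged activity
            self.flags[user] += 1
            if self.flags[user] >= self.flag_threshold:
                self.suspended.add(user)
        return "allowed"            # trust by default: no up-front gate

ts = TrustAndSafety()
print([ts.act("mallory", looks_bad=True) for _ in range(4)])
# ['allowed', 'allowed', 'allowed', 'blocked']
```

Note that nothing is asked of a user before their first action; the cost of entry is near zero, and enforcement only kicks in once logged behavior justifies it. That asymmetry is what lets such systems scale cheaply while still correcting for bad behavior.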

If you are thinking: wow, that’s a pretty different, and powerful but very scary approach, you are right! This model does a lot of things that our 20th century common sense should be wary of. It allows for widespread activity before risk has been fully assessed, and it provides massive amounts of real-time data, and massive amounts of power, to the “regulators” who decide the policies based on this information.

So, would it be possible to apply these ideas to public sector regulation? Can we do it in such a way that actually allows for new innovations to flourish, pushing back against our reflexive urge to de-risk all new activities before allowing them? Can & should the government be trusted with all of that personal data? These are all important questions, and ones that we’ll address in forthcoming sections. Stay tuned.

by Nick Grossman at December 16, 2014 11:56 AM

December 15, 2014

RB 214: CopyrightXXX
Listen or download | …also in Ogg. Not long ago, illegally downloading a movie could land you in court facing millions of dollars in fines and jail time. But Hollywood has begun to weather the storm by offering alternatives to piracy — same-day digital releases, better streaming, higher-quality in-theater experiences — that help meet some […]

by Berkman Center for Internet & Society at Harvard Law School at December 15, 2014 08:00 PM

Berkman Center front page
Berkman Buzz: December 15, 2014

The Berkman Buzz is a weekly collection of work and conversations from around the Berkman community.

New Radio Berkman Episode
After a short hiatus, the Radio Berkman podcast is back. In this new episode, we talk with intellectual property scholar and Berkman Fellow Kate Darling about her research on copyright violations and the adult entertainment industry. Darling looked into how the industry was responding to piracy and found out that by and large, it wasn't. Listen to the interview

Hasit Shah analyzes the digital news delivery landscape in India

Indians love the news. Uniquely for any of the world's major nations, their newspaper industry has been growing and TV news ratings are up.

Now, with smartphone sales booming and half of the population—600 million people—under the age of 25, there is a digital news market in India that will surely continue to expand.

But it is also perhaps the most uniquely difficult digital audience to reach in the world. More than a billion people in India still aren't connected to the Internet. Three hundred million don't have electricity and a similar number can't read. For some, other areas of development are a greater priority: half the population doesn't even have a toilet at home.

From his article for the HBS Digital Initiative, "Digital News, Devices, and Design Thinking in India"
About Hasit | @HasitShah

Amanda Palmer writes about art and business

The first rule of Art Club? Don't talk about how you run Art Club - that is, don't talk about your risks, your losses and definitely don't discuss your eccentric shortcuts or the expenditures that ultimately win you a customer base. You probably want to avoid even calling them "customers", even though that's precisely what your fans are at the point of sale. Even though they may - if you've developed a friendly relationship with them - take pride in their role as buyers of your art.

The mostly-unspoken rule that artists aren't supposed to talk about their businesses reveals plenty about how we tend to think of "art" and "business" as mutually exclusive - and have double (or even triple) standards about what artists are and are not allowed to say about their money and still be considered artists.

From her Guardian piece, "Art is a business - and, yes, artists have to make difficult, honest business decisions"
About Amanda | @amandapalmer

Susan Crawford defends NYC mayor's Wi-Fi plan


Last week, City Controller Scott Stringer and the five borough presidents called upon Mayor de Blasio to substantially revise his plans to transform payphones across the city into wireless hotspots - with their criticism rooted in the notion that the LinkNYC system is somehow unfair to low-income New Yorkers.

This is a deeply misinformed attack on a visionary plan - an attack that, if successful, could widen, not shrink, the digital divide over the long term.

New York City is miles from the global cutting edge when it comes to Internet access: People in Hong Kong pay about $35 a month for Internet access service, with equal download and upload speeds of 500 Mbps.

From her New York Daily News piece, "Taking cheap shots at a visionary plan"
About Susan | @scrawford

The Cyberlaw Clinic files amicus letter concerning anti-SLAPP law in CA

On Friday the Cyberlaw Clinic filed an amicus letter on behalf of Global Voices Advocacy and the Media Legal Defence Initiative in an important case concerning anti-SLAPP law in California, currently being petitioned for review by the Supreme Court of California. Anti-SLAPP laws exist in numerous states to protect those speaking in government proceedings or on matters of public concern from facing frivolous lawsuits designed to dissuade them from speaking out. ("SLAPP" is an acronym for "Strategic Lawsuits Against Public Participation.") In order to quickly remove vexatious lawsuits while allowing valid claims to go through, courts considering an anti-SLAPP motion require plaintiffs to show that a lawsuit has merit before allowing the litigation to go forward. Under California's anti-SLAPP law, this means the plaintiff must state and substantiate all elements of their claim if they want to proceed. When a lawsuit is based on a claim of defamation, this includes proving that the speaker acted with fault, either with negligence or "actual malice."

From the blog post, "Protecting Anonymous Speech Under California's Anti-SLAPP Law"
About the Cyberlaw Clinic | @cyberlawclinic

Digital Problem Solving Initiative teams share progress


On Thursday, December 4, members of the Digital Problem-Solving Initiative (DPSI) community gathered to hear from members of the seven DPSI teams. DPSI teams feature a diverse group of learners (students, faculty, fellows, and staff) working on projects addressing problems and opportunities across the university. DPSI participants have had the novel opportunity to enhance and cultivate competency in various digital literacies as teams engage with research, design, and policy relating to the digital world.

Each team had 5 minutes to present and 5 minutes of feedback from the DPSI community audience.

From the DPSI blog post, "DPSI Final Presentations"
About DPSI

Bruce Schneier argues that concern about online privacy is growing


There's a new international survey on Internet security and trust, of "23,376 Internet users in 24 countries," including "Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States." Amongst the findings, 60% of Internet users have heard of Edward Snowden, and 39% of those "have taken steps to protect their online privacy and security as a result of his revelations."

The press is mostly spinning this as evidence that Snowden has not had an effect: "merely 39%," "only 39%," and so on. (Note that these articles are completely misunderstanding the data. It's not 39% of people who are taking steps to protect their privacy post-Snowden, it's 39% of the 60% of Internet users -- which is not everybody -- who have heard of him. So it's much less than 39%.)
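
Schneier's correction is easy to check with back-of-the-envelope arithmetic. A minimal sketch (the worldwide user count is my assumption, using a commonly cited rough 2014 figure of about 3 billion Internet users):

```python
# Back-of-the-envelope check of the survey arithmetic.
# The 39% figure applies to the 60% who have heard of Snowden,
# not to all Internet users.
internet_users = 3_000_000_000           # rough 2014 worldwide estimate (assumption)
heard_of_snowden = 0.60 * internet_users
took_steps = 0.39 * heard_of_snowden

print(round(took_steps / internet_users * 100, 1))  # 23.4 (percent of all users)
print(round(took_steps / 1e6))                      # 702 (million people)
```

That 23.4% of all users is "much less than 39%," yet it still works out to roughly 700 million people, which is the point of Schneier's headline.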

Even so, I disagree with the "Edward Snowden Revelations Not Having Much Impact on Internet Users" headline. He's having an enormous impact.

From his blog post, "Over 700 Million People Taking Steps to Avoid NSA Surveillance"
About Bruce | @schneierblog

Ukrainian Hackers Leak Russian Interior Ministry Docs with 'Evidence' of Russian Invasion

Hacking collectives on both sides of the Ukraine-Russia information war have been instrumental in revealing key facts and documents that some would prefer to remain hidden. The latest leak by Ukrainian hackers purports to reveal new evidence of Russian soldiers' presence in Ukraine.

On Friday, Ukrainian activist Evgeniy Dokukin and Ukrainian Cyber Forces, the hacktivist group he founded earlier this year, released 1.7GB of files taken from the Russian Interior Ministry. Later, Dokukin released an additional 34GB of data from the Interior Ministry servers, most of which has not yet been fully analyzed by journalists.

From Aric Toler's Global Voices article, "Ukrainian Hackers Leak Russian Interior Ministry Docs with 'Evidence' of Russian Invasion"
About Global Voices Online | @globalvoices

by gweber at December 15, 2014 07:07 PM

Nick Grossman
Technological revolutions and the search for trust

For the past several years, I have been an advisor to the Data-Smart City Solutions initiative at the Harvard Kennedy School of Government.  This is a group tasked with helping cities consider how to govern in new ways using the volumes of new data that are now available.  An adjacent group at HKS is the Program on Municipal Innovation (PMI), which brings together a large group of city managers (deputy mayors and other operational leaders) twice a year to talk shop.  I’ve had the honor of attending this meeting a few times in the past, and I must say it’s inspiring and encouraging to see urban leaders from across the US come together to learn from one another.

One of the PMI’s latest projects is an initiative on regulatory reform — studying how, exactly, cities can go about assessing existing rules and regulations, and revising them as necessary.  As part of this initiative, I’ve been writing up a short white paper on “Regulation 2.0” — the idea that government can adopt some of the “regulatory” techniques pioneered by web platforms to achieve trust and safety at scale.  Over the course of this week, I’ll publish my latest drafts of the sections of the paper.

Here’s the outline I’m working on:

  1. Regulation 1.0 vs. Regulation 2.0: an example
  2. Context: technological revolutions and the search for trust
  3. Today’s conflict: some concrete examples
  4. Web platforms as regulatory systems
  5. Regulation 2.0: applying the lessons of web platform regulation to the real world

Section 1 will be an adaptation of this post from last year.  My latest draft of section 2 is below.  I’ll publish the remaining sections over the course of this week.

As always, any and all feedback is greatly appreciated!


Technological revolutions and the search for trust

The search for trust amidst rapid change, as described in the Seattle ridesharing example, is not a new thing.  It is, in fact, a natural and predictable response to times when new technologies fundamentally change the rules of the game.

We are in the midst of a major technological revolution, the likes of which we experience only once or twice per century.  Economist Carlota Perez describes these waves of massive technological change as “great surges”, each of which involves “profound changes in people, organizations and skills in a sort of habit-breaking hurricane.”[1]

This sounds very big and scary, of course, and it is.  Perez’s study of technological revolutions over the past 250 years — five distinct great surges lasting roughly fifty years each — shows that as we develop and deploy new technologies, we repeatedly break and rebuild the foundations of society: economic structures, social norms, laws and regulations.  It’s a wild, turbulent and unpredictable process.

Despite the inherent unpredictability of new technologies, Perez found that each of these great surges does, in fact, follow a common pattern:

First, a new technology opens up a massive new opportunity for innovation and investment.  Second, the wild rush to explore and implement this technology produces vast new wealth, while at the same time causing massive dislocation and angst, often resulting in a bubble bursting and a recession.  Finally, broader cultural adoption paired with regulatory reforms sets the stage for a smoother and more broadly prosperous period of growth, resulting in the full deployment of the mature technology and all of its associated social and institutional changes.  And of course, by the time each fifty-year surge concluded, the seeds of the next one had been planted.


image: The Economist

So essentially: wild growth, societal disruption, then readjustment and broad adoption.  Perez describes the “readjustment and broad adoption” phase (the “deployment period” in the diagram above) as the percolating of the new “common sense” throughout other aspects of society:

“the new paradigm eventually becomes the new generalized ‘common sense’, which gradually finds itself embedded in social practice, legislation and other components of the institutional framework, facilitating compatible innovations and hindering incompatible ones.”[2]

In other words, once the established powers of the previous paradigm are done fighting off the new paradigm (typically after some sort of profound blow-up), we come around to adopting the techniques of the new paradigm to achieve the sense of trust and safety that we had come to know in the previous one.  Same goals, new methods.

As it happens, our current “1.0” regulatory model was actually the result of a previous technological revolution.  In The Search for Order: 1877-1920[3], Robert H. Wiebe describes the state of affairs that led to the progressive-era reforms of the early 20th century:

Established wealth and power fought one battle after another against the great new fortunes and political kingdoms carved out of urban-industrial America, and the more they struggled, the more they scrambled the criteria of prestige. The concept of a middle class crumbled at the touch. Small business appeared and disappeared at a frightening rate. The so-called professions meant little as long as anyone with a bag of pills and a bottle of syrup could pass for a doctor, a few books and a corrupt judge made a man a lawyer, and an unemployed literate qualified as a teacher.

This sounds a lot like today, right?  A new techno-economic paradigm (in this case, urbanization and inter-city transportation) broke the previous model of trust (isolated, closely-knit rural communities), resulting in a re-thinking of how to find that trust.  During the “bureaucratic revolution” of the early 20th century progressive reforms, the answer to this problem was the establishment of institutions — on the private side, firms with trustworthy brands, and on the public side, regulatory bodies — that took on the burden of ensuring public safety and the necessary trust & security to underpin the economy and society.

Coming back to today, we are currently in the middle of one of these 50-year surges — the paradigm of networked information — and roughly in the middle of the graph above: we’ve seen wild growth, intense investment, and profound conflicts between the new paradigm and the old.

What this paper is about, then, is how we might consider adopting the tools & techniques of the networked information paradigm to achieve the societal goals previously achieved through the 20th century’s “industrial” regulations and public policies.  A “2.0” approach, if you will, that adopts the “common sense” of the internet era to build a foundation of trust and safety.

Coming up: a look at some concrete examples of the tensions between the networked information era and the industrial era; a view into the world of web platforms’ “trust and safety” teams and the model of regulation they’re pioneering; and finally, some specific recommendations for how we might envision a new paradigm for regulation that embraces the networked information era.




  1. Perez, p.4
  2. Perez, p. 16
  3. Wiebe, p. 13.  Hat tip to Rit Aggarwala for this reference, and the idea of the “first bureaucratic revolution”

by Nick Grossman at December 15, 2014 05:29 PM

David Weinberger
[cluetrain] How Uber could end its PR nightmare

Uber’s hamfisted behavior continues to get it bad press. The latest: its “surge” pricing, algorithmically set according to demand, went up 400% in Sydney during the hostage-taking event.

Uber has responded appropriately, offering refunds, and providing free rides out of the area. At the same time, it’s keeping its pricing elevated to encourage more Uber drivers to get into their cars to pick up passengers there.

Some of my friends are suggesting that when someone at Uber notices surge prices spiking and it’s not snowing or rush hour, they ought to look into it. Fine, but here’s a radical idea for decentralizing that process:

Uber creates a policy that says that Uber drivers are first and foremost members of their community, and are thus empowered and encouraged to take the initiative in times of crisis, whether that’s to stop for someone in need on the street or to help the population get out of harm’s way during a civic emergency.

Then Uber rewards drivers for doing so.

That is, Uber’s new motto could be “Don’t be a dick.”
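
The centralized check my friends describe is simple enough to sketch. A minimal, hypothetical version of such a flag (the threshold, rush-hour windows, and function names are all illustrative assumptions, not anything Uber actually runs):

```python
# Hypothetical sketch: flag surge multipliers that spike outside
# expected demand windows (rush hour, bad weather) for human review.
RUSH_HOURS = set(range(7, 10)) | set(range(16, 19))  # illustrative assumption

def needs_review(surge_multiplier: float, hour: int, bad_weather: bool) -> bool:
    """Return True when a surge is unusual enough that someone should look into it."""
    if surge_multiplier < 2.0:           # modest surges are routine
        return False
    return hour not in RUSH_HOURS and not bad_weather

print(needs_review(4.0, hour=14, bad_weather=False))  # True: 400% surge, midday, clear skies
print(needs_review(2.5, hour=8, bad_weather=False))   # False: ordinary rush-hour surge
```

Of course, the whole point of the post is that pushing the judgment out to drivers may work better than any centralized rule like this one.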


And for the other side of humanity: The #illridewithyou [I’ll ride with you] hashtag — Sydney folks offering to accompany Muslims who fear a backlash — makes you proud to be a human.

by davidw at December 15, 2014 05:05 PM

Justin Reich
Some Thoughts and Data from Teach to One about Recent Study
New Classrooms, the non-profit behind Teach to One: Math, offers some additional data and insights about the recent study by Teachers College professor Douglas Ready.

by Justin Reich at December 15, 2014 03:47 PM

Bruce Schneier
Incident Response Webinar on Thursday

On 12/18 I'll be part of a Co3 webinar where we examine incident-response trends of 2014 and look ahead to 2015. I tend not to do these, but this is an exception. Please sign up if you're interested.

by Bruce Schneier at December 15, 2014 02:15 PM

NSA Hacking of Cell Phone Networks

The Intercept has published an article -- based on the Snowden documents -- about AURORAGOLD, an NSA surveillance operation against cell phone network operators and standards bodies worldwide. This is not a typical NSA surveillance operation where agents identify the bad guys and spy on them. This is an operation where the NSA spies on people designing and building a general communications infrastructure, looking for weaknesses and vulnerabilities that will allow it to spy on the bad guys at some later date.

In that way, AURORAGOLD is similar to the NSA's program to hack sysadmins around the world, just in case that access will be useful at some later date; and to the GCHQ's hacking of the Belgian phone company Belgacom. In both cases, the NSA/GCHQ is finding general vulnerabilities in systems that are protecting many innocent people, and exploiting them instead of fixing them.

It is unclear from the documents exactly what cell phone vulnerabilities the NSA is exploiting. Remember that cell phone calls go through the regular phone network, and are as vulnerable there as non-cell calls. (GSM encryption only protects calls from the handset to the tower, not within the phone operators' networks.) For the NSA to target cell phone networks particularly rather than phone networks in general means that it is interested in information specific to the cell phone network: location is the most obvious. We already know that the NSA can eavesdrop on most of the world's cell phone networks, and that it tracks location data.

I'm not sure what to make of the NSA's cryptanalysis efforts against GSM encryption. The GSM cellular network uses three different encryption schemes: A5/1, which has been badly broken in the academic world for over a decade (a previous Snowden document said the NSA could process A5/1 in real time -- and so can everyone else); A5/2, which was designed deliberately weak and is even more easily broken; and A5/3 (aka KASUMI), which is generally believed to be secure. There are additional attacks against all A5 ciphers as they are used in the GSM system known in the academic world. Almost certainly the NSA has operationalized all of these attacks, and probably others as well. Two documents published by the Intercept mention attacks against A5/3 -- OPULENT PUP and WOLFRAMITE -- although there is no detail, and thus no way to know how much of these attacks consist of cryptanalysis of A5/3, attacks against the GSM protocols, or attacks based on exfiltrating keys. For example, GSM carriers know their users' A5 keys and store them in databases. It would be much easier for the NSA's TAO group to steal those keys and use them for real-time decryption than it would be to apply mathematics and computing resources against the encrypted traffic.
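
Part of why A5/1 is so thoroughly broken is how small it is: three short linear feedback shift registers with majority-rule clocking, 64 bits of state in total. A minimal sketch of the generator (key/frame loading is omitted, and the starting state below is arbitrary, chosen only to illustrate the structure; the register lengths, tap positions, and clocking bits follow the published descriptions of A5/1):

```python
# Sketch of the A5/1 keystream generator: three short LFSRs
# (19, 22, and 23 bits) advanced by majority-rule clocking.
# Key/IV setup is omitted; the initial state is arbitrary.
def _clock(reg, taps, size):
    fb = 0
    for t in taps:                       # feedback = XOR of the tap bits
        fb ^= (reg >> t) & 1
    return ((reg << 1) | fb) & ((1 << size) - 1)

def a51_keystream(r1, r2, r3, n):
    out = []
    for _ in range(n):
        # majority of the three clocking bits (R1[8], R2[10], R3[10]);
        # each register steps only if its clocking bit agrees with the majority
        c1, c2, c3 = (r1 >> 8) & 1, (r2 >> 10) & 1, (r3 >> 10) & 1
        maj = (c1 & c2) | (c1 & c3) | (c2 & c3)
        if c1 == maj: r1 = _clock(r1, (13, 16, 17, 18), 19)
        if c2 == maj: r2 = _clock(r2, (20, 21), 22)
        if c3 == maj: r3 = _clock(r3, (7, 20, 21, 22), 23)
        # output bit = XOR of the three most significant bits
        out.append(((r1 >> 18) ^ (r2 >> 21) ^ (r3 >> 22)) & 1)
    return out

ks = a51_keystream(0x1234A, 0x2B3C4D, 0x4E5F60, 114)  # one GSM burst = 114 bits
print(len(ks))  # 114
```

With only 64 bits of state and this much linear structure, time-memory trade-off attacks against A5/1 have been practical for years, which is why real-time decryption is within reach of far more actors than the NSA.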

The Intercept points to these documents as an example of the NSA deliberately introducing flaws into global communications standards, but I don't really see the evidence here. Yes, the NSA is spying on industry organizations like the GSM Association in an effort to learn about new GSM standards as early as possible, but I don't see evidence of it influencing those standards. The one relevant sentence is in a presentation about the "SIGINT Planning Cycle": "How do we introduce vulnerabilities where they do not yet exist?" That's pretty damning in general, but it feels more aspirational than a statement of practical intent. Already there are lots of pressures on the GSM Association to allow for "lawful surveillance" on users from countries around the world. That surveillance is generally with the assistance of the cell phone companies, which is why hacking them is such a priority. My guess is that the NSA just sits back and lets other countries weaken cell phone standards, then exploits those weaknesses.

Other countries do as well. There are many vulnerabilities in the cell phone system, and it's folly to believe that only the NSA and GCHQ exploits them. And countries that can't afford their own research and development organization can buy the capability from cyberweapons arms manufacturers. And remember that technology flows downhill: today's top-secret NSA programs become tomorrow's PhD theses and the next day's hacker tools.

For example, the US company Verint sells cell phone tracking systems to both corporations and governments worldwide. The company's website says that it's "a global leader in Actionable Intelligence solutions for customer engagement optimization, security intelligence, and fraud, risk and compliance," with clients in "more than 10,000 organizations in over 180 countries." The UK company Cobham sells a system that allows someone to send a "blind" call to a phone -- one that doesn't ring, and isn't detectable. The blind call forces the phone to transmit on a certain frequency, allowing the sender to track that phone to within one meter. The company boasts government customers in Algeria, Brunei, Ghana, Pakistan, Saudi Arabia, Singapore, and the United States. Defentek, a company mysteriously registered in Panama, sells a system that can "locate and track any phone number in the world...undetected and unknown by the network, carrier, or the target." It's not an idle boast; telecommunications researcher Tobias Engel demonstrated the same capability at a hacker conference in 2008. Criminals can purchase illicit products to let them do the same today.

As I keep saying, we no longer live in a world where technology allows us to separate communications we want to protect from communications we want to exploit. Assume that anything we learn about what the NSA does today is a preview of what cybercriminals are going to do in six months to two years. That the NSA chooses to exploit the vulnerabilities it finds, rather than fix them, puts us all at risk.

This essay has previously appeared on the Lawfare blog.

by Bruce Schneier at December 15, 2014 05:09 AM

December 14, 2014

Bruce Schneier
Who Might Control Your Telephone Metadata

Remember last winter when President Obama called for an end to the NSA's telephone metadata collection program? He didn't actually call for an end to it; he just wanted it moved from an NSA database to some commercial database. (I still think this is a bad idea, and that having the companies store it is worse than having the government store it.)

Anyway, the Director of National Intelligence solicited companies who might be interested and capable of storing all this data. Here's the list of companies that expressed interest. Note that Oracle is on the list -- the only company I've heard of. Also note that many of these companies are just intermediaries that register for all sorts of things.

by Bruce Schneier at December 14, 2014 07:06 PM

David Weinberger
Jeff Jarvis on journalism as a service

My wife and I had breakfast with Jeff Jarvis on Thursday, so I took the opportunity to do a quick podcast with him about his new book Geeks Bearing Gifts: Imagining New Futures for News.

I like the book a lot. It proposes that we understand journalism as a provider of services rather than of content. Jeff then dissolves journalism into its component parts and asks us to imagine how they could be envisioned as sustainable services designed to help readers (or viewers) accomplish their goals. It’s more a brainstorming session (as Jeff confirms in the podcast) than a “10 steps to save journalism” tract, and some of the possibilities seem more plausible — and more journalistic — than others, but that’s the point.

If I were teaching a course on the future of journalism, or if I were convening my newspaper’s staff to think about the future of our newspaper, I’d have them read Geeks Bearing Gifts if only to blow up some calcified assumptions.

by davidw at December 14, 2014 03:50 PM

December 13, 2014

David Weinberger
[2b2k] The Harvard Business School Digital Initiative’s webby new blog

The Harvard Business School Digital Initiative [twitter:digHBS] — led by none other than Berkman‘s Dr. Colin Maclay — has launched its blog. The Digital Initiative is about helping HBS explore the many ways the Net is affecting (or not affecting) business. From my point of view, it’s also an opportunity to represent, and advocate for, Net values within HBS.[1] (Disclosure: I am officially affiliated with the Initiative as an unremunerated advisor. Colin is a dear friend.[2])

The new blog is off to a good start:

I also have a post there titled “Generative Business and the Power of What We Ignore.” Here’s how it starts:

“I ignore them. That’s my conscious decision.”

So replied CV Harquail to a question from HBS professor Karim Lakhani about the effect of bad actors in the model of “generative business” she was explaining in a recent talk sponsored by the Digital Initiative.

Karim’s question addressed an issue that more than one of us around the table were curious about. Given CV’s talk, the question was perhaps inevitable.

CV’s response was not inevitable. It was, in fact, surprising. And it seems to me to have been not only entirely appropriate, but also brave… [more]


[1] I understand that the Net doesn’t really have values. It’s shorthand.
[2] I’m very happy to say that more than half of the advisors are women.

by davidw at December 13, 2014 03:12 PM

Cézanne’s unfortunate wife

We went to the Metropolitan Museum of Art for its amazing, bottomless collection, but while we were there we visited the Madame Cézanne exhibit. It’s unsettling and, frankly, repellent.

Please note that I understand that I don’t know what I’m talking about. I’m the sort of museum-goer who likes the works that he likes. I can’t even predict what is going to touch me, much less make sense of it. Which is, I believe, more or less the opposite of how actual criticism works.

The Met has assembled twenty-four paintings and sketches by Cézanne of his wife Hortense. As compositions some are awesome (he is Cézanne after all), but as portraits they seem technically pretty bad: her face is sometimes unrecognizable from one picture to the next, even ones that were painted within a couple of years of one another.

Madame Cézanne (Hortense Fiquet, 1850–1922) in the Conservator

Hortense Fiquet in a striped skirt

But what does that matter so long as Cézanne has expressed her soul, or his feelings about her, or both? Or, in this case, neither. You stare at those portraits and ask what he loved in her. Or, for that matter, hated in her? Did he feel anything at all about her?

The exhibit’s helpful wall notes explain that in fact there seems to have been little love in their relationship, at least on his part. The NY Times review of the show musters all the sympathy it can for Hortense and is well worth reading for that.

We know little about Madame Cézanne. And we learn little more from these portraits. It is fine to say that Cézanne was interested in shape, form, and light, not personality. But the fact that he had her sit immobile for countless hours so he could paint a still life made of flesh is a problem, especially since Cézanne seems to have loved his peaches and pears more than he loved this woman.

Cézanne: Still life with apples

Here’s a little more eye-bleach for you: a quick Picasso painting of a sleeping woman who is yet more alive than Madame Cézanne as represented in her husband’s careful artistry:

Picasso's Repose


On the far more positive side, we also went to the Museum of Modern Art’s exhibit of Matisse’s cut-outs.

Matisse's cut-outs, at MOMA

I’ve always liked Matisse, but have never taken him too seriously because he seems incapable of conveying anything except joy — although a full range of joy, from the sensuous to the spiritual. I’m sure I’m not appreciating him fully, but no matter what, oh my, what a genius of shape and color. I didn’t want to leave.

If you can see this collection, do. So much fun.

by davidw at December 13, 2014 05:06 AM

December 12, 2014

Nick Grossman
The magic of making hard things easy

I wrote earlier this week about how life is, generally, hard.  There’s no question about that.

One of my favorite things about the Internet, and probably the most exciting thing about working in venture capital, is being around people who are working to re-architect the world to make hard things easier.  And by easier, I mean: by designing clever social / technical / collaborative hacks that redesign the problem and the solution.

Yesterday, I was out in SF for USV’s semiannual Trust, Safety and Security summit — Brittany runs USV portfolio summits twice a month and one of the ones I don’t miss is this one.  It brings together folks working on Trust and Safety issues (everything from fraud, to bullying, to child safety, to privacy) and Security issues (securing offices & servers; defending against hacker attacks, etc.).  Everyone learns from everyone else about how to get better at all of these important activities.

Trust, Safety and Security teams are the unsung heroes of every web platform.  What they do is largely invisible to end users, and you usually only hear about them when something goes wrong.  They are the ones building the internal systems that make it possible to buy from a stranger online, to get into someone’s car, to let your kid use the internet.  If web platforms were governments, they would be the legislature, law enforcement, national security, and social services.

Oftentimes at these summits, we bring in outside guests who have particular expertise in some area.  At yesterday’s summit, our guest was Alex Rice, formerly head of Product Security at Facebook, and now founder of HackerOne.  Side note: it was fascinating to hear about how Facebook bakes security into every product and engineering team — subject for a later post.  For today: HackerOne is a fascinating platform that takes something really hard — security testing — and architects it to be (relatively) easy, by incentivizing the identification and closing out of security holes in web applications and open source projects.

The magic of HackerOne is solving for incentives and awkwardness on both sides (tech companies and security researchers).  Security researchers are infamous for finding flaws in web platforms and then, if the platforms don’t respond and fix them, going public.  This is only a semi-effective system, and it’s very adversarial.  HackerOne solves for this by letting web platforms sign up (either publicly or privately) to attract hackers/researchers, mediating the process of identifying, fixing, and publicizing bugs, and paying out “bug bounties” to the hackers.  Platforms get stronger, hackers get paid.  In the year that it’s been operating, HackerOne has closed over 5,000 bugs and paid out over $1.6mm in bug bounties.
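
The mediated process described above can be pictured as a simple report lifecycle. A hypothetical sketch (the state names, transitions, and bounty step are my illustrative assumptions, not HackerOne's actual API):

```python
# Hypothetical sketch of a mediated disclosure workflow: each report
# moves through fixed states, with the bounty payout and public
# disclosure built into the process rather than ad-hoc email threads.
ALLOWED = {
    "new": {"triaged", "closed"},        # triage, or reject as invalid
    "triaged": {"resolved"},             # platform ships a fix
    "resolved": {"disclosed"},           # multi-party sign-off, then publish
    "disclosed": set(),
    "closed": set(),
}

class BugReport:
    def __init__(self, reporter, title):
        self.reporter, self.title = reporter, title
        self.state, self.bounty = "new", 0

    def advance(self, new_state, bounty=0):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
        self.bounty += bounty            # paid out when the fix ships
        return self

r = BugReport("researcher42", "XSS in profile page")
r.advance("triaged").advance("resolved", bounty=1500).advance("disclosed")
print(r.state, r.bounty)  # disclosed 1500
```

Encoding the lifecycle this way is what makes multi-party sign-off and payments tractable: an invalid jump (say, straight from "new" to "disclosed") is simply rejected.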

Thinking about this, it strikes me that there are a few common traits of platforms that successfully re-architect something from hard -> easy:

Structure and incentives: The secret sauce here is mediating the tasks in a new way, and cleverly building incentives for everyone to participate.  Companies don’t like to admit they might have security holes. They don’t like to engage with abrasive outside researchers.  Email isn’t a very accountable mode of communication for this.  But HackerOne is figuring out how to solve for that — if every company has a HackerOne page, there’s nothing to fear about having one.  Building a workflow around bug finding / solving / publicizing solves a lot of practical problems (like making payments and getting multi-party sign off on going public).  Money that’s small for a big company is big for an individual researcher — one hacker earned $20k in bug bounties in a single month, for a single company, recently.  Essentially, HackerOne is doing for security bugs what Stack Overflow has done for technical Q&A: take a messy, hard, unattractive problem with a not-very-effective solution and re-architect it to be easy, attractive and magical.

Vastly broadening the pool of participants:  After the summit, I asked Alex how old the youngest successful bug finder on the platform is.  Any guesses?  11.  Right: an 11 year old found a security hole in a website and got paid for it.  Every successful hard -> easy solution on the internet does this.  Another of my favorite examples is CrowdMed, where a community of solvers makes hard medical diagnoses that other specialists could not – 70% of the solvers are not doctors.  (They typically solve it with an “oh, my friend has those symptoms; maybe it’s ____” approach, which you can only do at web scale).

Deep personal experience: It takes a lot of subject matter expertise to get these nuances right.  It makes sense that Alex was a security specialist, that Joel at Stack Overflow has been building developer tools for nearly two decades, and that Jared at CrowdMed was inspired by his own sister’s experience with a rare, difficult-to-diagnose disease.  I would like to think that it’s also possible to do this without that deep expertise, but it seems clear that it helps a lot.

The fact that it’s not only possible to make hard things easy, but that smart people everywhere are building things that do it right now, is what gets me going every day.

by Nick Grossman at December 12, 2014 07:06 PM

Bruce Schneier
Friday Squid Blogging: Squid Poaching off the Coast of Japan

There has been an increase in squid poaching by North Korea out of Japanese territorial waters.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

by Bruce Schneier at December 12, 2014 03:41 PM

Regin Malware

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It's more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there's substantial evidence that it was built and operated by the United States.

This isn't the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the "cyberwar" metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.
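The signature databases mentioned here can be illustrated with a minimal sketch: compute a cryptographic hash of each file and compare it against a set of known-malware hashes. Real antivirus engines use far richer signatures (byte patterns, heuristics, behavioral rules), so this is illustrative only; the file names and payload below are invented.

```python
# Minimal sketch of signature-based malware detection: hash every file
# and flag any whose hash appears in the vendor's signature database.
# Illustrative only; real AV signatures are much more sophisticated.
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def scan(files: dict, signatures: set) -> list:
    """Return the names of files whose hash matches a known signature."""
    return [name for name, data in files.items()
            if sha256_bytes(data) in signatures]

# Hypothetical sample: one stand-in "malicious" payload, one benign file.
payload = b"stage1-module"                # stand-in for a malware sample
signatures = {sha256_bytes(payload)}      # the vendor's signature database

files = {"report.doc": b"quarterly numbers", "svchost32.dll": payload}
print(scan(files, signatures))  # ['svchost32.dll']
```

Adding a signature and actually telling the public about the malware are separate acts, which is exactly the distinction the essay goes on to draw.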

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin's existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom's network, didn't say anything about what it discovered because it "didn't want to interfere with NSA/GCHQ operations."

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It's much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules -- Fox IT called it "a full framework of a lot of species of malware" -- making it even harder to figure out what's going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn't. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

by Bruce Schneier at December 12, 2014 12:41 PM

December 11, 2014

The Future of State of the Re:Union

Dear Stations,

Seven years ago, the CPB-funded “Public Radio Talent Quest” went looking for new voices. A defining quality of the search was “hostiness,” people an audience would want to spend time with, and explore with. One of those voices was Al Letson, who created and has been producing State of the Re:Union since 2008 for NPR, PRX, and the more than 200 stations that have supported each season.

This winter, Al Letson will partner with PRX to host Reveal, a new, weekly investigative news program from The Center for Investigative Reporting and PRX. Watch for Reveal‘s debut in January 2015.

In turn, in Spring 2015, following the release of new Black History Month (February 2015) and National Poetry Month (April 2015) programs, State of the Re:Union will end production of its 10-programs-a-year seasons. However, Al and WJCT (SOTRU’s producing station) are exploring opportunities for additional SOTRU specials in 2015.

The 2014 fall season of five SOTRU programs is available now to all NPR Member Stations, on both Content Depot and PRX — have a listen now. It’s filled with the kind of work that won Al Letson and producer Laura Starecheski an Edward R. Murrow Award for the episode, “The Hospital Always Wins,” last season. SOTRU has been recognized with the Murrow two years in a row.

NPR and PRX’s collaboration with Al, CPB and WJCT/Jacksonville to share the program is something we’re all proud of. Keep an eye out for more on Al’s new show, as well as details on possible SOTRU specials in 2015. We thank the stations that have presented and will continue to present Al Letson’s work, and the man himself for telling the story of America, one community at a time.


Israel Smith, NPR
John Barth, PRX

The post The Future of State of the Re:Union appeared first on PRX.

by John at December 11, 2014 07:01 PM

Bruce Schneier
The Future of Auditory Surveillance

Interesting essay on the future of speech recognition, microphone miniaturization, and the future ubiquity of auditory surveillance.

by Bruce Schneier at December 11, 2014 10:26 AM

December 10, 2014

Justin Reich
Explaining MOOC Completion and Retention in the Context of Student Intent
A conversation with a journalist at the Chronicle of Higher Education reveals I could have done a better job explaining findings from a new paper about MOOC completion rates and student intentions.

by Justin Reich at December 10, 2014 08:13 PM

Willow Brugh
Museo aero solar

Years ago, after Chaos Congress, Rubin insisted we go to some art show. I, as always, preferred to stay at home — whatever continent, country, city home might be in that day. But Rubin can be lovingly persistent. It would be worth it. It would be beautiful. We went, mere hours before I boarded a plane from somewhere to somewhere else.
blue-haired willow has her back to the camera, focused on a large transparent orb. Children play in the orb, suspended on a clear sheet of plastic. black lacing holds the orb in place. Rubin took this picture, and Willow is fond of Rubin.
Biosphere was a study in liminality to me, suspended spaces tethered to what is more commonly understood as habitable: floors and walls. Perfectly clear water in heavy plastic and vast space, defined in clarity and iridescence. It was a liminal future, an in-between home, the moment the wheels leave the runway. The terror of my anxiety and the complete love for the possibility of Something Different, wrapped up in the moments of stepping into the future. In short, Rubin was right.

Jump forward a few years.
When Pablo invited me to Development and Climate Days in Lima, I was glad to go. Even before the deeply pleasurable and productive Nairobi gig with Red Cross Red Crescent Climate Centre and Kenya Red Cross, I trusted Pablo to have spot-on inklings of the future. Maybe all that climate forecasting has gotten into his social forecasting as well. His efforts around serious games having resulted in their now being generally accepted, he told me about an art-involved step to get people to think about the future differently. Something about plastic bags, and lighter-than-air balloons. Would I be willing to, in addition to my talk, document their process of creating a Museo Aero Solar for others to participate in and replicate? Of course – distribution of knowledge, especially with illustration and technology, is kind of my jam. It would also help me venture into the city.

I arrived at 2a to a deserted city and a vast and rolling Pacific out the cab window. I cracked jokes with the driver based on my poor grasp of Spanish (“ehhhh, Pacifico es no muy grande.”) He humored me. And in the morning, mango that tasted like sunlight, and instant coffee, and the Climate Centre team of whom I am becoming increasingly fond. And a new person – the artist Tomas, with whom Pablo and I ventured to an art space to join the already-started process of community building and art creation, large bags full of plastic shopping bags ready for cutting and taping. Pablo eventually had to go spend time at COP20, I relished not going.
a phone-camera captured image of a pamphlet instructing how to create a museo aero solar. it instructs the collecting, cutting, taping, and combining of plastic bags.
I took such specific, ritualistic care with each plastic bag. Cut off the bottom, cut off the handles, cut a side to make a long rectangle. Lay it gently on top of the pile, pressing down to smooth and order. Pick up the next bag. Feel it on my hands. The crinkle, the color. Smooth it out. Cut. Place. The sound of tape being pulled, torn, applied, and stories told in Spanish. The slow joining of each hand-cut rectangle. I smiled, to dedicate so much care to so many iterations of things which are the detritus of life. Francis laughed with me, saying she felt the weight of each one. A heavy statement for something so light. Tomas walking around, constantly seeming to have attracted a bit of plastic bag handle to his heel, no matter how many he peeled off, a persistent duckling of artist statement.

We went to the Lima FabLab to speak to a hackathon about making a GPS and transponder so we could let the creation fly free without endangering air traffic. And this time I saw it from the outside – seeing Tomas speak to a group of self- and community-taught Peruvian coders, and seeing their faces display disbelief, verging on protectiveness against the temporal drain of those outside your reality. Then, as he showed step by step, and finally an image, that these can fly, that their cousins can lift a person, grins break out. Peoples’ hearts lift, new disbelief replaces the jaded. There is laughter and a movement to logistical details.

And then we took it to the D&C venue, and it worked.

I imagine what Pablo must have gone through, to get bureaucratic sign-off on this. No metric of success. No Theory of Change. Him, fighting tooth and nail for a large and hugely risk-averse organization to trust fall into the arms of a community, an artist, a facilitator, and a game maker. And they did. And it changed the entire event. People in suits crawling into this cathedral made of plastic bags, each individually cut and added with love to the whole. A pile of fancy shoes outside the entrance, like a ballroom bouncy castle. People’s unabashed joy watching art some of them had made become a room, and then lift off to become a transport.

This future we want — it’s hard work, it can seem impossible. But it’s right here, we made it. It works, and it is beautiful.

I brought up ways for other people to participate. In a beautiful act I would associate with Libre ethics, the Lima crew have opened up not only our stories, but our process. We want you to join us. We want you to be a part of this future, and it means hard work. The fledgling wiki and mailing list can be found here. I hope you hop on.

by bl00 at December 10, 2014 07:38 PM

John Palfrey
All School Meeting Address: Winter Welcome 2014 and Discussions in the Wake of Ferguson

Good morning, Andover!

Over the Thanksgiving break, I wrote to you all an email, asking that you take some time to understand what was happening in Ferguson, Missouri.  A few members of the community — a student and a parent, in particular — wrote me back, respectfully, with deep concerns about what I had written, along with Dean Murphy [our Dean of Students] and LCG [Dean Linda Carter Griffith, our Dean of Community and Multicultural Development].  I wanted to respond to those concerns and also to explain why I think this attention and this discourse are so important.  [The original email is here.]

I asked you to pay attention to what happened in Ferguson, Missouri, not because I want you to think something in particular. In fact, while I do have a point of view on this issue, and I’m happy to share that view with any of you anytime, I very much do not want for 1129 young people to think what I think – what a disaster that would be!  In fact, let’s agree to start from a perspective of valuing intellectual freedom and the importance of being open to hear every voice in our academic community.

I asked you to pay attention for two reasons. One is that, despite the common phrase, we do not live in a “bubble” in Andover. We live in a community that is deeply connected to the world outside our beautiful campus. We live in a world where students are required to go off-campus – whether home or elsewhere – during breaks. We live in a world in which all students have friends and family who live outside of our little world here. And we live in a world that is increasingly complex – more global, more interconnected, more diverse, and moving ever more quickly.

The other reason I asked you to pay attention to what happened in Ferguson is because I think it matters a great deal in an historic sense. It matters to every single one of us – Latino/a, Asian, Black, White, regardless of the race, or races, or ethnicity or ethnicities, that you claim. It matters to each person, perhaps in a different way. But it matters to all of us because it stands for a few important things. It stands for the difficulty we continue to have in talking about race and difference in the world. I know, in what I will say to you today, I will offend one or more of you; or perhaps I will stumble badly over my words.  We must each run that risk — of offending one another, of saying the wrong thing, on the way to the truth and to productive dialogue.  This issue also stands for the very real challenge of effective law enforcement and global security — which we must accomplish with real effectiveness — and to do so in a world in which it is not possible to ignore the inequities between people in our society.

I would not have wanted for the world to be in the position that faced the policeman, Darren Wilson, that night. I would not have wanted for the world to be in the position that faced Michael Brown that night — and I know, because of the color of my skin and other factors, that I am highly unlikely ever to be. I would not wish on anyone the job of being on that Grand Jury. My heart breaks for every one of their families and friends. Ditto for what happened in Staten Island, in the death of Eric Garner. Ditto for hundreds, if not thousands, of similar cases in recent years. This is hard, and this is heart-breaking. These events happen all too often in this country and in countries around the world.

We need to be better – and it starts here, in this august high school. We need to do better – and we can. We can prove that we can be empathetic toward one another. We can prove that such a diverse community can work, that we can listen and learn from one another, and that we can work toward a more just and sustainable world.

More broadly, these matters speak to more than race. These matters call the question: What does it mean to be a citizen in a republic? What it means to me is that you must have a point of view. There is a cost of freedom; there is a cost to having a say in who governs and how they do it. That cost is that you must engage. You must learn. You must listen. You must come to have a point of view on issues that matter; we cannot govern ourselves if we do not. And you must act upon it. You have no choice.  That might mean that you start a new journal, as some of your colleagues have recently done, on matters of fiscal policy; it might mean that you organize a forum and a candlelight vigil; it might mean that you put yourself into the public arena with a point of view on something else that matters to you.  But to make democracy work, you must find your path toward being a true citizen.

It may be that one of us in this room will be in the position of Darren Wilson one day; maybe one of us will be in Michael Brown’s shoes; in America, we will all be on that Grand Jury; we will all be their friends and family. Not in exactly the same way, and – we pray – not with the same outcome. But when we sign up for life in a republic, we sign up to do the work of being a citizen — to being on that jury, to making those hard decisions, to figuring out how we can have effective law enforcement and global security in a way that is consonant with the Constitution and with international norms of human rights. That work is hard; it matters; and it is all of our work.

I could not be more proud to live in this country; I could not be more proud to be an American.  I could not be more proud to live and work at Andover; I could not be more proud to be your head of school.  Neither America nor Andover is perfect. Neither one is completely exceptional. But on their best days, they are both completely wonderful.  We can and must make both of them better – and with them, the world at large. Andover, it starts here – it starts with each of us and with our community.  We can show that democracy works in the context of free, open, orderly discussion on topics that matter — whether they relate to what is right in front of us or what is occurring in the world at large.

I will end with a quote that I love.  I know that there are valid critiques of this quote, but I love it – for its spirit and for what it calls on each of us to do. It is a quote from Theodore Roosevelt, 26th president of the United States. He almost certainly did not have in mind as inclusive a community as I do today, but he got the call to engaged citizenship just right.  Where I say “man”, you can choose to hear “person.”  Otherwise, please just listen to it for the spirit and the challenge it presents:

“It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood, who strives valiantly; who errs and comes short again and again; because there is no effort without error and shortcoming; but who does actually strive to do the deed; who knows the great enthusiasm, the great devotion, who spends himself in a worthy cause, who at the best knows in the end the triumph of high achievement and who at the worst, if he fails, at least he fails while daring greatly. So that his place shall never be with those cold and timid souls who know neither victory nor defeat.”

All School Meeting dismissed.

by jgpalfrey at December 10, 2014 07:03 PM

Jessica Silbey on The Eureka Myth: Creators, Innovators and Everyday Intellectual Property [AUDIO]
Why do people create and innovate? And how does intellectual property law encourage, or discourage, the process? In this talk Jessica Silbey — Professor at Suffolk University Law School — discusses her recent book The Eureka Myth: Creators, Innovators, and Everyday Intellectual Property, which investigates the motivations and mechanisms of creative and innovative activity in […]

by Berkman Center for Internet & Society at Harvard Law School at December 10, 2014 03:30 PM

Tim Davies on Unpacking Open Data: Power, Politics and the Influence of Infrastructures [AUDIO]
Countries, states & cities across the globe are embracing the idea of ‘open data’: establishing platforms, portals and projects to share government managed data online for re-use. Yet, right now, the anticipated civic impacts of open data rarely materialize, and the gap between the promise and the reality of open data remains wide. In this […]

by Berkman Center for Internet & Society at Harvard Law School at December 10, 2014 03:18 PM

Nick Grossman
Everyone is broken and life is hard

That’s a pretty depressing and fatalistic post title, but I actually mean it in a positive and encouraging way.  Let me explain.

It’s easy to go about your life, every day, feeling like everyone else has their shit together and that the things you struggle with are unique to you.

But then, when you get down to it, it turns out that everyone — every single person I know — is dealing with profoundly difficult and stressful things.  Sometimes that’s money, sometimes it’s health, sometimes it’s work or family or relationships.

It’s worth remembering this so that we cultivate some empathy when dealing with people — in general and in particular in difficult situations.

For example, with all of the controversy and strife over police brutality and race relations in the US, it’s easy for both sides to look at the other and not understand.  My personal default stance on all of that is: of course police treat black males unfairly, and black people in the US are so structurally fucked over that it’s hard to really comprehend it.

I also have a police detective as a future brother-in-law, who sees it from a different perspective.  From his, and my sister-in-law’s point of view, he does something incredibly dangerous and scary, for the safety of all of us; and further, he’s a good person and so are his colleagues.  He also sent me this video (graphic) which grounds those sentiments in reality.  And of course, he’s right.

Or take Congress.  It’s poisonous there.  I went down to DC last week, and met with two Republican Senate staffers, two Democrats, and an independent.  Reasonable people, all of them, and I’m sure each with their own struggles.  Now, I’m not in the thick of the DC mess, but it seems to me that it’s easy to lose sight of that and just fucking hate everyone in the heat of the fight.

Or the torture report. Jesus.

Or look at celebrities, or the ultra rich.  I have an old friend who is very wealthy and just went through a really painful divorce that broke up his family.  The number of privileged kids with broken lives due to substance abuse is staggering.

The number of upper middle class, middle class, and poor people with broken lives due to substance abuse is staggering. A fabulous couple I know, with one of the best relationships I’ve ever seen, is on the brink of losing it because of stress and alcohol.

We’ve got two close friends dealing with life-threatening cancer right now.  Someone in their thirties and someone in their sixties.

Everyone has these things, either directly or adjacently.  And they all go to work every day (or don’t), and get on twitter, and blog, and talk on TV, and run companies, and etc.

I am not exactly sure what my point is here, except to say that thinking about it this way really makes me want to redouble my support for my friends and family, and to give everyone (including myself) a break now and then, because there are things in their life that are broken, and life is hard for everyone.

by Nick Grossman at December 10, 2014 12:25 PM

Bruce Schneier
Quantum Attack on Public-Key Algorithm

This talk (and paper) describe a lattice-based public-key algorithm called Soliloquy developed by GCHQ, and a quantum-computer attack on it.

News article.

by Bruce Schneier at December 10, 2014 11:42 AM

Rapiscan Full-Body Scanner for Sale

Government surplus. Only $8,000 on eBay. Note that this device has been analyzed before.

by Bruce Schneier at December 10, 2014 11:09 AM

December 09, 2014

danah boyd
Data & Civil Rights: What do we know? What don’t we know?

From algorithmic sentencing to workplace analytics, data is increasingly being used in areas of society that have had longstanding civil rights issues.  This prompts a very real and challenging set of questions: What does the intersection of data and civil rights look like? When can technology be used to enable civil rights? And when are technologies being used in ways that undermine them? For the last 50 years, civil rights has been a legal battle.  But with new technologies shaping society in new ways, perhaps we need to start wondering what the technological battle over civil rights will look like.

To get our heads around what is emerging and where the hard questions lie, the Data & Society Research Institute, The Leadership Conference on Civil and Human Rights, and New America’s Open Technology Institute teamed up to host the first “Data & Civil Rights” conference.  For this event, we brought together diverse constituencies (civil rights leaders, corporate allies, government agencies, philanthropists, and technology researchers) to explore how data and civil rights are increasingly colliding in complicated ways.

In preparation for the conversation, we dove into the literature to see what is known and unknown about the intersection of data and civil rights in six domains: criminal justice, education, employment, finance, health, and housing.  We produced a series of primers that contextualize where we’re at and what questions we need to consider.  And, for the conference, we used these materials to spark a series of small-group moderated conversations.

The conference itself was an invite-only event, with small groups brought together to dive into hard issues around these domains in a workshop-style format.  We felt it was important that we make available our findings and questions.  Today, we’re releasing all of the write-ups from the workshops and breakouts we held, the videos from the level-setting opening, and an executive summary of what we learned.  This event was designed to elicit tensions and push deeper into hard questions. Much is needed for us to move forward in these domains, including empirical evidence, innovation, community organizing, and strategic thinking.  We learned a lot during this process, but we don’t have clear answers about what the future of data and civil rights will or should look like.  Instead, what we learned in this process is how important it is for diverse constituencies to come together to address the challenges and opportunities that face us.

Moving forward, we need your help.  We need to go beyond hype and fear, hope and anxiety, and deepen our collective understanding of technology, civil rights, and social justice. We need to work across sectors to imagine how we can create a more robust society, free of the cancerous nature of inequity. We need to imagine how technology can be used to empower all of us as a society, not just the most privileged individuals.  This means that computer scientists, software engineers, and entrepreneurs must take seriously the costs and consequences of inequity in our society. It means that those working to create a more fair and just society need to understand how technology works.  And it means that all of us need to come together and get creative about building the society that we want to live in.

The material we are releasing today is a baby step, an attempt to scope out the landscape as best we know it so that we can all work together to go further and deeper.  Please help us imagine how we should move forward.  If you have any ideas or feedback, don’t hesitate to contact us at nextsteps at

(Image by Mark K.)

by zephoria at December 09, 2014 04:47 PM

Feeds In This Planet