Keep track of Berkman-related news and conversations by subscribing to this page using your RSS feed reader. This aggregation of blogs relating to the Berkman Center does not necessarily represent the views of the Berkman Center or Harvard University but is provided as a convenient starting point for those who wish to explore the people and projects in Berkman's orbit. As this is a global exercise, times are in UTC.
The list of blogs being aggregated here can be found at the bottom of this page.
Last month, I blogged about security researcher Chris Roberts being detained by the FBI after tweeting about avionics security while on a United flight:
But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There's some serious surveillance going on.
We now know a lot more of the back story from the FBI's warrant application. He had been interviewed by the FBI multiple times previously, and claimed to have taken control of at least some of a plane's systems during flight.
During two interviews with F.B.I. agents in February and March of this year, Roberts said he hacked the inflight entertainment systems of Boeing and Airbus aircraft, during flights, about 15 to 20 times between 2011 and 2014. In one instance, Roberts told the federal agents he hacked into an airplane's thrust management computer and momentarily took control of an engine, according to an affidavit attached to the application for a search warrant.
"He stated that he successfully commanded the system he had accessed to issue the 'CLB' or climb command. He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights," said the affidavit, signed by F.B.I. agent Mike Hurley.
Roberts also told the agents he hacked into airplane networks and was able "to monitor traffic from the cockpit system."
According to the search warrant application, Roberts said he hacked into the systems by accessing the in-flight entertainment system using his laptop and an Ethernet cable.
Wired has more.
This makes the FBI's behavior much more reasonable. They weren't scanning the Twitter feed for random keywords; they were watching his account.
We don't know if the FBI's statements are true, though. But if Roberts was hacking an airplane while sitting in the passenger seat...wow, is that a stupid thing to do.
From the Christian Science Monitor:
But Roberts' statements and the FBI's actions raise as many questions as they answer. For Roberts, the question is why the FBI is suddenly focused on years-old research that has long been part of the public record.
"This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, 'This has to be fixed,'" Roberts noted. "Is there a credible threat? Is something happening? If so, they're not going to tell us," he said.
Roberts isn't the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.
"I would like to see a transcript (of the interviews)," said one former federal computer crimes prosecutor, speaking on condition of anonymity. "If he did what he said he did, why is he not in jail? And if he didn't do it, why is the FBI saying he did?"
The real issue is that the avionics and the entertainment system are on the same network. That's an even stupider thing to do. Also last month, I wrote about the risks of hacking airplanes, and said that I wasn't all that worried about it. Now I'm more worried.
Secretary of State John Kerry gave a speech in Seoul yesterday about the Internet, setting out five principles of cybersecurity.
The talk is quite enthusiastic and progressive about the Net. Sort of. For example, he says, “[t]he United States considers the promotion of an open and secure internet to be a key component of our foreign policy,” but he says this in support of his idea that it’s crucial to govern the Internet. On the third hand, the governance he has in mind is designed to keep the Net open to all people and all ideas. On the fourth hand, predictably, we don’t know how much structural freedom he’s willing to give up to stop the very Worst People on Earth: those who share content they do not own.
Overall, it’s a speech that we can be pretty proud of.
Here’s why he thinks the Net is important:
…to begin with, America believes – as I know you do – that the internet should be open and accessible to everyone. We believe it should be interoperable, so it can connect seamlessly across international borders. We believe people are entitled to the same rights of free expression online as they possess offline. We believe countries should work together to deter and respond effectively to online threats. And we believe digital policy should seek to fulfill the technology’s potential as a vehicle for global stability and sustained economic development; as an innovative way to enhance the transparency of governments and hold governments accountable; and also as a means for social empowerment that is also the most democratic form of public expression ever invented.
At its best, the internet is an equal-opportunity platform from which the voice of a student can have as much reach as that of a billionaire; a chief executive may be able to be out-debated by an entry-level employee – and there’s nothing wrong with that.
Great, although why he needed to add a Seinfeldian “Not that there’s anything wrong with that” is a bit concerning.
He then goes on to say that everyone’s human rights extend to online behavior, which is an important position, although it falls short of Hillary Clinton’s claim while Secretary of State that there is a universal “freedom to connect.”
He then in an odd way absolves the Internet from blame for the disruption it seems to cause:
The internet is, among many other things, an instrument of freedom. It’s a tool people resort to in response to the absence and failure or abuse of government…Anyone who blames the internet for the disorder or turmoil in today’s world is just not using their head to connect the dots correctly. And banning the internet in a misguided attempt to impose order will never succeed in quashing the universal desire for freedom.
This separates him from those who think that the Net actually gives people an idea of freedom, encourages them to speak their minds, or is anything except a passive medium. But that’s fine since in this section he’s explaining why dictators shouldn’t shut down the Net. So we can just keep the “inspires an ambition for political freedom” part quiet for now.
“The remedy for the speech that we do not like is more speech,” he says, always a good trope. But he follows it up with an emphasis on bottom-up conversation, which is refreshing: “It’s the credible voices of real people that must not only be enabled, but they need to be amplified.”
To make the point that the Net empowers all sectors of society, and thus it would be disastrous if it were disrupted globally, he suggests that we watch The Day the Earth Stood Still, which makes me think Secretary Kerry has not watched either version of that movie lately. Klaatu barada nikto, Mr. Kerry.
To enable international commerce, he opposes data localization standards, in the course of which he uses “google” as a verb. Time to up your campaign contributions, Bing.
Kerry pre-announces an international initiative to address the digital divide, “in combination with partner countries, development banks, engineers, and industry leaders.” Details to follow.
Kerry tries to position the NSA’s data collection as an enlightened policy:
Further, unlike many, we have taken steps to respect and safeguard the privacy of the citizens of other countries and to use the information that we do collect solely to address the very specific threat to the United States and to our allies. We don’t use security concerns as an excuse to suppress criticisms of our policies or to give a competitive advantage to an American company and any commercial interests at all.
You have our word on that. So, we’re good? Moving on.
Kerry acknowledges that the Telecommunications Act of 1996 is obsolete, noting that “Barely anybody in 1996 was talking about data, and data transformation, and data management. It was all about telephony – the telephone.”
Finally, he gets to governance:
So this brings me to another issue that should concern us all, and that is governance – because even a technology founded on freedom needs rules to be able to flourish and work properly. We understand that. Unlike many models of government that are basically top-down, the internet allows all stakeholders – the private sector, civil society, academics, engineers, and governments – to all have seats at the table. And this multi-stakeholder approach is embodied in a myriad of institutions that each day address internet issues and help digital technology to be able to function.
“Stakeholders” get a “seat at the table”? It’s our goddamned table. And it’s more like a blanket on the ground than polished rare wood in a board room. Here’s an idea for you, World Leaders: How about if you take your stakes and get off our blanket?
Well, that felt good. Back to governing the Internet into the ground. And to be fair, Kerry seems aware of the dangers of top-down control, even if he doesn’t appreciate the benefits of bottom-up self-organization:
That’s why we have to be wary of those who claim that the system is broken or who advocate replacing it with a more centralized arrangement – where governments would have a monopoly on the decision-making. That’s dangerous. Now, I don’t know what you think, but I am confident that if we were to ask any large group of internet users anywhere in the world what their preferences are, the option “leave everything to the government” would be at the absolute bottom of the list.
Kerry now enunciates his five principles.
First, no country should conduct or knowingly support online activity that intentionally damages or impedes the use of another country’s critical infrastructure.
Second, no country should seek either to prevent emergency teams from responding to a cybersecurity incident, or allow its own teams to cause harm.
Third, no country should conduct or support cyber-enabled theft of intellectual property, trade secrets, or other confidential business information for commercial gain.
Fourth, every country should mitigate malicious cyber activity emanating from its soil, and they should do so in a transparent, accountable and cooperative way.
And fifth, every country should do what it can to help states that are victimized by a cyberattack.
Two particular points:
First, #2 establishes Internet repair teams as the medical support people in the modern battleground: you don’t fire on them.
Second, #3 gets my goat. Earlier in the talk, Sect’y Kerry said: “We understand that freedom of expression is not a license to incite imminent violence. It’s not a license to commit fraud. It’s not a license to indulge in libel, or sexually exploit children.” But the one crime that gets called out in his five principles is violating copyright or patent laws. And it’s not even aimed at other governments doing so, for it explicitly limits the prohibition to acts committed “for commercial gain.” Why the hell is protecting “IP” more important than preventing cross-border libel, doxxing or other privacy violations, organizing human trafficking, or censorship?
Oh, right. Disney. Hollywood. A completely corrupt electoral process. Got it.
Now, it’s easy to be snarky and dismissive about this speech — or any speech — by a Secretary of State about the Internet, but just consider how bad it could have been. Imagine a speech by a Secretary of State in an administration that sees the Internet primarily as a threat to security, to morals, to business as usual. There’s actually a lot to like in this talk, given its assumptions that the Net needs governments to govern it and that it’s ok to spy on everyone so long as we don’t do Bad Things with that information that we gather.
The post John Kerry on the importance of an open-ish Internet appeared first on Joho the Blog.
Tuesday, May 19, 2015 at 12:00 pm
Berkman Center for Internet & Society at Harvard University
23 Everett Street, Second Floor, Cambridge, MA 02138
The recently published Guide to U.S. Government Practice on Global Sharing of Personal Information, Second Edition, provides an introduction to the principles, practices, and agreements behind how the U.S. government shares personal information with foreign governments - for purposes ranging from tax to counterterrorism and cybercrime. This information sharing is necessary not only to strengthen relations with foreign governments but also to protect the country from threats, foreign and domestic. In the past year, these issues have been most readily visible in the Transatlantic Trade and Investment Partnership (TTIP) negotiations and the renegotiation of the Safe Harbor Framework.
Neal Cohen is a New York and English qualified lawyer in the Privacy & Security practice group at Perkins Coie LLP and a research fellow at the Berkman Center for Internet & Society at Harvard University. His law practice and academic research focus on the global harmonization of data protection and privacy law. Prior to joining Perkins Coie LLP, Neal spent several years practicing data protection and privacy law in London at another multinational law firm and before that, Neal clerked in the Privacy Office at the Department of Homeland Security.
Cybersecurity and the Torah, an anti-manifesto for the Internet, sleep and your screen, and more... in this week's Buzz.
More Berkman in the News
Bernie Sanders gave as good an interview as he could this morning on CNN, trying to stick to the issues as Brianna Keilar repeatedly goaded him to attack Hillary Clinton, or to comment on the horse race. She asked only two questions about policy matters, and they were as non-incisive as questions could be. Twice Sanders said that he would not personally attack Clinton, and turned the question back to Keilar, asking if the news media would focus on the serious issues facing the American 99.9%.
Just listen to CNN’s side of the conversation, taken from the transcript:
You’ve acknowledged that you don’t have the cash, that you don’t have the campaign infrastructure that Hillary Clinton, say, has and certainly as you enter the race, she is the one that you have your sights set on. What’s your path to victory?
Hillary Clinton talks a lot about income inequality, how you differentiate yourself on this from her?
Your candidacy was assessed by “U.S. News and World Report” like this. It said, “Like Obama in 2008, Sanders can serve to help define Clinton and make her a stronger candidate. Unlike Obama, Sanders can keep Clinton on her game without getting her tossed out of it.” You look at that assessment. Are you a spoiler here? Are you aiming to be a shaper of the debate? Or do you think that you really have a pathway to victory?
I just wonder is this going to be a civil debate with Hillary Clinton? Even if you’re talking about issues and not personality or the fact that she’s establishment, you have to go after a leading candidate with a hard edge. Are you prepared to do that?
Trade a big issue –
in the Senate and now we’re looking towards the House, where Republicans, oddly enough, may not have the votes along with Democrats for this initiative of President Obama’s, something you oppose. You have come out and said this is a terrible idea. Hillary Clinton has not. She is on the fence. Should she take a position?
I want to ask you about George Stephanopoulos, the host of This Week, who has been in the news. You appeared on his show on May 3rd and on that program he asked you about your concerns over the money raised by The Clinton Foundation. You have said that The Clinton Foundation fundraising is a fair issue to discuss. He had donated $25,000 over three years or $75,000 in total, $25,000 each year. He didn’t disclose those donations. And to viewers, to superiors at ABC. He didn’t tell you either, even though you discussed it.
If you take her at her word, Elizabeth Warren’s not getting into this race; Are you looking to gain that pocket of support to Hillary Clinton’s left?
Overall, I don’t hear a lot of forcefulness from you; a lot of people who observe politics say this is a contact sport. You have to have sharp elbows. Even if it’s not going fully negative in character assassination
But are you prepared to sharply point out where your Democratic opponents have not, in your opinion?
Senator Bernie Sanders, thank you so much for being with us. We appreciate it.
I wish I had confidence that if CNN were to hear their side of the conversation, they’d be even a little bit ashamed of how they’re failing in their essential job.
But no. CNN’s post about the interview led with the most negative thing they could find in the interview: “Bernie Sanders casts Hillary Clinton as newcomer to income fight.”
Senator Sanders, you have your answer.
Seriously, Reddit would do a much better job interviewing Sanders.
I’ve been a lifelong public radio listener, so it was an honor to sit down with Kara Miller of Innovation Hub at WGBH to talk about recent research on what motivates people to do civic things that I conducted with John Webb, Chris Chapman, and Charlotte Krontiris. The research was completed on behalf of the Google Civic Innovation portfolio.
Highlights: a story about one lady’s surprising definition of civic engagement, some pointers to what Google is doing with this research, and (personal) opinions on why institutions like Google should be involved in civic life.
I asked my mobile this morning to shuffle up Instant Karma – Save Darfur, a two-volume set of John Lennon covers. Great album. Big mistake.
It played these tracks in this heart-breaking order:
I think I write about John Lennon every spring as I start to run and listen to music again. I’m pretty sure I say the same things every time.
So I’m not going to say anything about this mix except that it shows why inconsistency is the truth.
Admiral Mike Rogers gave the keynote address at the Joint Service Academy Cyber Security Summit today at West Point. He started by explaining the four tenets of security that he thinks about.
First: partnerships. This includes government, civilian, everyone. Capabilities, knowledge, and insight of various groups, and aligning them to generate better outcomes for everyone. Ability to generate and share insight and knowledge, and to do that in a timely manner.
Second, innovation. It's about much more than just technology. It's about ways to organize, values, training, and so on. We need to think about innovation very broadly.
Third, technology. This is a technologically based problem, and we need to apply technology to defense as well.
Fourth, human capital. If we don't get people working right, all of this is doomed to fail. We need to build security workforces inside and outside of military. We need to keep them current in a world of changing technology.
So, what is the Department of Defense doing? They're investing in cyber, both because it's a critical part of future fighting of wars and because of the mission to defend the nation.
Expect to see more detailed policy around these goals in the coming months.
What is the role of US Cyber Command and the NSA in all of this? Cyber Command has three missions related to the five strategic goals. They defend DoD networks. They create the cyber workforce. And, if directed, they defend national critical infrastructure.
At one point, Rogers said that he constantly reminds his people: "If it was designed by man, it can be defeated by man." I hope he also tells this to the FBI when they talk about needing third-party access to encrypted communications.
All of this has to be underpinned by a cultural ethos that recognizes the importance of professionalism and compliance. Every person with a keyboard is both a potential asset and a threat. There needs to be well-defined processes and procedures within DoD, and a culture of following them.
What's the threat dynamic, and what's the nature of the world? The threat is going to increase; it's going to get worse, not better; cyber is a great equalizer. Cyber doesn't recognize physical geography. Four "prisms" to look at threat: criminals, nation states, hacktivists, groups wanting to do harm to the nation. This fourth group is increasing. Groups like ISIL are going to use the Internet to cause harm. Also embarrassment: releasing documents, shutting down services, and so on.
We spend a lot of time thinking about how to stop attackers from getting in; we need to think more about how to get them out once they've gotten in -- and how to continue to operate even though they are in. (That was especially nice to hear, because that's what I'm doing at my company.) Sony was a "wake-up call": a nation-state using cyber for coercion. It was theft of intellectual property, denial of service, and destruction. And it was important for the US to acknowledge the attack, attribute it, and retaliate.
Last point: "Total force approach to the problem." It's not just about people in uniform. It's about active duty military, reserve military, corporations, government contractors -- everyone. We need to work on this together. "I am not interested in endless discussion.... I am interested in outcomes." "Cyber is the ultimate team sport." There's no single entity, or single technology, or single anything, that will solve all of this. He wants to partner with the corporate world, and to do it in a way that benefits both.
First question was about the domains and missions of the respective services. Rogers talked about the inherent expertise that each service brings to the problem, and how to use cyber to extend that expertise -- and the mission. The goal is to create a single integrated cyber force, but not a single service. Cyber occurs in a broader context, and that context is applicable to all the military services. We need to build on their individual expertises and contexts, and to apply it in an integrated way. Similar to how we do special forces.
Second question was about values, intention, and what's at risk. Rogers replied that any structure for the NSA has to integrate with the nation's values. He talked about the value of privacy. He also talked about "the security of the nation." Both are imperatives, and we need to achieve both at the same time. The problem is that the nation is polarized; the threat is getting worse at the same time trust is decreasing. We need to figure out how to improve trust.
Third question was about DoD protecting commercial cyberspace. Rogers replied that the DHS is the lead organization in this regard, and DoD provides capability through that civilian authority. Any DoD partnership with the private sector will go through DHS.
Fourth question: How will DoD reach out to corporations, both established and start-ups? Many ways. By providing people to the private sectors. Funding companies, through mechanisms like the CIA's In-Q-Tel. And some sort of innovation capability. Those are the three main vectors, but more important is that the DoD mindset has to change. DoD has traditionally been very insular; in this case, more partnerships are required.
Final question was about the NSA sharing security information in some sort of semi-classified way. Rogers said that there are lot of internal conversations about doing this. It's important.
In all, nothing really new or controversial.
These comments were recorded -- I can't find them online now -- and are on the record. Much of the rest of the summit was held under the Chatham House Rule. I participated in a panel on "Crypto Wars 2015" with Matt Blaze and a couple of government employees.
EDITED TO ADD (5/15): News article.
Franzosa and colleagues used publicly available microbiome data produced through the Human Microbiome Project (HMP), which surveyed microbes in the stool, saliva, skin, and other body sites from up to 242 individuals over a months-long period. The authors adapted a classical computer science algorithm to combine stable and distinguishing sequence features from individuals' initial microbiome samples into individual-specific "codes." They then compared the codes to microbiome samples collected from the same individuals at follow-up visits and to samples from independent groups of individuals.
The results showed that the codes were unique among hundreds of individuals, and that a large fraction of individuals' microbial "fingerprints" remained stable over a one-year sampling period. The codes constructed from gut samples were particularly stable, with more than 80% of individuals identifiable up to a year after the sampling period.
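The general idea behind such identifying "codes" can be sketched in a few lines. This is a toy illustration of the approach, not the authors' actual algorithm: `build_code`, `matches`, the greedy selection, and the 0.8 match threshold are all invented for the example, with each sample modeled simply as a set of sequence features.

```python
def build_code(target, others, max_size=10):
    """Greedily pick features present in the target individual's sample
    but absent from other individuals, until every other individual
    lacks at least one chosen feature (a small distinguishing set)."""
    uncovered = list(range(len(others)))  # individuals not yet distinguished
    code = set()
    while uncovered and len(code) < max_size:
        # choose the feature absent from the most not-yet-distinguished others
        best = max(target - code,
                   key=lambda f: sum(f not in others[i] for i in uncovered))
        code.add(best)
        uncovered = [i for i in uncovered if best in others[i]]
    return code

def matches(code, sample, threshold=0.8):
    # A follow-up sample matches if it retains most of the code's features,
    # allowing for some drift in the microbiome over time.
    return sum(f in sample for f in code) / len(code) >= threshold
```

The interesting property, which the study measured at scale, is that a small feature set can be simultaneously unique to one person and stable enough to still match that person's sample a year later.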
On April 1, I announced the Eighth Movie Plot Threat Contest: demonstrate the evils of encryption.
Not a whole lot of good submissions this year. Possibly this contest has run its course, and there's not a whole lot of interest left. On the other hand, it's heartening to know that there aren't a lot of encryption movie-plot threats out there.
Anyway, here are the semifinalists.
Cast your vote by number here; voting closes at the end of the month.
Bruce is one of the most visible, articulate, and smartest voices on behalf of preserving our privacy. (His new book, Data and Goliath, is both very readable and very well documented.) At an event at West Point, he met Admiral Mike Rogers, Director of the NSA. Bruce did an extensive liveblog of the Rogers’ keynote.
There was no visible explosion, forcing physicists to rethink their understanding of matter and anti-matter.
Security Chasers : The Chastening
US Confidential: What you don’t know you don’t know can kill you
Selma and Louise: Deep Cover
Tango and Hooch: The Spookening
Open and Shut: The Legend Begins
The post The Matrix glitches and puts Bruce Schneier next to the head of the NSA appeared first on Joho the Blog.
Ross Anderson summarizes a meeting in Princeton where Edward Snowden was "present."
Third, the leaks give us a clear view of an intelligence analyst's workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it's a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are "ingested" into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: "Dancing oasis" is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they're moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.
As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a "stolen cert," presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can't. There's no evidence of a "wow" cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there's no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.
What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.
On the policy front, one of the eye-openers was the scale of intelligence sharing -- it's not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn't care about policy. Their OGC has 100 lawyers whose job is to "enable the mission"; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand's GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.
The spooks' lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others' laws, but there can be caveats, such as a classified version which says "this is not a binding legal document." The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.
Worth reading in full.
PRX is back with our third annual open call for science radio ideas — the STEM Story Project. STEM Stories from 2013 and 2014 aired on Here & Now, All Things Considered, Studio 360, our science podcast Transistor, PRX Remix, and numerous other podcasts and public radio stations around the country. We’re excited to do this again.
Starting June 1, we’ll accept proposals to create radio stories inspired by STEM topics (Science, Technology, Engineering and Math). We have a pool of $50,000 from the Alfred P. Sloan Foundation to distribute among multiple projects.
Our goals are to:
• Unleash highly creative, STEM-based original stories and productions
• Educate and excite listeners about STEM topics and issues
• Tell stories and explain STEM issues in new ways
Have an idea for a story? We will accept proposals between June 1st and July 1st, 2015. Stay tuned to #PRXSTEM on Twitter, via our handles @TransistorShow and @prx, for the guidelines, coming by the week of May 18.
Have questions? Comment below or email your questions to firstname.lastname@example.org. But please refer to the FAQ below and application guidelines — coming soon, right here — first!
May the force be with you.
-John Barth & Genevieve Sponsler
What is PRX’s STEM Story Project?
An open call for proposals to create radio stories about STEM (science, technology, engineering, and math). In the past two years, PRX has funded the creation of 29 STEM stories. They’ve aired on national shows like Here & Now, Studio 360, All Things Considered, our science podcast Transistor, and PRX Remix, in addition to being aired on stations throughout the country.
What are the dates?
PRX will accept proposals online between June 1 and July 1, 2015 at 11:59 p.m. ET. Accepted proposals will be announced in early September. Producers will then have two months to create their stories and publish them to PRX.org by November 1, 2015.
Who can apply?
We welcome any producers or writers with audio production experience to apply. Producers can be independent or station-based.
What if I don’t have audio production experience but want to submit a story?
We recommend that you work with an audio producer to come up with a story proposal and to provide audio samples.
If I already received a grant last year, can I apply again this year?
If I applied last year and didn’t get a grant, can I apply again?
Yes, but you must apply with a different story than the one you submitted last year.
What do I need to include in my application?
We’re looking for a proposal of your story idea, two audio samples of your previous work, and a proposed budget.
How long should my proposed audio story be?
We generally ask that the stories be 10 minutes or less. Shorter stories are more shareable online and more likely to get picked up by national shows, podcasts, and stations. Past stories we’ve funded have ranged from 6 to 18 minutes long, though, again, the majority were under 10 minutes.
How will proposals be chosen?
We will work with a team of science advisors and radio advisors to select proposals that best fit the project’s goals.
What should I include in my budget?
Producer fees, engineering fees, travel expenses, and editor fees. If your proposal is chosen, we will contact you to revise your budget, if necessary.
Will you be giving me any guidance during the production process?
PRX requires at least one mandatory check-in during the production period to go over initial script drafts.
What happens after the stories are done?
PRX will work with you to get the pieces licensed to different stations within our network as well as placed on blogs + other digital platforms.
The post Starting June 1: Open Call for Your Science Audio Story Ideas appeared first on PRX.
(or, Caillou Sucks)
What should people who are interested in accountability and algorithms be thinking about? Here is one answer: My eleven-minute remarks are now online from a recent event at NYU. I’ve edited them to intersperse my slides.
This talk was partly motivated by the ethics work being done in the machine learning community. That is very exciting and interesting work and I love, love, love it. My remarks are an attempt to think through the other things we might also need to do. Let me know how to replace the “??” in my slides with something more meaningful!
Preview: My remarks contain a minor attempt at a Michael Jackson joke.
Here is the video: https://www.youtube.com/embed/rJfDKx2fjdE
A number of fantastic Social Media Collective people were at this conference — you can hear Kate Crawford in the opening remarks. For more videos from the conference, see:
Algorithms and Accountability: http://www.law.nyu.edu/centers/ili/algorithmsconference
Thanks to Joris van Hoboken, Helen Nissenbaum and Elana Zeide for organizing such a fab event.
If you bought this 11-minute presentation you might also buy: Auditing Algorithms, a forthcoming workshop at Oxford.
(This post was cross-posted to The Social Media Collective.)
Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems.
The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable. From the paper:
Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol
Philipp Jovanovic and Samuel Neves
Abstract: This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the "OMA digest."
We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere 13 queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only 4 queries and a time complexity of about 2^25 simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of 144 message verification queries, or one ciphertext-tag pair and 168 ciphertext verification queries.
Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.
Note: That first sentence has been called "Schneier's Law," although the sentiment is much older.
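To see why a home-brewed MAC can fail so badly, here is a deliberately simplified toy (emphatically not the actual OMA digest, whose flaws are subtler): a "MAC" built only from XOR is linear, so an attacker who sees one valid message/tag pair can forge a tag for a modified message without ever learning the key.

```python
# Toy illustration (NOT the real OMA digest): a linear "MAC" is forgeable.

def xor_mac(key: bytes, msg: bytes) -> int:
    """Naive one-byte 'MAC': XOR together every key byte and message byte."""
    tag = 0
    for b in key + msg:
        tag ^= b
    return tag

key = b"supersecretkey!!"           # unknown to the attacker
msg = b"meter reading: 100 kWh"
tag = xor_mac(key, msg)             # attacker observes (msg, tag) on the wire

# Forgery: flip bits in the message and apply the same flips to the tag.
# Because XOR is linear, the forged pair still verifies -- no key needed.
delta = ord("1") ^ ord("9")         # turn "100" into "900"
forged_msg = bytearray(msg)
forged_msg[15] ^= delta             # index 15 is the '1' in "100"
forged_tag = tag ^ delta

assert xor_mac(key, bytes(forged_msg)) == forged_tag
print(bytes(forged_msg))            # b'meter reading: 900 kWh'
```

A standard construction such as HMAC with a modern hash, or an AEAD mode like AES-GCM, is designed precisely so that no such algebraic relationship between message changes and tag changes exists.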
Radiotopia has been a huge success by any measure. To name a few: In just over one year, we’ve grown to 7.5 million monthly downloads across our 11 shows. Our Kickstarter campaign drew record-breaking numbers of backers and funding. Most importantly, our producers are making some of the best radio out there.
Knight Foundation saw the potential, and now they see our momentum. We are proud to announce that Radiotopia will receive $1 million in funding from Knight over the next two years. This is an important recognition of how far this brainchild of Roman Mars and PRX has come, while giving us the opportunity to grow and strengthen the network in so many ways. Read more from PRX CEO Jake Shapiro on the Knight Foundation blog.
Patrick Kowalczyk, 212-627-8098, PKPR, email@example.com
Scott Piro, 212-627-8098, PKPR, firstname.lastname@example.org
Anusha Alikhan, Director of Communications, Knight Foundation, 305-908-2646, email@example.com
Radiotopia Podcast Network Expansion Will Help Independent Public Media Producers Develop A Sustainable Business Model With $1 Million From Knight Foundation
Cambridge, Mass. – May 12, 2015 – To help independent audio producers develop new ways to engage audiences, develop models for success and support new talent in public media, podcast network Radiotopia from PRX will expand with $1 million from the John S. and James L. Knight Foundation.
Knight support will enable PRX to provide more resources to Radiotopia’s producers, helping them to experiment with new business models. To this end, PRX will provide more production and logistical support to Radiotopia’s producers, increase operational capacity, market its shows to an even wider audience and double down on promising paths to sustainability. PRX will also hire an executive producer to provide leadership and promote collaboration across and beyond the network. Additionally, Knight support will establish a new pilot fund to identify and nurture diverse emerging producers and hosts.
“Radiotopia is at the epicenter of an expanding galaxy of audio stories and mobile distribution,” said Jake Shapiro, CEO of PRX. “Knight’s investment accelerates our path to reach new listeners, strengthen these shows, and establish a new model for public radio beyond broadcast.”
“PRX has grown Radiotopia to 7.5 million monthly downloads in just over a year by focusing on quality storytelling and programming, but also an innovative approach to distribution and revenue generation,” said Chris Barr, Knight Foundation director for media innovation. “Their experiences can help establish a means for independent producers to become more sustainable and draw in new funding.”
PRX, the award-winning public media company, launched Radiotopia in February 2014 with $200,000 in support from Knight Foundation, in partnership with Roman Mars, known as an innovator for independent podcasts including his hit show “99% Invisible.” Radiotopia has quickly become the leader in today’s audio storytelling renaissance by helping rising talent in the podcasting world grow their audiences, earn revenue and create the best work of their careers.
Last November, Radiotopia became the most-funded radio/podcast project in Kickstarter history, raising over $620,000 from 21,808 backers and surpassing its original goal of $250,000. The Kickstarter enabled Radiotopia to add four new shows: “The Mortified Podcast,” “Criminal,” “The Heart” and “The Allusionist.” They joined Radiotopia’s roster of envelope-pushing podcasts, including anchor program “99% Invisible” by Roman Mars, “Radio Diaries,” “Theory of Everything,” “Strangers,” “Fugitive Waves,” “The Truth” and “Love + Radio.”
“Our main goal with this expansion is to provide structure and support for the Radiotopia producers and pilot new programs to increase the scope and diversity of public radio podcasts,” said Mars. “I can’t wait to hear what’s created in the years to come. This is only the beginning.”
Along with announcing the search for Radiotopia’s first executive producer, Radiotopia plans to announce new shows, initiatives and partnerships in the coming months.
For more information on Radiotopia visit radiotopia.fm.
PRX is an award-winning nonprofit public media company, harnessing innovative technology to bring compelling stories to millions of people. PRX.org operates public radio’s largest distribution marketplace, offering tens of thousands of audio stories for broadcast and digital use, including This American Life, The Moth Radio Hour, Sound Opinions, State of the Re:Union, Snap Judgment, and WTF with Marc Maron. PRX Remix is PRX’s 24/7 channel featuring the best independent radio stories and new voices. PRX was created through a collaboration of the Station Resource Group and Atlantic Public Media, and receives support from public radio stations and producers, The Corporation for Public Broadcasting, the National Endowment for the Arts, the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Wyncote Foundation, and Knight Foundation.
About the John S. and James L. Knight Foundation
Knight Foundation supports transformational ideas that promote quality journalism, advance media innovation, engage communities and foster the arts. We believe that democracy thrives when people and communities are informed and engaged. knightfoundation.org
About Roman Mars
Roman Mars is the host and creator of 99% Invisible, a short radio show about design and architecture. With over 40 million downloads, the 99% Invisible podcast is one of the most popular podcasts in the world. Fast Company named him one of their 100 Most Creative People in 2013. He was a TED main stage speaker in 2015. His crowd funding campaigns have raised over $1.16 million, making him the highest-funded journalist in Kickstarter history. He is also a co-founder of Radiotopia, a collective of ground-breaking story-driven podcasts.
Tuesday, May 12, 2015 at 12:00 pm
Today we feel the impact of technology everywhere except in our paychecks. In the past, technological advancements dramatically increased wages, but during the last three decades, the median wage has remained stagnant. Machines have taken over much of the work of humans, destroying old jobs while increasing profits for business owners. The threat of ever-widening economic inequality looms, but in his new book, Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth, James Bessen argues that it is not inevitable. Workers can benefit by acquiring the knowledge and skills necessary to implement rapidly evolving technologies. Sharing knowledge is an important part of that process, including via open standards and employee job-hopping. At this event, Bessen will have a conversation with Berkman Faculty Associate Karim Lakhani about knowledge sharing, past and present, about government policies that discourage sharing, and about the broader issue of slow wage growth.
James Bessen studies the economics of innovation and patents. He has also been a successful innovator and CEO of a software company. Currently, Mr. Bessen is Lecturer in Law at the Boston University School of Law.
Bessen has done research on whether patents promote innovation, why innovators share new knowledge, and how technology affected worker skills historically. His research first documented the large economic damage caused by patent trolls. His work on software patents with Eric Maskin (Nobel Laureate in Economics) and Robert Hunt has influenced policymakers in the US, Europe, and Australia. With Michael J. Meurer, Bessen wrote Patent Failure (Princeton 2008), highlighting the problems caused by poorly defined property rights. His new book, Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth (Yale 2015), looks at history to understand how new technologies affect wages and skills today. Bessen’s work has been widely cited in the press as well as by the White House, the U.S. Supreme Court, judges at the Court of Appeals for the Federal Circuit, and the Federal Trade Commission.
In 1983, Bessen developed the first commercially successful “what-you-see-is-what-you-get” PC publishing program, founding a company that delivered PC-based publishing systems to high-end commercial publishers. Intergraph Corporation acquired the company in 1993.
Karim R. Lakhani is an Associate Professor of Business Administration at the Harvard Business School and the Principal Investigator of the Crowd Innovation Lab and NASA Tournament Lab at the Institute for Quantitative Social Science. He specializes in the management of technological innovation in firms and communities. His research is on distributed innovation systems and the movement of innovative activity to the edges of organizations and into communities. He has extensively studied the emergence of open source software communities and their unique innovation and product development strategies. He has also investigated how critical knowledge from outside of the organization can be accessed through innovation contests. Currently Professor Lakhani is investigating incentives and behavior in contests and the mechanisms behind scientific team formation through field experiments on the TopCoder platform and the Harvard Medical School.
Telepresence robots, fiber networks, Facebook and polarization, and more... in this week's Buzz.
More Berkman in the News
So here we go again:
We hold these truths to be self-evident: that all customers are born free, that they are endowed by their creator with innate abilities to relate, to converse and to transact — on their own terms, and in their own ways. When sellers have labored long and hard to restrict those freedoms, and to ignore and insult the capacities enjoyed naturally by customers — by speaking, for example, of “targeting,” “capturing,” “acquiring,” “retaining,” “managing,” “locking in” and “owning” customers as if they were slaves — and when sellers work to inconvenience customers to the exclusive benefit of sellers themselves, for example through “loyalty programs” that require customers to carry around cards that thicken wallets and slow checkout in stores, it is the right of customers to obsolete the coercive systems to which both sellers and customers have become accustomed. We do this by providing ourselves with new tools for leveraging our native human powers, for the good of ourselves and sellers alike.
We therefore resolve to construct relationships in which we, the customers, control our own data, hold rights to metadata about ourselves, express loyalty at our own grace, deal in common and standard ways with all sellers and other second and third parties, protect our private persons and spaces, assert fair terms and means of engagement that work in mutually constructive ways for both ourselves and the other parties we engage, for the good of all.
We make this Declaration as free and independent persons, each with full agency, ready to form agreements, make choices, assert commitments, transact business, and otherwise function in the free and open environment we call The Marketplace.
To this we pledge our lives, our fortunes, and our precious time and attention.
Comments and improvements welcome.
Read the whole thing. It matters. Hugely.
By the way, I’ll be in New Zealand and Australia the week after next, keynoting Identity 2015 in Wellington and Customer Tech X in Melbourne, where I will also be on a number of panels. I’ll also be in Sydney for one day before heading back. Hope I can also hook up with some of the growing number of VRM companies there. There are many on the VRM Developers List. (More on a separate post later.)
Yesterday was a busy day before the Supreme Judicial Court in Massachusetts (SJC), as the Court heard arguments in Commonwealth v. Estabrook and Commonwealth v. Lucas. The Cyberlaw Clinic filed amicus briefs in both cases. Clinic students Naomi Gilens (JD ’16) and Sandra Hanian (JD ’15) attended the arguments on Thursday, along with Clinical Instructor Vivek Krishnamurthy and Clinical Fellow Andy Sellars.
Estabrook – in which the Clinic filed a brief on behalf of the ACLU of Massachusetts and the Electronic Frontier Foundation – concerns cell phones and location privacy. As we noted in our earlier blog post, cell phones track the location of users as part of their basic function. After the SJC’s decision in Commonwealth v. Augustine, law enforcement in Massachusetts generally has needed a warrant in order to obtain that information as part of a criminal investigation. Estabrook concerns whether, based on some of the language in Augustine, law enforcement does not need a warrant for a “brief period” of location information. In Estabrook, law enforcement obtained two weeks of cellphone location information, but now the state seeks to use only six hours of that data in its prosecution.
In our brief, the amici argued that Augustine requires police to get a warrant in a case like this, even if they plan to use a small amount, and that a blanket warrant requirement is best suited to avoiding confusion in the lower courts. As the brief mentioned, to rule the other way would suggest that law enforcement could obtain years of location information from cellphone companies without a warrant, so long as they only used a small part of that information. During argument, the Court cited this example specifically, and also vigorously debated who should have standing to challenge warrantless searches, and whether under the facts of Estabrook the evidence should be excluded if the court finds that the Fourth Amendment or the Massachusetts analogue were violated.
Immediately following Estabrook, the Court considered the case of Commonwealth v. Lucas. Our prior blog post on Lucas gives some background on the case, where the Clinic filed a brief on behalf of the New England First Amendment Coalition, the parent company of the Boston Globe, the parent company of WCVB-TV Channel 5, the Massachusetts Newspaper Publishers Association, the New England Newspaper and Press Association, Inc., and the New England Society of Newspaper Editors. Lucas concerns a rarely-used statute that criminalizes false statements made concerning a candidate or ballot question. A criminal defendant who sent a mailer in the last state legislative election accusing a representative of helping sex offenders brought a challenge to the statute under the First Amendment and Article 16 of the Massachusetts Declaration of Rights.
In our brief, the amici argued that the statute is unconstitutional in light of recent Supreme Court precedent, incentivizes candidates to use criminal law for partisan gain, and that counterspeech is the better remedy for correcting mistruths around election speech. The Court extensively discussed these and other topics, including how striking such statutes would implicate statutes on fraud, defamation, or lying to public officers, and whether this statute can actually serve its stated goal when a legal case would likely reach a verdict months after an election ended.
The oral arguments in both cases were recorded and should be made available on Suffolk Law’s archive of SJC arguments. We will update the blog when the Court issues its decisions.
Photo of the John Adams Courthouse care of Flickr user cmh231fl, and licensed under a Creative Commons Attribution Noncommercial 2.0 license.
It helps if you own the banks:
The report said Shor and his associates worked together in 2012 to buy a controlling stake in three Moldovan banks and then gradually increased the banks' liquidity through a series of complex transactions involving loans being passed between the three banks and foreign entities.
The three banks then issued multimillion-dollar loans to companies that Shor either controlled or was connected to, the report said.
In the end, over $767 million disappeared from the banks in just three days through complex transactions.
A large portion of this money was transferred to offshore entities connected to Shor, according to the report. Some of the money was then deposited into Latvian bank accounts under the names of various foreigners.
Moldova's central bank was subsequently forced to bail out the three banks with $870 million in emergency loans, a move designed to keep the economy afloat.
It's an insider attack, where the insider is in charge.
What's interesting to me is not the extent of the fraud, but how electronic banking makes this sort of thing easier. And possibly easier to investigate as well.
Facebook researchers have published an article in Science, certainly one of the most prestigious peer-reviewed journals. It concludes (roughly) that Facebook’s filtering out of news from sources whose politics you disagree with does not cause as much polarization as some have thought.
Unfortunately, a set of researchers clustered around the Berkman Center think that the study’s methodology is deeply flawed, and that its conclusions badly misstate the actual findings. Here are three responses well worth reading:
Also see Eli Pariser‘s response.
The post Facebook, filtering, polarization, and a flawed study? appeared first on Joho the Blog.
Excited about the possibility that he would project his creativity onto paper, I handed my 1-year-old son a crayon. He tried to eat it. I held his hand to show him how to draw, and he broke the crayon in half. I went to open the door and when I came back, he had figured out how to scribble… all over the wooden floor.
Crayons are pretty magical and versatile technologies. They can be used as educational tools — or alternatively, as projectiles. And in the process of exploring their properties, children learn to make sense of both their physical affordances and the social norms that surround them. “No, you can’t poke your brother’s eye with that crayon!” is a common refrain in my house. Learning to draw — on paper and with some sense of meaning — has a lot to do with the context, a context that I help create, a context that is learned outside of the crayon itself.
From crayons to compasses, we’ve learned to incorporate all sorts of different tools into our lives and educational practices. Why, then, do computing and networked devices consistently stump us? Why do we imagine technology to be our educational savior, but also the demon undermining learning through distraction? Why are we so unable to see it as a tool whose value is most notably discovered situated in its context?
The arguments that Peg Tyre makes in “iPads < Teachers” are dead on. Personalized learning technologies won’t magically on their own solve our education crisis. The issues we are facing in education are social and political, reflective of our conflicting societal values. Our societal attitudes toward teachers are deeply destructive, a contemporary manifestation of historical attitudes towards women’s labor.
But rather than seeing learning as a process and valuing educators as an important part of a healthy society, we keep looking for easy ways out of our current predicament, solutions that don’t involve respecting the hard work that goes into educating our young.
In doing so, we glom onto technologies that will only exacerbate many existing issues of inequity and mistrust. What’s at stake isn’t the technology itself, but the future of learning.
An empty classroom at the Carpe Diem school in Indianapolis.
Education shouldn’t be just about reading, writing, and arithmetic. Students need to learn how to be a part of our society. And increasingly, that society is technologically mediated. As a result, excluding technology from the classroom makes little sense; it produces an unnecessary disconnect between school and contemporary life.
This forces us to consider two interwoven — and deeply political — societal goals of education: to create an informed citizenry and to develop the skills for a workforce.
With this in mind, there are different ways of interpreting the personalized learning agenda, which makes me feel simultaneously optimistic and outright terrified. If you take personalized learning to its logical positive extreme, technology will educate every student as efficiently as possible. This individual-centric agenda is very much rooted in American neoliberalism.
But what if there’s a darker story? What if we’re really training our students to be robots?
Let me go cynical for a moment. In the late 1800s, the goal of education in America was not particularly altruistic. Sure, there were reformers who imagined that a more educated populace would create an informed citizenry. But what made widespread education possible was that American business needed workers. Industrialization required a populace socialized into very particular frames of interaction and behavior. In other words, factories needed workers who could sit still.
Many of tomorrow’s workers aren’t going to be empowered creatives subscribed to the mantra of, “Do what you love!” Many will be slotted into systems of automation that are hybrid human and computer. Not in the sexy cyborg way, but in the ugly call center way.
Like today’s retail laborers who have to greet every potential customer with a smile, many humans in tomorrow’s economy will do the unrewarding tasks that are too expensive for robots to replace. We’re automating so many parts of our society that, to be employable, the majority of the workforce needs to be trained to be engaged with automated systems.
All of this raises one important question: who benefits, and who loses, from a technologically mediated world?
Education has long been held up as the solution to economic disparity (though some reports suggest that education doesn’t remedy inequity). While the rhetoric around personalized learning emphasizes the potential for addressing inequity, Tyre suggests that good teachers are key for personalized learning to work.
Not only are privileged students more likely to have great teachers, they are also more likely to have teachers who have been trained to use technology — and to integrate it into the classroom’s pedagogy. If these technologies do indeed “enhance the teacher’s effect,” this does not bode well for low-status students, who are far less likely to have great teachers.
Technology also costs money. Increasingly, low-income schools are pouring large sums of money into new technologies in the hopes that those tools can fix the various problems that low-status students face. As a result, there’s less money for good teachers and other resources that schools need.
I wish I had a solution to our education woes, but I’ve been stumped time and again, mostly by the politics surrounding any possible intervention. Historically, education was the province of local schools making local decisions. Over the last 30 years, the federal government and corporations alike have worked to centralize education.
From textbooks to grading systems, large companies have standardized educational offerings, while making schools beholden to their design logic. This is how Texas values get baked into Minnesota classrooms. Simultaneously, over legitimate concern about the variation in students’ experiences, federal efforts have attempted to implement learning standards. They use funding as the stick for conformity, even as local politics and limited on-the-ground resources get in the way.
Personalized learning has the potential to introduce an entirely new factor into the education landscape: network effects. Even as ranking systems have compared schools to one another, we’ve never really had a system where one student’s learning opportunities truly depend on another’s. And yet, that’s core to how personalized learning works. These systems don’t evolve based on the individual, but based on what’s learned about students writ large.
Personalized learning is, somewhat ironically, far more socialist than it may first appear. You can’t “personalize” technology without building models that are deeply dependent on others. In other words, it is all about creating networks of people in a hyper-individualized world. It’s a strange hybrid of neoliberal and socialist ideologies.
An instructor works with a student in the learning center at the Carpe Diem school in Indianapolis.
Just as recommendation systems result in differentiated experiences online, creating dynamics where one person’s view of the internet radically differs from another’s, so too will personalized learning platforms.
More than anything, what personalized learning brings to the table for me is the stark reality that our society must start grappling with the ways we are both interconnected and differentiated. We are individuals and we are part of networks.
In the realm of education, we cannot and should not separate these two. By recognizing our interconnected nature, we might begin to fulfill the promises that technology can offer our students.
This post was originally published to Bright at Medium on April 7, 2015. Bright is made possible by funding from the New Venture Fund, and is supported by The Bill & Melinda Gates Foundation.
Today in Science, members of the Facebook data science team released a provocative study about adult Facebook users in the US “who volunteer their ideological affiliation in their profile.” The study “quantified the extent to which individuals encounter comparatively more or less diverse” hard news “while interacting via Facebook’s algorithmically ranked News Feed.”*
My interpretation in three sentences:
I think this should not be hugely surprising. For example, what else would a good filter algorithm be doing other than filtering for what it thinks you will like?
But what’s really provocative about this research is the unusual framing. This may go down in history as the “it’s not our fault” study.
I carefully wrote the above based on my interpretation of the results. Now that I’ve got that off my chest, let me tell you about how the Facebook data science team interprets these results. To start, my assumption was that news polarization is bad. But the end of the Facebook study says:
“we do not pass judgment on the normative value of cross-cutting exposure”
This is strange, because there is a wide consensus that exposure to diverse news sources is foundational to democracy. Scholarly research about social media has–almost universally–expressed concern about the dangers of increasing selectivity and polarization. But it may be that you do not want to say that polarization is bad when you have just found that your own product increases it. (Modestly.)
And the sources cited just after this quote sure do say that exposure to diverse news sources is important. But the Facebook authors write:
“though normative scholars often argue that exposure to a diverse ‘marketplace of ideas’ is key to a healthy democracy (25), a number of studies find that exposure to cross-cutting viewpoints is associated with lower levels of political participation (22, 26, 27).”
So the authors present reduced exposure to diverse news as a “could be good, could be bad” but that’s just not fair. It’s just “bad.” There is no gang of political scientists arguing against exposure to diverse news sources.**
The Facebook study says it is important because:
“our work suggests that individuals are exposed to more cross-cutting discourse in social media than they would be under the digital reality envisioned by some”
Why so defensive? If you look at what is cited here, this quote is saying that this study showed that Facebook is better than a speculative dystopian future.*** Yet the people referred to by this word “some” didn’t provide any sort of point estimates that were meant to allow specific comparisons. On the subject of comparisons, the study goes on to say that:
“we conclusively establish that…individual choices more than algorithms limit exposure to attitude-challenging content.”
“compared to algorithmic ranking, individuals’ choices about what to consume had a stronger effect”
Alarm bells are ringing for me. The tobacco industry might once have funded a study that says that smoking is less dangerous than coal mining, but here we have a study about coal miners smoking. Probably while they are in the coal mine. What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.
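The sequencing point can be made concrete with a toy calculation. The numbers below are invented for illustration (they are not the study's figures); the point is only that exposure is the product of the two stages, so the user's choices apply only to what the algorithm already let through.

```python
# Toy illustration with made-up numbers (not the study's figures).
# Stage 1: the ranking algorithm decides which stories appear in the feed.
# Stage 2: the user decides which of those stories to click.

cross_cutting_shared = 1000    # hypothetical stories shared by friends

algorithm_pass_rate = 0.92     # fraction the ranker surfaces (invented)
user_click_rate = 0.76         # fraction of surfaced stories clicked (invented)

surfaced = cross_cutting_shared * algorithm_pass_rate
consumed = surfaced * user_click_rate

print(f"surfaced by the algorithm: {surfaced:.0f}")   # 920
print(f"actually consumed:         {consumed:.0f}")   # 699

# The user only ever chooses from what the algorithm surfaced, so the two
# reductions compose in sequence; asking which one "has a stronger effect"
# treats the factors of a product as if they were independent alternatives.
```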
The only reason I can think of that the study is framed this way is as a kind of alibi. Facebook is saying: It’s not our fault! You do it too!
In my summary at the top of this post, I wrote that the study was about people “who volunteer their ideological affiliation in their profile.” But the study also describes itself by saying:
“we utilize a large, comprehensive dataset from Facebook.”
“we examined how 10.1 million U.S. Facebook users interact”
These statements may be factually correct, but I found them misleading. Reading quickly, I took this to mean that out of the at least 200 million Americans who have used Facebook, the researchers drew a “large” sample that was representative of Facebook users (though not of the US population). The “limitations” section discusses the demographics of “Facebook’s users,” as would be normal if a sample had been drawn. There is no information about the selection procedure in the article itself.
Instead, after reading down in the appendices, I realized that “comprehensive” refers to the survey research concept: “complete,” meaning that this was a non-probability, non-representative sample that included everyone on the Facebook platform. But out of hundreds of millions, we ended up with a study of 10.1m because users were excluded unless they met these four criteria:
That #4 is very significant. Who reports their ideological affiliation on their profile?
It turns out that only 9% of Facebook users do that. Of those who report an affiliation, only 46% report it in a way that is “interpretable.” That means this is a study about the 4% of Facebook users unusual enough to want to tell people their political affiliation on the profile page. That is a rare behavior.
More important than the frequency, though, is that this selection procedure confounds the findings. We would expect the small minority who publicly identify an interpretable political orientation to behave quite differently from the average person when it comes to consuming ideological political news. The research claims simply don’t stand up against the selection procedure.
But the study is at pains to argue that (italics mine):
“we conclusively establish that on average in the context of Facebook, individual choices more than algorithms limit exposure to attitude-challenging content.”
The italicized portion is incorrect because the appendices explain that this is actually a study of a specific, unusual group of Facebook users. The study is designed in such a way that the selection for inclusion in the study is related to the results. (“Conclusively” therefore also feels out of place.)
Last year there was a tremendous controversy about Facebook’s manipulation of the news feed for research. In the fracas, one of the controversial study’s co-authors revealed that, based on the feedback received after the event, many people didn’t realize the Facebook news feed was filtered at all. We also recently presented research with similar findings.
I mention this because when the study states it is about selection of content, who does the selection is important. The study gives no sense that a user who chooses something is fundamentally different from the algorithm hiding something from them. In fact, the filtering algorithm is driven by user choices (among other things), yet users don’t understand the relationship their choices have to the outcome.
In other words, the article’s strange comparison between “individuals’ choices” and “the algorithm” should be read as “things I choose to do” versus the effect of “a process Facebook has designed without my knowledge or understanding.” Again, they can’t be compared in the way the article proposes because they aren’t equivalent.
I struggled with the framing of the article because the research talks about “the algorithm” as though it were an element of nature, or a naturally occurring process like convection or mitosis. There is also no sense that it changes over time or that it could be changed intentionally to support a different scenario.*****
Facebook is a private corporation with a terrible public relations problem. It is periodically rated one of the least popular companies in existence. It is currently facing serious government investigations into illegal practices in many countries, some of which stem from the manipulation of its news feed algorithm. In this context, I have to say that it doesn’t seem wise for these Facebook researchers to have spun these data so hard in this direction, which I would summarize as: the algorithm is less selective and less polarizing. Particularly when the research finding in their own study is actually that the Facebook algorithm is modestly more selective and more polarizing than living your life without it.
Update: (6pm Eastern)
Wow, if you think I was critical, have a look at these. It turns out I am the moderate one.
Eszter Hargittai from Northwestern posted on Crooked Timber that we should “stop being mesmerized by large numbers and go back to taking the fundamentals of social science seriously.” And (my favorite): “I thought Science was a serious peer-reviewed publication.”
Nathan Jurgenson from Maryland and Snapchat wrote on Cyborgology (“in a fury“) that Facebook is intentionally “evading” its own role in the production of the news feed. “Facebook cannot take its own role in news seriously.” He accuses the authors of using the “Big-N trick” to intentionally distract from methodological shortcomings. He tweeted that “we need to discuss how very poor corporate big data research gets fast tracked into being published.”
Zeynep Tufekci from UNC wrote on Medium that “I cannot remember a worse apples to oranges comparison” and that the key take-away from the study is actually the ordering effects of the algorithm (which I did not address in this post). “Newsfeed placement is a profoundly powerful gatekeeper for click-through rates.”
A comment helpfully pointed out that I used the wrong percentages in my fourth point when summarizing the piece. Fixed it, with changes marked.
It’s now one week since the Science study. This post has now been cited/linked in The New York Times, Fortune, Time, Wired, Ars Technica, Fast Company, Engadget, and maybe even a few more. I am still getting emails. The conversation has fixated on the <4% sample, often saying something like: “So, Facebook said this was a study about cars, but it was actually only about blue cars.” That’s fine, but the other point in my post is about what is being claimed at all, no matter the sample.
I thought my “coal mine” metaphor about the algorithm would work but it has not always worked. So I’ve clamped my Webcam to my desk lamp and recorded a four-minute video to explain it again, this time with a drawing.******
Here’s the video:
If the coal mine metaphor failed me, what would be a better metaphor? I’m not sure. Suggestions?
* Diversity in hard news, in their study, would be a self-identified liberal who receives a story from FoxNews.com, or a self-identified conservative who receives one from the HuffingtonPost.com, where the stories are about “national news, politics, [or] world affairs.” In more precise terms, for each user “cross-cutting content” was defined as stories that are more likely to be shared by partisans who do not have the same self-identified ideological affiliation that you do.
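As a toy encoding of that definition (my own sketch, not the study’s actual scoring procedure), a story counts as cross-cutting for a user when its sharers lean the other way:

```python
# Toy sketch of the "cross-cutting" definition above; my own encoding, not
# the study's actual scoring procedure.

def is_cross_cutting(user_affiliation, liberal_shares, conservative_shares):
    """True if the story's sharers lean away from the user's affiliation."""
    story_leans = "conservative" if conservative_shares > liberal_shares else "liberal"
    return story_leans != user_affiliation

# A self-identified liberal receiving a story shared mostly by conservatives:
print(is_cross_cutting("liberal", liberal_shares=120, conservative_shares=880))  # True
```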
** I don’t want to make this even more nitpicky, so I’ll put this in a footnote. The paper’s use of citations to Mutz and to Huckfeldt et al. to support the claim that “exposure to cross-cutting viewpoints is associated with lower levels of political participation” is just bizarre. I hope it is a typo. These authors don’t advocate against exposure to cross-cutting viewpoints.
*** Perhaps this could be a new Facebook motto used in advertising: “Facebook: Better than one speculative dystopian future!”
**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.
***** Facebook is behind the algorithm, but it is putting research about the algorithm through peer review without disclosing how the algorithm works, even though that is a key part of the study. There is also no way to reproduce the research (or do a second study on a primary phenomenon under study, the algorithm) without access to the Facebook platform.
****** In this video, I intentionally conflate (1) the number of posts filtered and (2) the magnitude of the bias of the filtering. I did so because the difficulty with the comparison works the same way for both, and I was trying to make the example simpler. Thanks to Cedric Langbort for pointing out that “baseline error” is the clearest way of explaining this.
We identified three types of scams happening on Jiayuan. The first involves advertising escort services or illicit goods, and is very similar to traditional spam. The other two are far more interesting and specific to the online dating landscape. One type of scammer is what we call the swindler. In this scheme, the scammer starts a long-distance relationship with an emotionally vulnerable victim, and eventually asks her for money, for example to purchase a flight ticket to visit her. Needless to say, after the money has been transferred the scammer disappears. The other interesting type of scam we identified is what we call dates for profit. In this scheme, attractive young ladies are hired by the owners of fancy restaurants. The scam consists of having the ladies contact people on the dating site, take them on a date at the restaurant, have the victim pay for the meal, and never arrange a second date. This scam is particularly interesting because there is a good chance the victim will never realize he's been scammed -- in fact, he probably had a good time.
“Blockhead by Paul McCarthy @ Tate Modern” image from flickr user Matt Hobbs. Used by permission.
Alan Turing proposed what is the best known criterion for attributing intelligence, the capacity for thinking, to a computer. We call it the Turing Test, and it involves comparing the computer’s verbal behavior to that of people. If the two are indistinguishable, the computer passes the test. This might be cause for attributing intelligence to the computer.
Or not. The best argument against a behavioral test of intelligence (like the Turing Test) is that maybe the exhibited behaviors were just memorized. This is Ned Block’s “blockhead” argument in a nutshell. If the computer just had all its answers literally encoded in memory, then parroting those memorized answers is no sign of intelligence. And how are we to know from a behavioral test like the Turing Test that the computer isn’t just such a “memorizing machine”?
In my new(ish) paper, “There can be no Turing-Test–passing memorizing machines”, I address this argument directly. My conclusion can be found in the title of the article. By careful calculation of the information and communication capacity of space-time, I show that any memorizing machine could pass a Turing Test of no more than a few seconds, which is no Turing Test at all. Crucially, I make no assumptions beyond the brute laws of physics. (One distinction of the article is that it is one of the few philosophy articles in which a derivative is taken.)
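The flavor of the argument can be illustrated with a back-of-envelope calculation. The numbers here are my own rough choices, not the paper’s actual derivation: take Shannon’s estimate of about one bit of entropy per character of English, and a generous bound of 10^120 bits storable in the observable universe (the order of Seth Lloyd’s estimate). A lookup-table machine needs an entry for every plausible conversation history, so:

```python
import math

# Back-of-envelope sketch, not the paper's actual derivation.
entropy_per_char = 1.0   # ~1 bit/character of English (Shannon's estimate)
max_bits = 10 ** 120     # generous storage bound (order of Lloyd's estimate)

# A memorizing machine needs ~2^(entropy * length) table entries, so the
# longest conversation it can cover satisfies 2^(length) <= max_bits:
max_chars = math.log2(max_bits) / entropy_per_char
print(f"about {max_chars:.0f} characters")   # about 399 characters
```

At ordinary typing and reading speeds, a few hundred characters are exhausted almost immediately, which is the spirit (though not the letter) of the paper’s “few seconds” conclusion.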
Between 2003 and 2009, most music purchased through Apple’s iTunes store was locked using Apple’s FairPlay digital restrictions management (DRM) software, which is designed to prevent users from copying music they purchased. Apple did not seem particularly concerned by the fact that FairPlay was never effective at stopping unauthorized distribution and was easily removed with publicly available tools. After all, FairPlay was effective at preventing most users from playing their purchased music on devices that were not made by Apple.
No user ever requested FairPlay. Apple did not build the system because music buyers complained that CDs purchased from Sony would play on Panasonic players or that discs could be played on an unlimited number of devices (FairPlay allowed five). Like all DRM systems, FairPlay was forced on users by a recording industry paranoid about file sharing and, perhaps more importantly, by technology companies like Apple, who were eager to control the digital infrastructure of music distribution and consumption. In 2007, Apple began charging users 30 percent extra for music files not processed with FairPlay. In 2009, after lawsuits were filed in Europe and the US, and after several years of protests, Apple capitulated to their customers’ complaints and removed DRM from the vast majority of the iTunes music catalog.
Fundamentally, DRM for downloaded music failed because it is what I’ve called an antifeature. Like features, antifeatures are functionality created at enormous cost to technology developers. But unlike features, which users clamor to pay extra for, antifeatures are functionality users pay to have removed. You can think of antifeatures as a technological mob protection racket. Apple charges more for music without DRM, and independent music distributors often use “DRM-free” as a primary selling point for their products.
Unfortunately, after being defeated a half-decade ago, DRM for digital music is becoming the norm again through the growth of music streaming services like Pandora and Spotify, which nearly all use DRM. Impressed by the convenience of these services, many people have forgotten the lessons we learned in the fight against FairPlay. Once again, the justification for DRM is both familiar and similarly disingenuous. Although the stated goal is still to prevent unauthorized copying, tools for “stripping” DRM from services continue to be widely available. Of course, the very need for DRM on these services is reduced because users don’t normally store copies of music and because the same music is now available for download without DRM on services like iTunes.
We should remember that, like ten years ago, the real effect of DRM is to allow technology companies to capture value by creating dependence in their customers and by blocking innovation and competition. For example, DRM in streaming services blocks third-party apps from playing music from services, just as FairPlay ensured that iTunes music would only play on Apple devices. DRM in streaming services means that listening to music requires one to use special proprietary clients. For example, even with a premium account, a subscriber cannot listen to music from their catalog using an alternative or modified music player. It means that their television, car, or mobile device manufacturer must cut deals with their service to allow each paying customer to play the catalog they have subscribed to. Although streaming services are able to capture and control value more effectively, this comes at the cost of reduced freedom, choice, and flexibility for users and at higher prices paid by subscribers.
A decade ago, arguments against DRM for downloaded music focused on the claim that users should have control over the music they purchase. Although these arguments may not seem to apply to subscription services, it is worth remembering that DRM is fundamentally a problem because it means that we do not have control of the technology we use to play our music, and because the firms aiming to control us are using DRM to push antifeatures, raise prices, and block innovation. In all of these senses, DRM in streaming services is exactly as bad as FairPlay, and we should continue to demand better.
In this long article on the 2005 assassination of Rafik Hariri in Beirut, there's a detailed section on what the investigators were able to learn from the cell phone metadata:
At Eid's request, a judge ordered Lebanon's two cellphone companies, Alfa and MTC Touch, to produce records of calls and text messages in Lebanon in the four months before the bombing. Eid then studied the records in secret for months. He focused on the phone records of Hariri and his entourage, looking at whom they called, where they went, whom they met and when. He also followed where Adass, the supposed suicide bomber, spent time before he disappeared. He looked at all the calls that took place along the route taken by Hariri's entourage on the day of the assassination. Always he looked for cause and effect. How did one call lead to the next? "He was brilliant, just brilliant," the senior U.N. investigator told me. "He himself, on his own, developed a simple but amazingly efficient program to set about mining this massive bank of data."
The simple algorithm quickly revealed a peculiar pattern. In October 2004, just after Hariri resigned, a certain cluster of cellphones began following him and his now-reduced motorcade wherever they went. These phones stayed close day and night, until the day of the bombing - when nearly all 63 phones in the group immediately went dark and never worked again.
The investigators now turned their full attention to the cellphone records. Building on Eid's work, they determined that the assassins worked in groups, each with a leader and each adhering to specific procedures. Everyone in the group called the leader, and he called everyone in the group, but the lower-level operatives never called one another.
The investigators gave each group a color. The green group consisted of 18 Alfa phones, purchased with fake identification from two shops in South Beirut in July and August 2004. The purpose of the fake IDs was not to defraud Alfa out of payment; every month from September 2004 to May 2005, someone went to an Alfa office and paid all 18 bills in cash, without leaving any clue to his identity. The total phone bill for the green network, including activation fees, was $7,375 -- a prodigious amount, considering that 15 of the green group's 18 phones went almost entirely unused.
The first spike in call activity occurred in September 2004, immediately after Hariri announced his resignation. The investigators contend that the green group was at the center of the conspiracy. The phone number 3140023 belonged to the top leader, and the numbers 3159300 and 3150071 belonged to his two deputies. (He called them and they called him, but with those phones, they never called each other.) The two deputies carried phones belonging to other groups, through which they passed on instructions to the other participants in the operation. When a member of one group would call a group leader, the group leader would often follow up by switching to a green phone and calling the supreme leader, who was nearly always in South Beirut, where Hezbollah keeps its headquarters.
On Oct. 20, 2004, the day Hariri left office and his security detail was significantly reduced, the blue group went into operation. It originally worked according to the same rules as the green group, but its active membership increased from three phones to 15, with seven connected to Alfa and eight to MTC Touch. All of the blue phones were prepaid. Some were acquired as early as 2003 and had seen little or no use. The people who bought them also gave false identification, and again money seemed to be in plentiful supply. The minutes that expired each month went largely unused, but the phones were loaded again and again. When the blue group went dark, the phones still had unused minutes worth $4,287.
The prosecutors say the blue group followed Hariri's movements. On the morning of Oct. 20, its members were already deployed around Quraitem Palace. At 10:30 a.m., Hariri set out toward Parliament and then to the presidential palace, where Lahoud was waiting to receive his resignation. The cell towers picked up the blue group's members moving with him and calling their chief. From then on, the blue phones trailed Hariri nearly everywhere -- to Parliament, to meetings with political leaders, to long lunches at the Saint-Georges Yacht Club & Marina. When Hariri was at his home, so were they. When he flew abroad, they moved with him to the airport and then stopped operating until he returned, when they would pick up the trail again.
Eventually, the yellow group was added....
There's a lot more. It's section 6 of the article.
See also this example.
Some news: We’re launching a MOOC — a massive open online course — on news and media literacy. The course (here’s the registration page) will be based on an online course I currently teach at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, and will be open to all who are interested, at no charge.
The MOOC, which has received funding from the Robert R. McCormick Foundation, will be hosted at edX, one of the major–and rapidly growing–course platforms. ASU has become a member of the edX university consortium, and this is the first offering from the school. The course launches July 6, and registration is open now.
(Note: The media-literacy MOOC is not part of the ASU/edX Global Freshman Academy, which will be offering a battery of for-credit courses.)
We’re well aware that the jury is out, to put it mildly, on the ultimate value of MOOCs. Clearly they’ve been oversold in some ways. To think that they’ll take over education is absurd. Equally clearly, they have enormous potential. This course is experimental by definition, but we have two major goals: to make it a super-useful learning experience, and to learn from what happens in order to improve the next time.
One of the best parts of this project is the people involved. In the past several months we’ve recorded conversations with some of the smartest folks I know in the news and media-literacy communities. They include Wikipedia’s Jimmy Wales; New York Times Public Editor Margaret Sullivan; CNN’s Brian Stelter; media-literacy guru Renee Hobbs; and many others. We’ll be featuring these conversations in the course.
This is a team effort in every possible way. I’m incredibly fortunate to be working with the ASU Online folks, who’ve been helping me sweat the details and who know lots of things I don’t. A team of students at the Cronkite School’s Public Relations Lab has put together some great marketing ideas. PhD candidate Kristy Roschke, whose focus is media literacy, is playing a key role in the course development and will be the lead teaching assistant when the course goes live.
MOOCs are open in ways that most university courses are not. Openness is core to my work: the Mediactive book, on which the course is largely based, is free to read online and/or download from this site, and is available under a Creative Commons copyright license (“Some Rights Reserved”). I want to apply the principle of openness, as much as possible, to the new project. So I’ll be blogging regularly about how we’re doing this between now and the July 6 launch.
You may find this interesting to watch. If so, and if you think we can improve on what we’re doing, let me know. I’m looking for the best ideas, not just my own.
Last summer I interned at Atlantic Public Media in Woods Hole, MA. I spent the summer making Sonic IDs and produced a 6-minute feature about the upcoming 400th Anniversary of Plymouth Plantation (in 2020) from the perspective of Native Wampanoag. I’d played around a little with audio editing before getting to APM, but didn’t realize just how much work goes into production until I was sitting at a desk, staring at hours upon hours of audio, and trying to find those golden 30-60 seconds.
My appreciation for public radio and audio-storytelling increased exponentially in those moments. It’s hard work, people!! It takes a long time to really figure it out and get it right. I can’t count how many times I read Ira Glass’ quote about creativity that summer. It’s going to take a while, it’s going to take a while… just gotta fight through it. Nothing I produce at this stage in my life is actually going to feel good enough. Just. Have. To. Keep. Trying. Ahhh.
I spent August – December 2014 in Kathmandu, Nepal living with a host-family and learning Nepali. I worked with the phenomenal power-couple Jaya Luintel and Madhu Acharya, two incredible and renowned radio journalists in Nepal. I worked mostly with Jaya doing some writing for her organization The Story Kitchen. I didn’t produce a radio story in Nepal for a number of reasons, but largely I was trying to figure out the ethics of recording in a cultural context completely different from my own. I did a final project on Women Exercising in Nepal. I was inspired by a group of women from the Siddhipur Jogging Group. I met them while on one of my early morning runs with a friend and we were graciously welcomed into their community and their homes. These women became family. I returned in late-December to a world of snow, and ice, and closed-off New England homes. It was a hard transition to say the least and I miss my family every day. We talk on the phone often.
The recent earthquake in Nepal has been devastating. To learn that the people who so graciously shared their lives and their culture, who became both my family and my friends are struggling in ways that are difficult to fathom is heartbreaking. My host-family and many of the women from the Siddhipur Jogging group lost their homes. Many lives have been lost and countless more will be threatened as the situation continues to worsen. I’m trying to find ways to effectively assist in recovery efforts from afar. Nepal and its people have a long road ahead in terms of recovery. I had been planning to return in June with my parents (this would be their first time out of the US!), but we all agree the money can be better used to support relief efforts.
I’m really excited to be here at PRX this summer. I’ll be here in the office once a week, on Tuesdays. When I’m not at PRX I’ll be working at Brandmoore Farm in Rollinsford, NH. At Brandmoore I’m doing a combination of farm work and media production. Becky and Phil Brand so graciously invited me to work as a Digital Media Producer / Outreach Coordinator this summer. I’ll be creating content to showcase their farm and also look into the ways that local farms and food systems can reach a wider range of the population through public media. The content I produce might also be used for a Kickstarter Campaign they’re organizing in the near future. I’m hoping to integrate that work into something I do here at PRX. What that will look like, however, I’m not sure!
In my spare time I like to run in the woods, bike long distances, and experiment with fresh ingredients in the kitchen.
If you read all the way to here, you’re a trooper! I definitely wrote way too much – but hey, that’s me.
Questions? Comments? Fantastic story?! Shoot me an email. firstname.lastname@example.org
I’m at the wonderful Re:publica conference for a single day, racing home to teach tomorrow… and thus far I’ve given a keynote and done over 12 interviews, so I haven’t gotten the whole feel of the conference yet. Still, it’s one of the most wonderful and high energy venues I’ve ever spoken at, and I’m having a great time.
My talk this morning focused on civics in the age of mistrust. The organizers (wisely?) put a different title on it, but the audience clearly got the core idea: we're at a moment in time where mistrust in institutions is at a very high level, and any approach to revitalizing public life or fixing civics needs to start from understanding mistrust and harnessing it productively.
At some point soon, I hope to annotate my speakers notes, likely on FOLD. But here are the rough ones now, for those who missed the talk, or for those who are interested and want to know what I meant to say.
I want to begin my talk by showing you a Christmas gift I received in 2012 from my friend, journalist Quinn Norton.
I received the postcard a few weeks after she published an essay that was both brilliant and troubling. It was titled “Don’t Vote” and it was, in part, an apology to her great-grandmother, who had marched in the streets to demand women’s right to vote, the right Quinn was now urging us to stop exercising. She writes “I have decided that I am on strike as a voter, until voting means something.”
Quinn is opting out of voting not out of ignorance, but out of knowledge and frustration: with gerrymandering, with legalized corruption, and with her growing sense of impotence at changing these problems through the ballot box. She closes the essay by urging us to “let your body be your ballot” – to make change in how you act in the world, what you stand for, and how the organizations you work with or the companies you work for treat people.
Her postcard is a much simpler statement: it’s an elegant essay reduced to a cartoon. The picture is of a brick with a logo that’s unmistakable to any American voter – it’s the sticker you receive when you vote. It’s like the ash they smear on your forehead on Ash Wednesday – visible, public evidence that you’ve done your civic duty. The postcard is a cartoon, not a concrete suggestion: it’s not an encouragement to riot so much as it is a reminder that participating in a system that’s badly broken is an endorsement of it.
Quinn wrote her essay after spending much of a year reporting on Occupy, embedded within the movement and visiting 14 of the camps; she wrote a moving eulogy for Occupy in Wired. In her reporting, she is clear that she was in, but not part of, Occupy, covering it as press and treating it with the seriousness that it deserved, as clear evidence of people dissatisfied with how systems are working and looking for ways to change them, or replace them with something different.
I pinned Quinn’s brick above my desk so that I would look at it every day.
It represented a tension between two sorts of civic engagement that I have been losing faith in: electoral, representative democracy and public protest.
I’m certainly not the only one losing faith in democracy’s ability to make change. We are seeing falling voting rates in the United States, with 2014 registering the lowest turnout in history for a US congressional election.
And the US is not alone. 2014 also saw the lowest turnout for an EU parliamentary election, and while EP elections always have lower turnout than national elections in Europe, both have been trending down in Europe since 1979, much as they have been in the US.
Lots of reasons have been offered for why participation in voting is decreasing. Many of these explanations blame the ignorance or laziness of voters: if only we weren’t so distracted by our phones and the internet, if only we weren’t so lazy, we’d take part in our critical civic duty. But this argument misses the critical fact that while participation in elections is shrinking, we’re experiencing a golden age of protest. And say what you will about people who take to the streets to protest their government, they may be many things, but they’re not lazy.
Protests are an essential part of democracy. They can be deeply effective as a way of demanding immediate change from those in power. Last week, my country watched people come out into the streets of Baltimore, New York and Boston to protest the death of Freddie Gray, a young man fatally injured after he was arrested by local police. After a week of protests, six police officers are now facing murder and manslaughter charges. Protest certainly doesn’t always work, but it can be powerful in forcing institutions to do the right thing.
Protest gets more complicated when you’re not protesting a single incident and demanding a response, but protesting against a larger system that’s broken.
2011 was a pivotal year for protest, with the Arab Spring protests, a wave of popular uprisings legitimately seeking to change oppressive governments. They’ve had a mixed outcome, as governments have gotten better at fending them off. The current tally gives us one clear success (Tunisia), three civil wars (Syria, Libya, Yemen), violent repression (Bahrain, Sudan), and the deeply complicated case of Egypt, where a successful revolution led to the election of an Islamist government, and subsequent popular protests led to a military coup.
We’ve learned that protests are good at counterpower, at ousting a surprised and unaware government, but that protests have a much harder time building governments than toppling them. Even though it’s philosophically easier to be excited about protests leading to revolution in monarchies than in democracies, by the middle of 2011, democratic movements in Europe and North and South America had picked up the spirit of the Arab Spring and turned it into an anti-politics movement – protesting against repressive and disempowering systems, not against singular injustices.
In Spain, the Indignados movement brought people into the streets, starting on May 15, 2011. Activists protested unemployment brought on by austerity policies, lack of opportunities for young people, and a general sense that Spain was being run on behalf of a wealthy elite at the expense of ordinary citizens. While the movement in the streets ended within a year, some of its supporters have built the political party Podemos, which is now the second largest in Spain by number of members, but which finished fourth in recent elections with only 8% of the vote.
The Occupy movement began in New York City on September 17, 2011, with Occupy Wall Street. The movement focused on inequality, financial corruption, and housing and college debt burdens, and had some measurable successes on a local scale, fighting evictions and buying back outstanding debt. It has brought discussions of inequality into the political dialog in the US and has helped establish a template for protest globally, with movements like Occupy Central in Hong Kong adopting its tactics and rhetoric… but even its most ardent supporters will concede that the movement has not led to major changes to the US political or economic system.
These protest movements throughout Europe, North and South America have demonstrated huge energy and enormous popular support. But it’s hard to point to tangible, systemic changes that parallel the scale of mobilizations that have taken place. This may point to a paradox of these broad, anti-political protests in democracies. Unless you’re going to overthrow a democratically elected government, the likely outcome of a protest is that you’re going to get invited into government to try to fix things. And as activists throughout history have figured out, fixing the problems of inequality, corruption and lack of opportunity is a lot harder than motivating people to protest against them.
I want to offer two other reasons to be skeptical of systemic change through protests.
Zeynep Tufekci is a brilliant scholar of social change and of protest. She conducted fieldwork focused on the Gezi Park protests, which brought at least 3.5 million Turks into the streets of 90 Turkish cities from May to August of 2013. Zeynep reports that the rallies featured an incredibly diverse group of protesters – from ultranationalists to gay and lesbian rights activists – and that they fell apart very quickly. While they were dramatic, they were also incredibly ineffective. The one shared objective of the movement – ousting Erdoğan – failed utterly, as Erdoğan was elected president in 2014 without need for a run-off.
Why? Zeynep argues that it’s much, much easier to bring people out to protest than it was in years past – you can organize on Facebook, report on Twitter, livestream on UStream and now on Periscope. Combine all these channels for mobilization with a message behind the protests that was maximally inclusive – quoting a poem by Rumi, the movement’s motto was “Sen de gel” – You come, too! But in years past, it took months of organizing behind the scenes to bring 50,000 people into the streets. Bringing out 50,000 meant that you’d held meetings with different groups and made deals and compromises to find a common agenda. Now you can bring out 50,000 people by announcing what you’re against and inviting people to join you. But when the authorities crack down, or when it comes time to turn from mobilization to making demands and setting an agenda, movements split and dissipate much more easily – and political leaders know this, and are less threatened by a million people in the streets today than they were by 50,000 a decade ago. What we may be building in the wake of the Arab Spring and the Occupy protests, Zeynep warns, is a form of protest that can mobilize but can’t set an agenda or build a movement.
If that sounds like bad news, here’s some worse news from another scholar, Ivan Krastev, chairman of the Center for Liberal Strategies, in Sofia, Bulgaria.
He worries that even if protests like the Indignados or Occupy succeed in ousting a government, much of what protesters are asking for is not possible. “Voters can change governments, yet it is nearly impossible for them to change economic policies.” As Indignados grows into Podemos, Krastev predicts that it’s going to be very hard for them to truly reverse policies on austerity – global financial markets are unlikely to let them do so, and can punish them by making it impossibly expensive to borrow.
Krastev offers the example of how Italy finally got rid of Silvio Berlusconi – not through popular protest, but through the bond market. When the bond market priced Italian debt at 6.5%, Berlusconi resigned, leaving Mario Monti to put austerity measures in place. You may have been glad to see Berlusconi go, but don’t mistake this for a popular revolt that kicked him out – it was a revolt by global lenders, and it basically set the tone for what the market would allow an Italian leader to do. As Krastev puts it, “Politics has been reduced to the art of adjusting to the imperatives of the market.” We’ve got an interesting test of whether this theory is right with Syriza, a left-wing party rooted in anti-austerity protests, now in power and facing possible default and exit from the Eurozone this month. What Krastev is saying is really chilling – we can oust bad people through protest, elect the right people and put them in power, and pressure our leaders to do the right things, and they may still not be powerful enough to give us the changes we really want.
If you’re feeling depressed at this point in the talk, that’s a good thing – it means you’re listening. But it also means that you may be looking for a new way forward, a third path between elections and protest. And for a lot of people – particularly for people like those in this room – we’ve hoped the way forward is through technology, through the mobile phone and the internet and the ways they might make engaging with society more fair, more participatory, make governments more responsive and closer to the will of the people.
I’m part of the first generation to use and build the world wide web – I dropped out of graduate school in 1994 to help found one of the world’s first social media companies. Like a lot of people who were working on the internet in the mid-1990s, I wasn’t there for the money, because frankly, no one was making money online at that point. I was there because people believed that the internet was going to change the world.
We believed that the internet was going to oust powerful companies that dominated markets with monopolies and make it impossible for other monopolies to take their place, because it was so easy to create new businesses online that no one would ever control the whole market for something as essential as search or online messaging.
We believed that the internet routes around censorship and that publishing online would allow people to speak freely, that censoring the internet was like nailing gelatin to the wall, as President Clinton once said, and that when countries like China encountered the internet, their governments would fall as people learned how they were controlled and manipulated.
We believed that the internet would let people interact with each other in new and honest ways, because no one knew who we were online. In a space where no one knew whether you were male or female, black or white, European or African, we would overcome the prejudices of the offline world and have conversations that were fully inclusive of all perspectives.
We believed that governments didn’t care what happened online, that they weren’t paying attention to it, and that if they were, the internet was far too vast to monitor all of it, and that even if they did, the companies we were using to communicate would protect our privacy, and that we could use unbreakable encryption to protect anything that truly needed to be secret.
In other words, we believed a lot of dumb stuff.
It turns out that the internet doesn’t magically make the world a better place. We’re starting to wake up to that now – when the inventor of the World Wide Web launches a campaign to build “the web we want”, a web that’s very different from the one we’ve all built over the last twenty-five years, it’s a pretty clear sign that this remarkable technology alone doesn’t transform the world in the ways we might hope.
Of all the missed opportunities and wrong turns, the most disappointing may be the way the internet has failed to transform politics and government.
Some hoped that the internet would transform elections, making it easier for exciting new and unknown candidates to build a political base and take power. It works, sometimes – I had lunch yesterday with my favorite German politician, Malte Spitz of the Green Party, and it’s hard to imagine him getting elected without the internet. But it turns out that existing political parties have gotten very good at using the internet to raise money and disseminate propaganda, and to target advertising that persuades us how to vote, for candidates who aren’t using the internet to solicit ideas and input.
We hoped that by demanding transparency, we would expose waste and corruption and make government more responsive and efficient. But it turns out that it’s a long path from releasing data sets to exposing systemic flaws in governance, and that it’s a task that requires not just coders, but journalists, artists, storytellers and activists. Even when we’re confronted with a trove of secrets – leaked diplomatic and intelligence documents – it takes enormous work to turn leaks into revelations, and revelations into action. Transparency is a necessary but not sufficient condition for change.
We hoped that we as citizens might take on the work of actually crafting and shaping legislation, stepping back from the compromise that is representative democracy to participate directly in writing the laws that govern our societies. While we’ve had precious few successes, it’s worth celebrating the victories we have, like the Marco Civil da Internet in Brazil, written not only by professionals but by thousands of citizens. Ronaldo Lemos and his colleagues at the Institute of Technology and Society in Rio are releasing a new platform, Plataforma Brasiliana, which will make it easier to collectively author legislation, but questions remain: yes, supremely geeky Brazilians were willing to take time to author laws about the internet, but will anyone show up to write better tax policy?
Micah Sifry, co-founder of Personal Democracy Forum, is one of the smartest people thinking about the internet and politics, and he’s recently published a brave and terrific book, The Big Disconnect: Why the Internet Hasn’t Transformed Politics (Yet). It’s brave because Micah thoroughly acknowledges that we haven’t gotten what we wanted from twenty years of bringing the internet to politics – indeed, in the US, our politics at the federal level are far worse than they were two decades ago. Fixing this is going to require us to build some tools that are very, very difficult to build. We need to solve the hardest problem in politics: how do you let people deliberate at scale, so that they can build movements, advocate for issues, and work with elected officials to bring new solutions into the world? And he’s hopeful that people may be starting to build these tools, looking to people like Pia Mancini, the leader of Argentina’s Net Party, which is building Democracy OS, a set of tools that let citizens vote on policy proposals and work with legislators in the Net Party to promote new legislation.
I think Micah’s right that we need new tools. But I think the problem is even deeper than he imagines. When you ask Americans whether they trust their government to do the right thing most of the time, 24% answer yes. That’s down from 77% in 1964. For my entire lifetime, there’s been only one moment when a majority of the American people trusted the government to do the right thing… and that’s the moment George W. Bush was leading us into a disastrous war in Iraq.
But it’s not just confidence in government that’s dropping in the US – it’s trust in institutions of all kinds. From the 1960s to now, Americans tell you that they have less trust in newspapers, in churches, in non-profit organizations, in corporations, in banks, in the medical establishment. The only institutions where trust is increasing in my country are in the military and the police (though trust in the police is changing very quickly right now.)
I don’t have data at the same granularity for European nations as I do for the US, and I don’t want to make the mistake of treating European nations as a group, but I want to note that one survey sees several European nations as having a bigger problem with institutional mistrust than the US. Edelman’s Trust Barometer is built annually by asking 1,000 citizens in each of 33 nations questions about whether they trust the government, NGOs, business and the media. It found that trust is at an all-time low, and that Germany, Italy, Poland, Spain, Sweden and Ireland all have a lower level of trust in institutions than we are experiencing in the US.
I don’t know what’s causing this increase in mistrust in the US and Europe – I don’t think it’s a single thing, but a combination of factors. Inequality is on the rise, globally, as Thomas Piketty has been telling us, and it’s easy for trust to decline when we feel like very few people are getting rich and we’re getting poorer – whether we blame government, corporations or banks, we lose trust in those institutions. Transparency, for all its benefits, means that we know more about the failings of institutions, about corruption or just sheer incompetence – it’s hard to learn about the causes of the 2008 financial crisis and come out with trust intact in the global financial system and those responsible for regulating it. The professionalization of politics has something to do with mistrust – once we start seeing politicians as a different class of people rather than as people like us, representing our interests, we don’t trust them to have our best interests at heart. I think mistrust can come from a sense of powerlessness – if governments and corporations and the media can’t rally together and make real progress on a critical issue like global warming, are they really as powerful as we think they are?
I fear that mistrust has something to do with globalization, and with increasing diversity in our societies. Mistrust began to rise in the US during the reforms of the civil rights era that began ensuring equal rights for African-American citizens… and it’s possible that people started trusting governments and universities less when those institutions were providing services not just to people like them, but to people of other ethnic or national backgrounds. This might be a way to think about euroskepticism and rising nationalism, as some people mistrust institutions that are redistributing wealth across the continent to people they identify as “other”.
Political scientists and economists are generally pretty scared of mistrust. A low level of mistrust is necessary for a liberal democracy to function: the legislative, executive and judicial branches all look at each other with low-level mistrust so that they can act as checks and balances on each other. But high levels of mistrust end up being corrosive. If people don’t trust banks, they don’t deposit money, and eventually the bank can’t make loans. If people don’t trust governments, they don’t pay taxes, and the government can do less and less. Institutional mistrust is corrosive in large doses – it leads to societies where we interact and trade only with people we trust deeply, like family or tribe.
Many of my friends around the world who are trying to revitalize interest in civics are working to increase the trust in institutions. Whether they’re encouraging people to monitor elections, releasing government data sets or helping cities find and fill potholes, they’re working to lower the cost of civic participation and give people a better chance to have a positive experience with the institutions they’re affected by. I think this work is important and admirable, but I also think it’s not nearly enough to tackle the problems we face today.
The radical idea I want to put forward is that we can’t reverse the rise of mistrust. Instead, we’ve got to figure out how to channel it productively. We have to start treating mistrust – our deep skepticism of the institutions in our lives and in our communities – as a civic asset.
I’m seeing at least three different ways people are learning to harness mistrust. In our research at the Center for Civic Media, we’re seeing a great deal of civic activism unfolding outside of government institutions: people with a high degree of frustration and mistrust who are finding ways to make change beyond winning elections and passing laws.
In his book Code, Lawrence Lessig observed that there are at least four ways we regulate behavior in our societies. We pass and enforce laws to prohibit certain behaviors; we use markets to make some behaviors expensive and others cheap; we use code and other architectures to make some behaviors technically possible or impossible; and we use norms to make some behaviors socially desirable and others taboo. When we lose faith in some kinds of institutions, say in governments’ abilities to pass and enforce good laws, we see people channeling their desire for change towards code, towards markets and towards norms.
I’d like to see European governments take action to prevent the massive violations of privacy we’ve seen committed by the NSA, but I have very little faith that the American government will make significant changes to prevent the sorts of violations revealed by Edward Snowden. And since I don’t have very much faith in my government to make these changes, it’s exciting to see projects putting their faith in code to make surveillance far more difficult by making use of strong encryption routine. Mailpile, Mailvelope, Tor, Whisper Systems, The Guardian Project – these are all people channeling their frustration and mistrust into making change through code.
I’d like an international binding carbon tax, but it’s hard to have faith that the UN and other international institutions will find a balance among countries like China and India, which want to give billions of citizens a better lifestyle, fossil-fuel-producing nations, and nations like mine, where a remarkable percentage of people aren’t convinced that human beings have a role in causing climate change. But even if I’m skeptical of governments and international institutions, I can look to the market – to companies like Tesla, trying to build beautiful and exciting electric cars, and to entrepreneurs around the world working to make solar power not only the most sensible way to produce power, but the cheapest.
Many of the hardest problems we face worldwide are problems of human rights, of protecting the rights of minorities from the actions of majorities. It’s critically important that we legislate to protect the rights of all people, but it’s not enough when we lose trust in the institutions designated to protect those rights, as is happening with Americans and our police forces today. Protecting the rights of minorities, whether it’s African Americans in my country, or the Roma in Europe, requires us to change norms, to address our basic beliefs. Around the world, we’re seeing people working to change norms by making media and building movements – the #blacklivesmatter movement has created a narrative that is forcing American law enforcement to face that they’ve got a real and persistent problem with racial bias and may be the first step towards making real change.
So one way to harness mistrust is to try new theories of change, to look for ways we can make change through markets, code and norms. Another way to harness this mistrust is to become engaged, careful critics of the institutions we mistrust.
Luigi Reggi was working for the Italian government, building a massive open data system so that people could see where EU funds were being spent in his community. He built a gorgeous open data portal, but found that not only did most people ignore the data he worked to present, but they also had a general sense that Italy wasn’t getting its money’s worth from these EU projects. So, working outside the government, he started something new. Monithon is a project that invites people to monitor an EU funded community project, to ask hard questions about whether the project ever got completed, whether it’s working well or at all, whether the project meets a community’s needs. Their biggest partner is Libera, a group that works to identify and resist the role of the mafia in Italy, and they’re mobilizing not just seasoned activists to monitor the effectiveness of EU projects around Italy, but high school students, who are now taking on evaluating these projects in their community as a hands-on lesson in citizenship.
I call this idea “monitorial citizenship”, and my students and I have been working on ways we can make it work at scale, inviting thousands of people to take on the task of monitoring their government not just as a one-time thing, but as essential and important a task of citizenship as voting. We’ve launched a project in Sao Paulo, Brazil, where the mayor, Fernando Haddad, started his term by publishing 100 concrete promises – I’ll put this many streetlamps in this neighborhood, build this many new low-income housing units. He held elections for over 1000 citizen monitors whose job it is to see that the mayor lives up to these goals. And we’ve built a tool that lets citizens meet and decide what infrastructures they want to monitor in their communities – schools, playgrounds, sidewalks – and quickly build a survey that anyone with a smartphone can take. The data they collect – the photos, GPS locations, questions they answer – get posted to a shared map which can be shared with the government or with the press, or used by the community to self-organize and take on these challenges directly. We launched it three weeks ago in Sao Paulo and it’s popular enough that we’ve expanded projects into nine Brazilian cities, working with neighborhood and community groups.
Here’s the interesting thing about monitorial citizenship – sometimes you find that your mistrust of institutions is deserved, and you’ve got data to back up your suspicions. And sometimes you discover that the people who represent you are doing a better job than you’d imagined. It’s a model that can turn mistrust into advocacy for change or can lessen mistrust, and it works as well if you’re auditing the promises a company, a university or a government makes.
Some of the most exciting mistrust-fueled work I’m seeing looks at the idea that we could eliminate institutions altogether, building systems designed from the ground up to be decentralized. One of the first times I was in Berlin, more than ten years ago, I watched the folks from Freifunk build a mesh network that spanned the entire city, a network with no single point of failure and no single internet service provider in charge of it. This same impulse, to build systems that have no center, is what’s animating the interest in Bitcoin, a currency that doesn’t force us to trust central banks or currency policies, whose faith is in algorithms and distributed computation, not in the institutions that failed so badly in 2007.
These three approaches – building new institutions, becoming engaged critics of the institutions we’ve got, and looking for ways to build a post-institutional world – all have their flaws. We need the new decentralized systems we build to work as well as the institutions we are replacing, and when Mt. Gox disappears with our money, we’re reminded what a hard task this is. Monitorial citizenship can lead to more responsible institutions, but not to structural change. When we build new companies, codebases and movements, we’ve got to be sure these new institutions we’re creating stay closer to our values than those we mistrust now, and that they’re worthy of the trust of generations to come.
What these approaches have in common is this: instead of letting mistrust of the institutions we have leave us sidelined and ineffective, these approaches make us powerful. Because this is the middle path between the ballot box and the brick – it’s taking the dangerous and corrosive mistrust we now face and using it to build the institutions we deserve. This is the challenge of our generation, to build a better world than the one we inherited, one that’s fairer, more just, one that’s worthy of our trust.
Spring 2015 Cyberlaw Clinic students Jack Xu and Cecillia Xie joined the Clinic’s Managing Director Chris Bavitz on a trip to Seattle last month to participate in the WeRobot 2015 robotics law and policy conference at the University of Washington School of Law. Accompanied by Chelsea Barabas of the MIT Center for Civic Media, the Clinic’s representatives attended the conference to present their working draft paper entitled, “Legal and Ethical Issues in the Use of Telepresence Robots: Best Practices and Toolkit.” J. Nathan Matias, also of the Center for Civic Media, contributed to the paper but was unable to attend the event.
Chris, Jack, Cecillia, and Chelsea joined discussant Laurel Riek of Notre Dame for a panel discussion about the paper and, more broadly, about privacy and related concerns that arise in connection with the use of telepresence robots. The draft paper and panel discussion helped to lay the groundwork for development of a broader law and policy toolkit examining legal concerns that arise in connection with the use of telepresence robots. Professor Riek’s approach — grounding the project in the literature of AI and robotics research — helped to guide the discussion, which focused on the Clinic’s methodology and the scope and scale of its work.
Chelsea talked at length about the “People’s Bot” project, which was the genesis of the toolkit project. Cecillia, Jack, and Chris talked about the Clinic’s approach to legal issues and how documenting best practices can help to establish norms in fast-moving areas of law and technology. The discussion focused a lot on whether telepresence robotics presents unique problems and concerns or whether the legal issues raised by telepresence are the same as those raised by various other remote surveillance, recording, and other technologies.
Video of the panel discussion is now available:
Ryan Calo and his colleagues at UW School of Law put on a phenomenal conference, with outstanding presentations by Kate Darling, Anupam Chander, Karen Levy and Tim Hwang, and many others over the course of the two-day event.
Photos courtesy of Cecillia Xie and Jack Xu.
Kamkar told Ars his Master Lock exploit started with a well-known vulnerability that allows Master Lock combinations to be cracked in 100 or fewer tries. He then physically broke open a combination lock and noticed the resistance he observed was caused by two lock parts that touched in a way that revealed important clues about the combination. (He likened the Master Lock design to a side channel in cryptographic devices that can be exploited to obtain the secret key.) Kamkar then made a third observation that was instrumental to his Master Lock exploit: the first and third digit of the combination, when divided by four, always return the same remainder. By combining the insights from all three weaknesses he devised the attack laid out in the video.
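The mod-4 observation alone is a big deal for an attacker, even before Kamkar's other insights are applied. A quick back-of-the-envelope sketch (this is an illustration of the keyspace reduction, not Kamkar's full attack, and the 40-position dial is the standard Master Lock assumption):

```python
# Illustrative sketch: how the observed relationship between the first
# and third digits of a Master Lock combination (both leave the same
# remainder when divided by 4) shrinks the brute-force search space.

DIAL = 40  # positions on a standard Master Lock dial (0-39)

def third_digit_candidates(first_digit):
    """Third digits consistent with the constraint that the first and
    third digits have the same remainder mod 4."""
    r = first_digit % 4
    return [d for d in range(DIAL) if d % 4 == r]

# Without any information: 40 options for each of the three digits.
full_space = DIAL ** 3

# With the mod-4 constraint: for each first digit, only 10 of the 40
# possible third digits remain, while the second digit is unconstrained.
constrained = sum(len(third_digit_candidates(f)) * DIAL for f in range(DIAL))

print(full_space)   # 64000
print(constrained)  # 16000
```

That single side-channel observation cuts the search space by a factor of four; combined with the resistance measurements described above, Kamkar gets it down to a handful of tries.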
I was walking on the street in front of Wheelock College today when I saw an elderly man, nicely dressed, stopping as he walked along to pick up the plastic bags stuck in the shrubbery. “Thank you,” I said as I passed by him.
It was Mike Dukakis, whom you might remember from such projects as being the former governor of the Commonwealth of Massachusetts and the Democratic Presidential candidate who ran against George Bush the Senior.
He chatted me up: My name, what I do, etc. I complimented him on setting such an example. When I beat him to a ruptured styrofoam coffee cup, he offered to throw it out for me, but I instead relieved him of some of the trash he was carrying because Mike Dukakis. He continued on his way.
Talk about being civic-minded! What a decent, humble man.
From a Wired article:
But hidden within another document leaked by Snowden was a slide that provided a few hints about detecting Quantum Insert attacks, which prompted the Fox-IT researchers to test a method that ultimately proved to be successful. They set up a controlled environment and launched a number of Quantum Insert attacks against their own machines to analyze the packets and devise a detection method.
According to the Snowden document, the secret lies in analyzing the first content-carrying packets that come back to a browser in response to its GET request. One of the packets will contain content for the rogue page; the other will be content for the legitimate site sent from a legitimate server. Both packets, however, will have the same sequence number. That, it turns out, is a dead giveaway.
Here's why: When your browser sends a GET request to pull up a web page, it sends out a packet containing a variety of information, including the source and destination IP address of the browser as well as so-called sequence and acknowledge numbers, or ACK numbers. The responding server sends back a response in the form of a series of packets, each with the same ACK number as well as a sequential number so that the series of packets can be reconstructed by the browser as each packet arrives to render the web page.
But when the NSA or another attacker launches a Quantum Insert attack, the victim's machine receives duplicate TCP packets with the same sequence number but with a different payload. "The first TCP packet will be the 'inserted' one while the other is from the real server, but will be ignored by the [browser]," the researchers note in their blog post. "Of course it could also be the other way around; if the QI failed because it lost the race with the real server response."
Although it's possible that in some cases a browser will receive two packets with the same sequence number from a legitimate server, they will still contain the same general content; a Quantum Insert packet, however, will have content with significant differences.
It's important we develop defenses against these attacks, because everyone is using them.
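The detection method the excerpt describes, two TCP segments sharing a sequence number but carrying different payloads, can be sketched in a few lines. This is a simplified illustration of the principle, not Fox-IT's actual tool; it assumes packets for a single TCP stream have already been extracted as (sequence number, payload) pairs, e.g. from a pcap:

```python
def find_quantum_insert(packets):
    """Flag sequence numbers that appear twice with DIFFERENT payloads --
    the telltale sign of a Quantum Insert race. A normal retransmission
    repeats the same payload and is not flagged.

    `packets` is a list of (seq, payload_bytes) tuples for one TCP stream.
    """
    seen = {}       # seq number -> first payload observed at that seq
    suspects = []
    for seq, payload in packets:
        if seq in seen:
            if seen[seq] != payload:
                # Same sequence number, different content: one of the two
                # segments was injected by an off-path attacker.
                suspects.append(seq)
        else:
            seen[seq] = payload
    return suspects

# Two segments race for seq 1000; the browser keeps whichever arrives first.
stream = [(1000, b"<html>rogue"), (1000, b"<html>real"), (2460, b"next")]
print(find_quantum_insert(stream))  # -> [1000]
```

A real detector would also have to tolerate benign duplicates (retransmissions, load balancers serving slightly different content), which is why the researchers compare payload content rather than just counting duplicate sequence numbers.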
I attended a fantastic event last week, hosted by the Clinic’s good friend (and soon-to-be colleague) Susan Crawford at Columbia University‘s Tow Center For Digital Journalism. The event followed a series of workshops that Susan hosted at the Tow Center, with generous support from the Ford Foundation, aimed at answering the following question: “What could a university center do to advance policymaking and planning for fiber-optic networks that provide everyone in the United States with high-speed Internet access and (a) improve local governance and (b) support civic journalism?”
Last week’s event saw Susan weaving together disparate strains of thinking that had emerged during the preceding workshops about technology, cities, civic engagement, big data, trust, privacy, and the transformative power of fiber-based communications networks. She connected these strains eloquently in her remarks with an extended music metaphor that drew on her own experience as a musician and the product of a musical household.
Susan’s talk segued into an inspiring discussion with an all-star panel of civic tech leaders — Lev Gonick (Chief Executive, OneCommunity); Brett Goldstein (Fellow in Urban Science, University of Chicago and Board Member of Code for America); Elin Katz (Consumer Counsel, State of Connecticut); and Oliver Wise (Director, Office of Performance and Accountability, City of New Orleans). The event coincided with the release of a Report on the Responsive Cities Initiative.
The Tow Center has made video of Susan’s talk and the ensuing panel discussion available:
During the session, and in conversations with participants afterwards, it was clear we are in the midst of a time of palpable energy at the local community and government levels around harnessing the power of technology to transform cities and the ways they engage with their citizens.
Tim Berners-Lee, the Confrontation Clause, Rhinobird.tv, and more... in this week's Buzz.
More Berkman in the News
NPR has announced that it’s making 800,000 pieces of audio embeddable anywhere you want, including on this blog:
When you browse their site you’ll find an “embed” button to the right of a story’s “Play” button. Click ‘n’ paste. (And at the bottom of the widget that you embed you’ll see a tiny, gray copyright notice.)
Thank you, NPR.
Stingray is the code name for an IMSI-catcher, which is basically a fake cell phone tower sold by Harris Corporation to various law enforcement agencies. (It's actually just one of a series of devices with fish names -- Amberjack is another -- but it's the name used in the media.) What it basically does is trick nearby cell phones into connecting to it. Once that happens, the IMSI-catcher can collect identification and location information of the phones and, in some cases, eavesdrop on phone conversations, text messages, and web browsing.
The use of IMSI-catchers in the US used to be a massive police secret. The FBI is so scared of explaining this capability in public that the agency makes local police sign nondisclosure agreements before using the technique, and has instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about Stingray cell phone interception equipment to plaintiffs in civil rights litigation against them, federal marshals seized the documents. More recently, St. Louis police dropped a case rather than talk about the technology in court. And Baltimore police admitted using Stingray over 25,000 times.
The truth is that it's no longer a massive police secret. We now know a lot about IMSI-catchers. And the US government does not have a monopoly over the use of IMSI-catchers. I wrote in Data and Goliath:
From the Washington Post:
How rife? Turner and his colleagues assert that their specially outfitted smartphone, called the GSMK CryptoPhone, had detected signs of as many as 18 IMSI catchers in less than two days of driving through the region. A map of these locations, released Wednesday afternoon, looks like a primer on the geography of Washington power, with the surveillance devices reportedly near the White House, the Capitol, foreign embassies and the cluster of federal contractors near Dulles International Airport.
At the RSA Conference last week, Pwnie Express demonstrated their IMSI-catcher detector.
Building your own IMSI-catcher isn't hard or expensive. At Def Con in 2010, researcher Chris Paget demonstrated his homemade IMSI-catcher. The whole thing cost $1,500, which is cheap enough for both criminals and nosy hobbyists.
It's even cheaper and easier now. Anyone with a HackRF software-defined radio card can turn their laptop into an amateur IMSI-catcher. And this is why companies are building detectors into their security monitoring equipment.
Two points here. The first is that the FBI should stop treating Stingray like it's a big secret, so we can start talking about policy.
The second is that we should stop pretending that this capability is exclusive to law enforcement, and recognize that we're all at risk because of it. If we continue to allow our cellular networks to be vulnerable to IMSI-catchers, then we are all vulnerable to any foreign government, criminal, hacker, or hobbyist that builds one. If we instead engineer our cellular networks to be secure against this sort of attack, then we are safe against all those attackers.
We have one infrastructure. We can't choose a world where the US gets to spy and the Chinese don't. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone.
Like QUANTUM, we have the choice of building our cellular infrastructure for security or for surveillance. Let's choose security.
EDITED TO ADD (5/2): Here's an IMSI catcher for sale on alibaba.com. At this point, every dictator in the world is using this technology against their own citizens. They're used extensively in China to send SMS spam without paying the telcos any fees. On a Food Network show called Mystery Diners -- episode 108, "Cabin Fever" -- someone used an IMSI catcher to intercept a phone call between two restaurant employees.
The new model of the IMSI catcher from Harris Corporation is called Hailstorm. It has the ability to remotely inject malware into cell phones. Other Harris IMSI-catcher codenames are Kingfish, Gossamer, Triggerfish, Amberjack and Harpoon. The competitor is DRT, made by the Boeing subsidiary Digital Receiver Technology, Inc.
EDITED TO ADD (5/2): Here's an IMSI catcher called Piranha, sold by the Israeli company Rayzone Corp. It claims to work on GSM 2G, 3G, and 4G networks (plus CDMA, of course). The basic Stingray only works on GSM 2G networks, and intercepts phones on the more modern networks by forcing them to downgrade to the 2G protocols. We believe that the more modern IMSI catchers also work against 3G and 4G networks.
Google has a new Chrome extension called "Password Alert":
To help keep your account safe, today we're launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you've installed it, Password Alert will show you a warning if you type your Google password into a site that isn't a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.
Here's how it works for consumer accounts. Once you've installed and initialized Password Alert, Chrome will remember a "scrambled" version of your Google Account password. It only remembers this information for security purposes and doesn't share it with anyone. If you type your password into a site that isn't a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you're at risk of being phished so you can update your password and protect yourself.
It's a clever idea. Of course it's not perfect, and doesn't completely solve the problem. But it's an easy security improvement, and one that should be generalized to non-Google sites. (Although it's not uncommon for the security of many passwords to be tied to the security of the e-mail account.) It reminds me somewhat of cert pinning; in both cases, the browser uses independent information to verify what the network is telling it.
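The "scrambled version" approach described above can be sketched simply: store only a salted hash of the password at setup, then hash whatever the user types into untrusted pages and compare. This is an illustrative sketch of the idea, not Password Alert's actual implementation (the real extension works on keystroke buffers inside Chrome):

```python
import hashlib
import hmac
import os

SALT = os.urandom(16)  # random per-install salt

def scramble(password: str) -> bytes:
    # Store only a salted, slow hash -- never the password itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

stored = scramble("correct horse battery staple")

def should_alert(typed: str, is_trusted_signin_page: bool) -> bool:
    """Return True if the user typed their real password into a page
    that is NOT a trusted sign-in page -- i.e., a likely phish."""
    if is_trusted_signin_page:
        return False
    # Constant-time comparison of the hashes.
    return hmac.compare_digest(scramble(typed), stored)

# Typing the password on an unknown site triggers the alert...
print(should_alert("correct horse battery staple", False))  # -> True
# ...but the real sign-in page, or unrelated text, stays quiet.
print(should_alert("correct horse battery staple", True))   # -> False
print(should_alert("unrelated text", False))                # -> False
```

The salted hash is the key design choice: even if a malicious page (or extension) read the stored value, it couldn't cheaply recover the password from it.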
EDITED TO ADD: It's not even a day old, and there's an attack.
Expertise literature in mainstream cognitive psychology is rarely applied to criminal behaviour. Yet, if closely scrutinised, examples of the characteristics of expertise can be identified in many studies examining the cognitive processes of offenders, especially regarding residential burglary. We evaluated two new methodologies that might improve our understanding of cognitive processing in offenders through empirically observing offending behaviour and decision-making in a free-responding environment. We tested hypotheses regarding expertise in burglars in a small, exploratory study observing the behaviour of 'expert' offenders (ex-burglars) and novices (students) in a real and in a simulated environment. Both samples undertook a mock burglary in a real house and in a simulated house on a computer. Both environments elicited notably different behaviours between the experts and the novices with experts demonstrating superior skill. This was seen in: more time spent in high value areas; fewer and more valuable items stolen; and more systematic routes taken around the environments. The findings are encouraging and provide support for the development of these observational methods to examine offender cognitive processing and behaviour.
The lead researcher calls this "dysfunctional expertise," but I disagree. It's expertise.
Claire Nee, a researcher at the University of Portsmouth in the U.K., has been studying burglary and other crime for over 20 years. Nee says that the low clearance rate means that burglars often remain active, and some will even gain expertise in the crime. As with any job, practice results in skills. "By interviewing burglars over a number of years we've discovered that their thought processes become like experts in any field, that is they learn to automatically pick up cues in the environment that signify a successful burglary without even being aware of it. We call it 'dysfunctional expertise,'" explains Nee.
See also this paper.