Commenting on the Berkman Center's broadband study for the FCC

It has been three weeks since the FCC posted for public comment the Berkman Center’s study (PDF) of international experience with broadband transitions and policy. The FCC recently upgraded its comment facility, and we want to encourage everyone who cares about the future of broadband, and the National Broadband Plan, to take advantage of this updated system and to add their comments to the appropriate FCC dockets (GN Docket Nos. 09-47, 09-51 and 09-137; full public notice). The comment period for the Berkman Center study closes November 16.

In the meantime, comments have also emerged in the blogosphere, and we thought it would be appropriate to respond to those that have received the most attention. The study is long and dense, and we hope that P.I. Yochai Benkler's responses below, which highlight some key considerations of the study's methods, will be helpful for those who are reviewing it.




On "Next Generation Connectivity: A review of broadband Internet transitions and policy from around the world"

The US invented the Internet. We led the way in early commercialization and access. But we have been lagging in the past few years. Half of the industrialized countries have higher levels of penetration, however you count it; New York City is not even among the top 20 cities in speeds available; and when we look at the highest speed offerings available, and the best prices available for those speeds, we find that US consumers see higher prices and lower speeds. The FCC is putting together a broadband plan for the next decade, and as it does so, it is essential to find out why we are where we are. Our task was to cut through the many arguments about whether or not the numbers were showing what they seemed to be showing, and to try to explain why. The FCC opened up a formal comment proceeding, where we hope others will engage in productive and focused discussion of our findings and their implications.

We are eager to see comments submitted to the FCC, so we can help the Commission follow through on its commitment to evidence-based policy. For now, we want to clarify some parts of our study here and address several misconceptions that have been sown in the blogosphere.

To capture the gist of the questions we address, you have to ask yourself one question. If someone told you that it was possible, in an advanced market economy, with wages that are every bit as high or higher than wages in the US, to buy broadband speeds that are two to three times faster than we get here; bundled with unlimited international calling to 70 countries or more; with over 100 digital TV channels; and the ability to connect nomadically, using your Wi-Fi-enabled laptop, whenever you were within range of a Wi-Fi box of another subscriber to the same service, which is about a quarter of all broadband homes in your country, and you could get all this for about $32 to $35, would you want it? And if you knew that this was possible, would you try to understand what made it possible?

Why look at international comparisons at all?

One criticism is that each country is unique, and so looking at different countries doesn't teach you much. The reason that this is a mistake is that when you look only at your own results, you will always see progress in such a dynamic market. Of course US residents will see better broadband today than they saw 10 years ago. The rate of technological development makes any other outcome practically impossible. To be able to make any kind of reasonable judgment about whether we are doing as well as we could be doing, we need some comparison, some benchmark that will give us an idea, not of how well we are doing relative to the past, but of how well we are doing relative to others who are sufficiently similar to us that their experience might have been ours. You wouldn't hold on to stock in a company that's been growing nicely at 10% per year for the past five years if the rest of the market had been growing at 30% and your company had been losing market share all along. You can't re-run an experiment of how your own country would have done with different policies or practices. The next best thing is to look at good international comparators. That's what international benchmarks are for.

Is OECD data really that flawed?

The biggest complaint we’ve heard (in a blog post by George Ou, for example) is that we use OECD data as part of our analysis. The claim is that OECD data is fundamentally flawed. Two points:

1. We used our own independent analysis as well as OECD data and other sources where those seemed better.

2. We tested and validated the OECD data where we did use it, and found that it was substantially better than the critics would have you believe.

What you need to understand about this complaint is that the OECD is one of the most important formal international, inter-governmental organizations that collect standardized statistics about a wide range of economic activities. For broadband specifically, there simply is no other source of data with as many standardized measurements, from as many countries that are as closely comparable to us, and over as long a period, as the OECD. The ITU has data for more countries, but it is not as fine-grained and includes many more countries with greater differences in economic structure and wealth. Where it was better suited, we included the ITU data as well. In general, the OECD broadband data is the most relevant and competent international dataset on the issues we focus on. If you think that benchmarking is indispensable for understanding our own condition, the OECD data is the best available. If the numbers have been challenged over the years, as those of the OECD have been, the solution is to understand the advantages and limitations of the data, to learn from the data, and to improve the dataset.

That's what we did in our study, dedicating almost 50 pages and over 30 graphs in the report to analyzing data from the OECD and from other sources, as well as developing our own data sources and measurements, correlating and cleaning them up as we went along, precisely so that we could have good data, and could separate poor from high quality data. We dug deep: the granularity with which we looked at the data allowed us to identify precisely what errors Slovakia made when it reported fiber deployment based on investments and subscriber levels by Orange Slovensko in late 2008, and which companies were excluded from the ITU and OECD data for low-speed offerings, and why.

Penetration per 100 vs. households

The standard complaint about the most widely used OECD measure—penetration per 100 inhabitants—is that it penalizes countries with large households, like the US. (This showed up, for example, in Bret Swanson's blog post about our data, as well as in a criticism of our study in a Canadian commentary.) Two things need to be remembered: (a) the OECD reports penetration per household as well as penetration per 100; and (b) for purposes of the US standing, it makes no difference which measure you use, and for other countries, the correlation between the two is extremely high.

Figure 3.4. Broadband penetration per 100 inhabitants and by households.

Each measure has its advantages and disadvantages. Per household captures the most important quantity for broadband policy: how many households subscribe. But per 100 is more up-to-date and more certain (because it relies on carrier-side subscription data rather than household surveys), and it includes access by small and some medium-sized businesses. Business access is also an important policy objective, and it is excluded from household measurements. Korea and Japan are the only two countries whose performance is really understated by using the per-100 rather than the per-household measure. The United States moves up one spot, from 15th to 14th. In our overall assessment, we weight household penetration higher than per 100. But again, which measure we use does not really matter for purposes of assessing US performance.

Wired vs. wireless

Another argument that has long been leveled at the penetration numbers is that they don't account for wireless penetration. We tackled this problem not only because it was a criticism of OECD data, but also because our studies helped us understand that the next generation transition will be about much higher speeds, plus ubiquitous, seamless connectivity. Suzanne Blackwell, from Canada, faulted us for not accounting for the fact that some countries that use pre-paid cards count each SIM card as an account, and so show over 100% penetration. She says we do not account for this in our 3G data. She also argues that our data does not account for the fact that 3G service is available to 91% of the Canadian population. As Blackwell graciously notes, however, we do in fact explain this problem with the data. We don't throw our hands up and say "that means there's no data." What we did instead was look at market reports of 3G subscriptions, and these look different from 2G subscriptions. For example, in the OECD data that combines 2G and 3G subscriptions, which appears to be more sensitive to the multiple-accounts problem, Japan and South Korea appear in the bottom 7 countries, together with the US. But when you look only at 3G subscriptions, some of that noise is removed: Japan and Korea take their place at the top of the distribution, while countries like Greece or the Czech Republic, which clearly benefit on the cellular side from multiple prepaid accounts, drop from the top of the distribution in the OECD data that includes 2G to much lower in our study of market data for 3G only. Looking at 3G does not eliminate the difficulty posed by multiple SIM cards, but it helps; comparing the two figures suggests that 3G-only data does not simply replicate the 2G problem, and yet it does not much improve the picture for the United States and Canada.

Figure 3.10. 3G penetration.

Figure 3.12. Cellular mobile penetration: 2G & 3G in OECD Report

The second complaint, that we use subscription data as opposed to the proportion of the population covered, is weaker still. The basic metric we are trying to get at is how many people are actually using connectivity. Actual subscriptions are a much better measure of the price, quality, and availability tradeoff from the perspective of consumers. Just as in fixed broadband we don't count every house that could buy cable as covered by cable, or every house within a mile of a DSL switch as having DSL, we can't count every person who lives within an area that could in principle deliver 3G connectivity as having 3G service. We need to pick a metric that will most likely account for the availability of what consumers actually value, at prices they think are worth paying. The most direct measure of that is: are people actually buying the brew that's on offer? And if in one country many more people buy that brew than in others, and the country where people do buy it has lower prices for higher performance, then it is reasonable to conclude that the higher performance and lower prices are an important part of the reason.

In his blog post, George Ou chose instead to focus on our Wi-Fi hotspot measures. He worries that the measures are uncertain, and that there is no standard definition for Wi-Fi hotspots. We agree that this data is problematic, and in our study we explain in much detail that accounting for nomadic access is hard, non-standardized, and imperfect. In the absence of better data, what we have offered are three different sources, from different times, coupled with a narrative effort to explain the differences. We explain why, of all these, we prefer the measure we chose, but mostly emphasize how important it is, going forward, to develop better measures. We give hotspots very little weight in our overall measure for penetration because of these limitations. Most importantly from the perspective of Ou's concerns, for purposes of looking at US performance, our hotspot measure reflects relatively well on the US. If we excluded the hotspots data, the US would drop two spots, behind Austria and Spain, to 19th on penetration, although the overall multidimensional ranking would not be affected.
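
To make the role of these weights concrete, here is a minimal sketch, in Python, of how a weighted composite penetration index of this general kind works. The metric names, weights, and country figures below are hypothetical placeholders, not the study's actual inputs; the point is only to show why a component with very little weight, like hotspots, barely moves a ranking.

```python
# Hypothetical sketch of a weighted composite penetration index.
# All weights and figures are illustrative, not the study's actual values.

def composite_score(metrics: dict, weights: dict) -> float:
    """Weighted sum of normalized metrics (each metric scaled 0-1)."""
    return sum(weights[k] * metrics[k] for k in weights)

# Toy normalized metrics for three hypothetical countries.
countries = {
    "A": {"per_household": 0.80, "per_100": 0.75, "3g": 0.60, "hotspots": 0.90},
    "B": {"per_household": 0.78, "per_100": 0.77, "3g": 0.70, "hotspots": 0.40},
    "C": {"per_household": 0.70, "per_100": 0.72, "3g": 0.65, "hotspots": 0.95},
}

# Household penetration weighted highest; hotspots given very little weight.
weights = {"per_household": 0.5, "per_100": 0.3, "3g": 0.15, "hotspots": 0.05}

ranked = sorted(countries, key=lambda c: composite_score(countries[c], weights),
                reverse=True)
print(ranked)  # ranking with hotspots included

# Dropping hotspots entirely (renormalizing the remaining weights) only
# reorders countries that are nearly tied: here A and B swap, while the
# rest of the field stays put.
no_hotspot = {k: v / 0.95 for k, v in weights.items() if k != "hotspots"}
ranked2 = sorted(countries, key=lambda c: composite_score(countries[c], no_hotspot),
                 reverse=True)
print(ranked2)
```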

Speed and latency

Another complaint about our data is that we use advertised speeds, and that advertised speeds are inflated to different degrees in different countries. As with everything else in our report, we use multiple measures, testing them against each other, and trying to understand what they do and do not mean. So the first thing to realize is that when we compare average advertised speeds as reported by the OECD to average actual speeds as measured by Speedtest.net, we find an R² of 0.52. In other words, the two measures are significantly correlated, and, although advertised speeds are consistently higher than actual speeds, advertised speeds do in fact offer a reasonable prediction of the variation across countries in actual speeds.
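
For readers who want to see what that number means mechanically, here is a minimal sketch of the R² computation, assuming made-up per-country speed pairs; the real inputs would be the OECD advertised-speed averages and the Speedtest.net actuals.

```python
# A minimal sketch of an R-squared check between advertised and actual
# speeds. All numbers here are invented for illustration.
import numpy as np

advertised = np.array([25.0, 20.0, 15.0, 10.0, 8.0, 12.0])  # Mbps, hypothetical
actual = np.array([14.0, 12.0, 10.0, 6.0, 5.5, 8.5])        # Mbps, hypothetical

# Fit actual = a * advertised + b, then compute R-squared of the fit.
a, b = np.polyfit(advertised, actual, 1)
predicted = a * advertised + b
ss_res = ((actual - predicted) ** 2).sum()
ss_tot = ((actual - actual.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
print(f"R² = {r_squared:.2f}")  # the report finds 0.52 on the real data
```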

Look, for example, at Figure A in George Ou's blog post. There he lists Akamai measures of actual speed alongside advertised speeds as stated by the OECD. The US is at the bottom of the heap of his comparators, exhibiting the slowest speeds of the countries he looks at using his preferred source. Does it really matter that US speeds are only one-quarter the speeds in Korea, rather than one-eighth as appears when you look at advertising; or that Sweden's speeds are close to 50% higher than ours by his measure, rather than the 30% indicated by advertised rates? Akamai's "State of the Internet" report for Q2 2009 found that the US was 18th in the world in average speed (Fig. 13), 12th in speeds above 5Mbps (Fig. 15), but 24th in speeds above 2Mbps (Fig. 21). Other measures from the Akamai data are also broadly consistent with the rankings presented in our study: the fastest advertised speeds in the US are 6th-8th in the world, and the US is 6th-7th in the world in fiber deployment. The basic point remains unchanged, indeed is reinforced, by the Akamai data: by actual speed measures, just as by advertised speeds, the US is lagging behind comparable countries. Our speed data and Akamai's speed report from the same period, the fourth quarter of 2008, have an R² of 0.75! This very high correlation lends strong credence to both sets of numbers. The picture they paint is unpleasant to contemplate. But it is, nonetheless, the best actual speed data currently available.

Average download speeds measured by Akamai and Speedtest.net

Ou also protests our use of latency. Indeed, we ourselves are highly critical of the latency measures, and in our study we move away from using latency as part of the total measure, as the Oxford Saïd Business School study did, and report it separately and with great caution. We use a variety of approaches to clean up the Speedtest data and to cut it more finely than has been done before. We look at upload speed and download speed, as well as latency; we look at average, median, and 90th-percentile measures. We look at them independently and correlated with each other. We clean out various spurious results from large institutions that are not ISPs. The result: the US sits squarely in the middle of the pack.
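
As an illustration of the kind of cleaning and cuts described above, here is a small sketch over fabricated per-test records; the field layout, the institution filter, and all values are assumptions for demonstration, not the study's actual pipeline.

```python
# Sketch: filter out non-ISP institutions, then compute average, median,
# and 90th-percentile download speeds. All records are invented.
import numpy as np

# (download_mbps, upload_mbps, latency_ms, network) per test, hypothetical
tests = [
    (8.1, 1.0, 45, "ExampleISP"),
    (6.4, 0.8, 60, "ExampleISP"),
    (95.0, 90.0, 2, "BigUniversity"),  # spurious: a campus network, not an ISP
    (10.2, 1.2, 38, "OtherISP"),
    (4.9, 0.5, 85, "OtherISP"),
]

NON_ISPS = {"BigUniversity"}  # hypothetical exclusion list
clean = [t for t in tests if t[3] not in NON_ISPS]

down = np.array([t[0] for t in clean])
print("average:", down.mean())
print("median:", np.median(down))
print("90th percentile:", np.percentile(down, 90))
```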

Figure 3.19d,e,a,b. Speedtest.net data

One of the most surprising and troubling findings emerged when we began to use the finer-grained detail available in the Speedtest data to look at cities. We looked at the largest city and the capital city in each of the OECD countries. New York City is not among the top 20 cities in the OECD. Clearly, the explanation isn't urban density. It isn't wealth. It isn't, as one Canadian critic put it, that small rich countries rank high in broadband. If Paris and Lyon, Amsterdam and Rotterdam, Stockholm and Gothenburg, Helsinki and Espoo, or Copenhagen and Aarhus, not to speak of Seoul and Busan or Tokyo and Yokohama, all have actual, measured download speeds faster than New York or Washington, D.C., this is not because the latter lack density, wealth, or the political clout to attract investment. Surely the first explanation we should examine is the relative efficiency of broadband markets in the different countries.

Table 3.3. Top 20 cities in OECD countries by actual speed measurements, Q4 2008

As for complaints about our use of latency: as I said, we put very little emphasis on it. It gets a 10% weighting in our speed rankings and 3% in the overall ranking; it would have no effect on the US ranking, and we explain in detail why and how we use it this way.

Prices: complaints and actual findings

George Ou's final complaint is that our price analysis is flawed for three reasons. First, he says that the pricing is “arbitrary and demonstrably inaccurate”; second, that it doesn't account for bit-caps; and third, that it doesn't account for costs “hidden” in rent or condo fees.

First, we spent a tremendous amount of time and effort both learning from and refining the OECD pricing data, and we created an entirely new pricing study based on a completely independent market data source, the GlobalComms database. We constructed a dataset of nearly 1,000 discrete formal offerings drawing on these two independent datasets, the OECD's and GlobalComms', and then analyzed them both separately and together. I am unconvinced that the link Ou offers to a single price list from NTT East, coupled with a statement of general lack of confidence in our findings, is a serious critique of our methods or findings. The other problem Ou seems to have is that he cannot replicate our results when he uses exchange rates. This is not a problem, because the standard way to do international price comparisons is not to use market exchange rates, but to use a conversion factor that accounts for differences in what things cost in different countries more generally: purchasing power parity (PPP). PPP is what we use in our study, and it is what the OECD uses in its.
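
To see why the two conversions give different answers, here is a minimal sketch with invented numbers; real PPP factors come from OECD tables, and the offer below is hypothetical.

```python
# Sketch: the same local price converted at a market exchange rate versus
# at purchasing power parity. All numbers are illustrative placeholders.

price_local = 4000.0   # hypothetical monthly broadband price, in yen
exchange_rate = 90.0   # yen per USD at market rates (illustrative)
ppp_factor = 115.0     # yen per USD in purchasing-power terms (illustrative)

print(f"exchange-rate USD: {price_local / exchange_rate:.2f}")  # ~44.44
print(f"PPP USD:           {price_local / ppp_factor:.2f}")     # ~34.78
# The PPP figure reflects what the price "feels like" to a local consumer,
# given what goods and services generally cost there, which is why it is
# the standard basis for cross-country price comparisons.
```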

Second, Ou complains that we did not take account of data caps. This criticism suggests a simple failure to read our methods. We explicitly take account of data caps in our pricing study, and we explain how and why in the pricing study itself. Caps are not in fact as widespread as Ou claims, though they do appear in countries whose price performance is lower. A low cap really describes a different product, or possibly a higher actual price once overage fees are accounted for. We corrected for this by excluding offers with caps below 2 GB per month, because that is what Comcast publicly stated was the median monthly usage of its customers. To clarify: we excluded from our study prices for offers that capped their users below what the median Comcast user in the US actually uses.
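
Expressed as code, the exclusion rule is simple; here is a sketch over invented offers, with the 2 GB threshold taken from the text and everything else hypothetical.

```python
# Sketch: drop capped offers below the median-usage threshold. Offers invented.
CAP_THRESHOLD_GB = 2  # per month, per the Comcast median-usage figure

offers = [
    {"name": "Offer A", "usd_ppp": 30, "cap_gb": None},  # uncapped
    {"name": "Offer B", "usd_ppp": 20, "cap_gb": 1},     # capped below threshold
    {"name": "Offer C", "usd_ppp": 25, "cap_gb": 50},
]

kept = [o for o in offers
        if o["cap_gb"] is None or o["cap_gb"] >= CAP_THRESHOLD_GB]
print([o["name"] for o in kept])  # ['Offer A', 'Offer C']
```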

Third, Ou tries to make a point about tenants in San Francisco getting 100Mbps for $30 per month, something unavailable to large suburban houses. I would be curious to see who in the United States can buy 100Mbps advertised speed, or 30Mbps effective throughput, in San Francisco or anywhere else, for $30, and to see how that number is arrived at when the building pays for part of the service and rolls it into rent or condo fees. This argument simply conflates the cost of provisioning in dense urban settings with the prices consumers see. We certainly have dense urban areas in the United States, as do other countries in our dataset. But as the speed analysis for cities shows, it is not density we lack; it is a competitive market structure.

As I said, we did a lot of work to clean up the data. We found, for example, that the OECD data did not cover some of the lower-speed, lower-priced offers available in the US, so our study shows a more attractive picture of low-end price/performance packages in the US than the OECD does. At the same time, we show that the ITU's rosy portrayal of US offers at the low end is overly attractive, because it ignores offers in other countries from precisely the kinds of innovative entrants we cover in the access portion of our report.

We reject the widely used price-per-megabit-per-second metric, because we show that it double counts speed. Instead, we show rankings separately for low, medium, high, and very high speed tiers. We also note that pricing is a slippery target to observe, so we provide joint analyses of both our own data, with over 500 data points, and the OECD data, with over 600 data points (about 150 overlap). The results are consistent: the US does OK on offers at low-end speeds, turns middling to weak as we move up the speed tiers, and ultimately does quite poorly at the very high end.
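
A toy calculation shows the double-counting problem; the two offers below are invented, and the point generalizes.

```python
# Sketch: why price per Mbps double counts speed. Offers are invented.
offers = {
    "Country X": {"mbps": 100, "usd_ppp": 60},
    "Country Y": {"mbps": 10, "usd_ppp": 20},
}

for country, o in offers.items():
    print(country, round(o["usd_ppp"] / o["mbps"], 2), "USD per Mbps")
# Country X: 0.6, Country Y: 2.0. X looks far "cheaper" per Mbps even though
# its absolute price is three times higher, purely because it is faster. If
# speed then also enters a composite ranking on its own, the same 100 Mbps is
# rewarded twice. Comparing prices within a speed tier avoids this, since
# offers in a tier compete on price alone.
```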

Figure 3.26. OECD versus GlobalComms pricing in low speed tier

Figure 3.27. OECD versus GlobalComms pricing in medium speed tier

Figure 3.28. OECD versus GlobalComms pricing in high speed tier

Figure 3.29. OECD versus GlobalComms pricing in very high speed tier

Lastly, we don't use only average prices. We did something much more detailed, and looked at actual individual offers from the 59 companies throughout the OECD that offer the highest speeds. At the end of the day, our benchmarking is about something simple: what quality of service do consumers get, and at what prices? What we found was surprising and troubling.

Pricing and access regulation

There are two basic competing hypotheses in the broadband regulation space. One, which is dominant in the United States, is that the best market structure is one where you have two independent competitors, each owning its own infrastructure, each able to invest in its infrastructure secure in the knowledge that it can reap all the benefits of investment, and each disciplined by the other's competition. This is called “inter-modal” competition, and it means that competition is between owners of different “modes” of delivering broadband: telephone networks, cable networks, and power lines. On this theory, as Bret Swanson emphasizes, we in the United States have been blessed with uniquely high cable penetration, which makes it possible for us to rely on this model, while other countries are relegated to using shortcuts and regulatory fixes to a poor baseline market structure where all they have is DSL.

The competing hypothesis is that basic local infrastructure (the trenches you have to dig to pull ducts and wires to homes, the holes you have to make in the walls to get cables in, and so on) is so expensive and so hard to build that basing the competitive market only on those companies that can afford to duplicate each other's facilities completely will necessarily result in a shallow market of one or at most two providers. This hypothesis claims that two is not enough. To get more than two, you have to regulate some of the most basic inputs, the aspects of the service that are by far the most expensive, accounting for over 85% of total cost, but are also the least innovative and slowest moving: the trenches, the ducts, the physical copper, cable, or fiber. These regulations are called "open access" regulations. Once these basic inputs are available to all potential competitors at regulated rates, a robust, innovative market in electronics and services can develop on top of these facilities, among firms that can afford to invest in and deploy their own innovative electronics and services. What drives investment in the faster moving aspects of the service, the electronics and services, is the competition on top of the shared platform, rather than the competition between the platform owners.

If the inter-modal story is correct, then after years of the two policy approaches being in place, countries that rely mostly on platform, or inter-modal, competition should have better networks with faster speeds, and they should have competitive markets where the prices and speeds that cable companies and telephone companies offer are closely matched to each other. By contrast, open access-based markets should have slower speeds, disinvestment because of the fear of regulation, and no real competition on any platform other than open access-based facilities, because why would anyone invest in a competing network if they can just buy elements from the incumbent? If the open access hypothesis is correct, then we should see less investment, higher prices, and less competitive responsiveness in markets with only two facilities-based competitors, and higher speeds at lower prices in markets where open access is adopted effectively.

So we did the actual study. We looked at the most forward-looking offers in the OECD on a company-by-company basis, for companies where we had an actual history of their business strategy for entry and investment, so we could categorize who was an incumbent, a cable carrier, or a power company; who had relied on open access; and so on.

Figure 4.2. Best price for highest speed offering

The results, in Figure 4.2 of the report, are consistent with the open access hypothesis, and inconsistent with the inter-modal competition hypothesis. If you look at the figure, the top right-hand corner is where you want to be: high speeds for low prices. What we see is that telco incumbents, cable companies, power companies, and open access entrants in five different countries are all tightly clustered in that high-performing corner. These are countries with robust open access policies. In the bottom left-hand corner are the companies that offer low speeds for high prices. These companies are not tightly clustered, and they don't seem to be responding to any particular competitor, but are rather setting prices with much less discipline pushing them to a "market price." Almost all the companies in that bottom corner are in the two major "inter-modal competition" markets, the United States and Canada. In the middle are countries with later, or less effective, implementations of open access.

We spend many pages detailing what happened in these different countries, at a level of detail and scope not attempted previously. The details are fascinating, but the basic storyline repeats. Where a regulator rolled up its sleeves and really implemented open access, new and innovative entrants used the opportunity to invest in new service models and new electronics, and introduced bundled voice over IP, IPTV, or nomadic access. Whether it was Softbank in Japan, Iliad and Neuf Telecom (now SFR) in France, Tele2 and Glocalnet (now Telenor) in Sweden, or one of the many other firms we studied, the new entrants served as catalysts for demand, for service models, and for competition throughout the market.

In 2002 France still had a very weak broadband market. One would not have predicted then that France Telecom's offer, the most expensive among major offers in the French market, would within six years deliver speeds twice as fast as Verizon's for a little less than one-third the price, or speeds several times as fast as AT&T's at roughly the same price. Similarly, KDDI's investments in power company lines in Japan, complemented with unbundled DSL, should surprise those who predicted that open access policies would cause disinvestment from competing platforms. And the fact that NTT offers price/performance ratios that put France Telecom's to shame, but are identical to the offer of Japan's innovative entrant, Softbank, also strongly suggests the presence of a highly competitive market.

Yochai Benkler
October 2009
