Policy makers need to better understand the networks they regulate, but academic network research can be tough to do. Google, New America, and PlanetLab have created Measurement Lab, an open group of distributed servers meant to make research into Internet speeds, latency, jitter, and BitTorrent blocking easier.

"Measure twice, cut once," says the old carpenter's adage, and what's good for framing a house turns out to be important for tech policy as well. Without better knowledge of what's actually happening on the collection of networks we call the Internet, researchers and policy makers are operating from a position of ignorance. But getting that sort of network data has, to date, been difficult.
Google's Vint Cerf, one of the men behind TCP/IP, hopes to make it easier. On Wednesday, at a Washington, DC event, Cerf will announce a new project launched by Google, the New America Foundation (which Google supports), and PlanetLab (which Google also helps support), designed to make distributed network measurement easy to run and the resulting data easy to share. Ars spoke with Google and the New America Foundation about the effort.
The initiative is called Measurement Lab, or M-Lab. The idea began in 2008, when Cerf and other Googlers began talking with academic researchers about problems that the researchers faced.
One of the biggest was one of the most obvious: doing network research requires widely distributed servers and huge amounts of user data in order to be meaningful. Rolling out the server infrastructure that could support such tests was both expensive and difficult, and there was no central repository for sharing the massive data sets collected.
PlanetLab is an academic consortium that has worked to address these problems, and it runs an overlay network of servers across the US that can be used for network research. But PlanetLab doesn't guarantee that enough server bandwidth and processing power is actually available at any given time to run any given experiment—M-Lab does. Because PlanetLab has extensive experience with the management aspects of such measurement servers, its software will power the M-Lab servers.
To start with, three such servers (exclusive to M-Lab) will reside in Mountain View. By the end of 2009, 36 servers will exist at 12 locations in the US and Europe, and M-Lab is open to participation from any other group that wants to host a site. To do so, all that's required is three dedicated rack-mount servers, each with dual quad-core processors, and a fast Internet connection.
Meet the tools
M-Lab will initially work with three tools; two more are coming soon. All tools must allow inspection of the source code, and all data generated from their use will enter the public domain. M-Lab will also host this data and make it available to any researchers that want it.
Initial tools focus heavily on network openness. Already up on the site is Glasnost, built by the Max Planck Institute for Software Systems in Germany, which can test whether BitTorrent is being blocked by a user's ISP. Coming soon are DiffProbe (to "determine whether an ISP is giving some traffic a lower priority than other traffic") and NANO (to "determine whether an ISP is degrading the performance of a certain subset of users, applications, or destinations").
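The core idea behind tools like DiffProbe and NANO is to compare how different classes of traffic perform on the same path and flag suspicious gaps. Here is a minimal, hypothetical sketch of that comparison step—this is not the actual DiffProbe or NANO code, and the function name, threshold, and samples are invented for illustration; the real tools use far more careful statistics to rule out ordinary congestion:

```python
import statistics

def degradation_suspected(baseline_kbps, suspect_kbps, threshold=0.5):
    """Toy heuristic: flag possible traffic discrimination when the
    suspect flow's median throughput falls below `threshold` times the
    baseline flow's median. Both inputs are throughput samples (kbit/s)
    taken over the same path under similar conditions."""
    base = statistics.median(baseline_kbps)
    susp = statistics.median(suspect_kbps)
    return susp < threshold * base

# Example: a BitTorrent-like flow running at roughly a third the speed
# of an otherwise-identical HTTP-like flow on the same path.
http_samples = [950, 900, 980, 940, 960]   # kbit/s
bt_samples = [310, 280, 350, 300, 290]     # kbit/s
print(degradation_suspected(http_samples, bt_samples))  # prints True
```

In practice the hard part is everything this sketch omits: generating the two flows so they differ only in the attribute being tested (port, payload signature), and distinguishing deliberate deprioritization from normal network variation.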
M-Lab has already used the recent news that Cox Cable will begin delaying traffic it deems not "time sensitive" during periods of congestion to talk up M-Lab's usefulness; DiffProbe and NANO will report this sort of degradation, even where ISPs do not announce it.
According to Google, though, the M-Lab platform isn't simply a way to advance Google's goals like network neutrality. M-Lab's tools will also "help the public understand what they're getting when they sign up for broadband," and the tools can be used by ISPs to help diagnose user problems as they arise.
Tools can also be written by any research group—they need not overlap with Google's corporate goals to run on the M-Lab servers—and M-Lab has its own board, most of whose members come from academia.

Back to the future
Current research projects simply have a "massive shortcoming in data collection and analysis," Sascha Meinrath of New America explained to Ars.
According to Meinrath, detailed network data about speeds, latency, jitter, and more used to be in the public domain until NSFnet was privatized in the early 1990s and the Internet as we know it today began its expansion. Google's offer of hardware and bandwidth is the "catalyst" that researchers need to break out of the "marginalized space" they have been crammed into ever since.
To turn M-Lab into a truly open and useful resource, the group is seeking help from anyone who can offer it—interface designers, network researchers, tool developers, and companies willing to host more servers.
Interface designers, especially, are needed if M-Lab's work is to translate into something that is also useful for end users. As it stands now, the site's tools generally require the installation of Java applets and are not necessarily master classes in interface design and output presentation.