Tuesday evening, hundreds of Web destinations, including the four most popular ones worldwide, showed up on the IPv6 Internet. IPv6 traffic went up a lot in relative terms, but still barely registered as a blip on the radar. The 0.05 percent of users who were expected to encounter delays or errors didn't—or decided to call their Moms, watch TV, or otherwise refrain from complaining. Twenty-four hours later things went back to the way they were before—for the most part. But not everything went according to plan: a small but significant number of websites did participate, but experienced some IPv6-related problems.
Bumps in the road
The most salient example is the story of a mobile operator in a small European country. I'm not sure how much of this is true, but what I heard was that the CEO learned about World IPv6 Day the day before and instructed his engineers to participate. When WIPv6D rolled around, the frantic efforts to enable IPv6 reachability for the company's website had failed completely, and the site was unreachable for the better part of the day.
When Facebook gained its IPv6 address about half an hour before midnight Zulu time, it remained unreachable for several minutes and then disappeared from the DNS again. About half an hour later, the address came back and everything worked as expected. In many cases the DNS updates propagated quickly, but depending on how much (and how standards-compliant) caching is present in various DNS servers, the operating system, and even applications, it could take a while for newly injected IPv6 addresses to be picked up by applications. It probably took even longer for the IPv6 addresses to disappear once they were removed at the end of the experiment.
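The behavior a standards-compliant cache should exhibit is simple: honor the record's time to live, then ask again. A minimal sketch (class and names are made up for illustration) shows why a cache that ignores TTLs delays both the appearance and the disappearance of new AAAA records:

```python
import time

class DnsCache:
    """Minimal TTL-respecting cache, sketching how a standards-compliant
    resolver or application is supposed to behave (hypothetical example)."""

    def __init__(self):
        self._entries = {}  # name -> (addresses, expiry timestamp)

    def put(self, name, addresses, ttl):
        # Cache the answer for no longer than the record's TTL.
        self._entries[name] = (addresses, time.monotonic() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        addresses, expires = entry
        if time.monotonic() >= expires:
            # TTL elapsed: a newly added (or removed) AAAA record
            # upstream only becomes visible from this point on.
            del self._entries[name]
            return None
        return addresses

cache = DnsCache()
cache.put("www.example.com", ["192.0.2.1"], ttl=300)
print(cache.get("www.example.com"))  # cached IPv4-only answer: ['192.0.2.1']
```

A cache that disregards the TTL keeps serving the stale IPv4-only answer even after an AAAA record is published upstream, which is exactly the propagation lag seen during the experiment.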
Internet backbone operator Level 3 had a lot of issues with its IPv6-enabled website: when loaded over IPv4, the site would display fine, but when loaded over IPv6, the exact same URL produced a 404 error. In hindsight, it makes sense that such problems would occur for a fraction of the participants. The page returned for an HTTP request can depend both on the address that the client connects to and on the "Host" header sent by the client, which contains the domain name from the requested URL. Web servers themselves, as well as load balancers, look at both variables, so you need to explicitly test that asking for the right host over a connection to the right address makes the server send the right page. Doing that before the IPv6 address is added to the DNS requires some trickery.
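The trickery amounts to connecting to the address directly while supplying the Host header yourself, the way curl's `--resolve` option does. A sketch using Python's standard library (the echo server and all names are invented for the demo):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def fetch_via_address(address, port, host_header, path="/"):
    """Connect to an explicit IP address but send an arbitrary Host
    header, so a site can be tested before its AAAA record is in the DNS."""
    conn = http.client.HTTPConnection(address, port, timeout=5)
    conn.request("GET", path, headers={"Host": host_header})
    resp = conn.getresponse()
    body = resp.read().decode()
    conn.close()
    return resp.status, body

# Demo server that, like a virtual-hosting setup, picks the page
# based on the Host header it receives (hypothetical stand-in).
class HostEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"site for {self.headers['Host']}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), HostEcho)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, body = fetch_via_address("127.0.0.1", server.server_port,
                                 "www.example.com")
print(status, body)  # 200 site for www.example.com
server.shutdown()
```

In real testing you would point `fetch_via_address` at the candidate IPv6 address and verify that each hostname the server is supposed to answer for returns the right page, before publishing the AAAA record.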
Not long after the start of the experiment, someone reported that www.nist.gov didn't work. NIST's IPv6 address did appear to be reachable, but when I asked for a webpage, nothing came back. This changed when I set the maximum packet size on my network connection to 1280 bytes, which is the "minimum maximum" specified for IPv6: every IPv6 system must be able to handle packets of at least 1280 bytes. With the new setting, my computer told the NIST Web server that it could only handle 1280-byte packets, and then everything worked.
So NIST was suffering from a path MTU discovery problem, probably the result of a firewall filtering the ICMPv6 "packet too big" messages that routers send to make IPv6 systems reduce their packet sizes where necessary. This is a problem that's hard to debug, because it only shows its ugly face when communicating over a path that has a packet size limitation; most of the Internet uses standard 1500-byte Ethernet (or larger) packets, so the problem usually stays hidden.
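The 1280-byte figure comes straight from the IPv6 specification, and some back-of-the-envelope arithmetic shows why forcing the MTU down sidesteps path MTU discovery entirely:

```python
IPV6_MIN_MTU = 1280   # every IPv6 link must carry packets this size
IPV6_HEADER = 40      # fixed IPv6 header
TCP_HEADER = 20       # TCP header without options

# With the interface MTU forced down to 1280, TCP advertises a
# maximum segment size that any conforming IPv6 path can carry,
# so no router ever needs to send a "packet too big" message.
mss = IPV6_MIN_MTU - IPV6_HEADER - TCP_HEADER
print(mss)  # 1220

# On a standard Ethernet MTU, segments are larger and do rely on
# path MTU discovery working end to end.
ethernet_mss = 1500 - IPV6_HEADER - TCP_HEADER
print(ethernet_mss)  # 1440
```

The cost of the workaround is slightly less payload per packet; the benefit is that filtered ICMPv6 messages can no longer cause a silent hang.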
This problem disappeared and reappeared at least once during the day. But even once the NIST site had loaded successfully, clicking on any link produced a 404 error. I've been told that a message was later added to the page explaining that the page was a copy set up to participate in World IPv6 Day, and that viewing the real NIST website required visiting over IPv4.
An IPv6 all-nighter in Amsterdam
But how does an average user do that? In fact, the group of researchers, students, and IPv6 enthusiasts from all walks of life assembled at the University of Amsterdam all-night lab had some trouble with it. There are browser extensions that claim to show whether a site is IPv4 or IPv6. Unfortunately, these extensions don't have access to the browser's internals, so they can't tell whether a page was actually loaded over IPv4 or IPv6. My personal solution was to connect my Mac to the dual-stack (IPv4 + IPv6) LAN and use Safari as my dual-stack browser. Firefox lets you turn off IPv6 within the application by going to about:config and setting "network.dns.disableIPv6" to "true," so that was my IPv4-only browser. I then connected my iPhone to the IPv6-only Wi-Fi network and turned off cellular data to avoid getting IPv4 over 3G, making Safari on the iPhone my IPv6-only browser.
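Since the extensions can't see inside the browser, the only reliable way to know which protocol won is to make the connection yourself and check the address family of the socket that actually connected. A crude sketch of what a dual-stack client does internally (demoed against a local listener so no network access is needed):

```python
import socket

def connected_family(host, port):
    """Try the addresses getaddrinfo returns, in order, and report
    the family of the first successful connection."""
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(5)
                s.connect(sockaddr)
                return "IPv6" if family == socket.AF_INET6 else "IPv4"
        except OSError:
            continue  # this address failed; fall through to the next one
    return None

# Demo: a local IPv4-only listener, so the answer must be IPv4.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
print(connected_family("127.0.0.1", listener.getsockname()[1]))  # IPv4
listener.close()
```

Pointed at a dual-stack hostname, this reveals which family the OS actually picked; real browsers add timeouts and parallel attempts on top of this basic loop.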
Once the experiment was in full swing, it was time to check up on IPv6 traffic levels. Being in Amsterdam, we of course first checked the Amsterdam Internet Exchange IPv6 statistics. Normally, about 0.3 percent of the AMS-IX traffic is IPv6, which is higher than at most other exchanges. But if anything, WIPv6D reduced AMS-IX IPv6 traffic a little. Strange. Not so at the Deutsche Commercial Internet eXchange (DE-CIX) in Frankfurt: there, IPv6 traffic was almost double the normal amount, reaching about 0.1 percent of the IPv4 traffic.
Hurricane Electric, which is very active in the IPv6 world and provides free IPv6-over-IPv4 tunneling services to the public, saw its IPv6 traffic go up by more than a factor of five. And Sandvine, which builds "traffic optimization" devices, observed that even though native (untunneled) IPv6 traffic remained stable, 6to4 traffic, which is automatically tunneled over the IPv4 Internet to the closest gateway to the IPv6 Internet, increased by a factor of ten. They also say that 6over4 traffic increased by a factor of five, but 6over4 is a very rarely used mechanism; they probably mean packets tunneled through manually created IPv6-in-IPv4 tunnels, such as the ones that Hurricane Electric offers. Akamai, one of the founding participants of the Day, saw its IPv6 requests quickly skyrocket to more than ten times the normal level.
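6to4 traffic is easy to recognize because the IPv6 prefix encodes the tunnel endpoint's IPv4 address: RFC 3056 reserves 2002::/16, with the next 32 bits holding the IPv4 address. A short sketch of the mapping:

```python
import ipaddress

def to_6to4_prefix(ipv4):
    """Derive the 6to4 /48 prefix that belongs to an IPv4 address:
    2002::/16 with the 32-bit IPv4 address embedded (RFC 3056)."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Place 0x2002 in the top 16 bits and the IPv4 address right after.
    prefix = ipaddress.IPv6Address((0x2002 << 112) | (v4 << 80))
    return f"{prefix}/48"

# 192.0.2.1 is a documentation address; 0xc0000201 is its hex form.
print(to_6to4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

This is also why measurement houses like Sandvine can classify 6to4 separately from native IPv6: any address under 2002::/16 is, by definition, tunneled.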
In a blog post, Arbor Networks explains that normally, the little IPv6 traffic that they see is encrypted file transfers, peer-to-peer traffic, and experimental protocols. But for the duration of the experiment, HTTP pretty much became the only game in town.
Facebook had a million visitors over IPv6, but no increase in the number of users seeking help from its help center. Hopefully one or more World IPv6 Day participants (such as Google) measured the number of users who experienced problems, because discovering those problems was the whole point of the exercise. So far, no thwarted users have come out of the woodwork.
But it seems safe to say that adding a few IPv6 addresses to the DNS is not enough to kill the Internet. Accordingly, a few participants, such as xbox.com, have decided to keep their IPv6 addresses in the DNS. Given the very high amounts of tunneled IPv6 traffic observed, an obvious next step is to do whatever's needed to get native, untunneled IPv6 off the ground for consumers. That's not going to be easy, but then again, staying with IPv4 after all the addresses have been given out isn't going to be easy, either.