On Jun 27, 2014, at 9:58 PM, Dave Taht <[email protected]> wrote:

> One of the points in the wired article that kicked this thread off was
> this picture of what the internet is starting to look like:
> 
> http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
> 
> I don't want it to look like that.

Well, I think trying to describe the Internet in those terms is a lot like half 
a dozen blind men describing an elephant. The picture makes a point, and a good 
one. But it's also wildly inaccurate; it depends on which blind man you ask, 
and they'll all be right, from their own perspectives.

There is in fact a backbone. Once upon a time, it was run by a single company, 
BBN. Then it was more like five, and then ... and now it's 169. If the BGP 
report (http://seclists.org/nanog/2014/Jun/495) is to be believed, there are 
47136 ASNs in the system. Of those, 35929 don't show up as transit for anyone 
and are therefore presumably edge networks, potentially multihomed; 16325 of 
those announce only a single prefix. Of the 6101 ASNs that do show up as 
transit, 169 show up ONLY as transit. Yes, the core is 169 ASNs, and it's not a 
little dot off to the side. If you want to know where it is, do a traceroute 
(tracert on Windows).
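
(For the curious: the edge/transit split above is easy enough to recompute 
from a table dump. A rough sketch in gawk, assuming a hypothetical file 
aspaths.txt holding one AS path per line, such as you might pull out of a 
route-views RIB; treating "appears anywhere but the origin position" as the 
transit test is my approximation, not necessarily the report's exact 
definition:)

# count ASNs that only ever appear at the origin (rightmost) end of an AS path
gawk '{
  for (i = 1; i <= NF; i++) seen[$i] = 1     # every ASN that appears at all
  for (i = 1; i <  NF; i++) transit[$i] = 1  # ASNs seen carrying routes for someone else
} END {
  edge = 0
  for (a in seen) if (!(a in transit)) edge++
  printf "ASNs seen: %d  transit: %d  edge-only: %d\n", length(seen), length(transit), edge
}' aspaths.txt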

I’ll give you two, one through Cisco and one through my residential provider.

traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
 1  sjc-fred-881.cisco.com (10.19.64.113)  1.289 ms  12.000 ms  1.130 ms
 2  sjce-access-hub1-tun10.cisco.com (10.27.128.1)  47.661 ms  45.281 ms  42.995 ms
 3  ...
11  sjck-isp-gw1-ten1-1-0.cisco.com (128.107.239.217)  44.972 ms  45.094 ms  43.670 ms
12  tengige0-2-0-0.gw5.scl2.alter.net (152.179.99.153)  48.806 ms  49.338 ms  47.975 ms
13  0.xe-9-1-0.br1.sjc7.alter.net (152.63.51.101)  43.998 ms  45.595 ms  49.838 ms
14  206.111.6.121.ptr.us.xo.net (206.111.6.121)  52.110 ms  45.492 ms  47.373 ms
15  207.88.14.225.ptr.us.xo.net (207.88.14.225)  126.696 ms  124.374 ms  127.983 ms
16  te-2-0-0.rar3.washington-dc.us.xo.net (207.88.12.70)  127.639 ms  132.965 ms  131.415 ms
17  te-3-0-0.rar3.nyc-ny.us.xo.net (207.88.12.73)  129.747 ms  125.680 ms  123.907 ms
18  ae0d0.mcr1.cambridge-ma.us.xo.net (216.156.0.26)  125.009 ms  123.152 ms  126.992 ms
19  ip65-47-145-6.z145-47-65.customer.algx.net (65.47.145.6)  118.244 ms  118.024 ms  117.983 ms
20  * * *
21  209.59.211.175 (209.59.211.175)  119.378 ms *  122.057 ms
22  reed.com (67.223.249.82)  120.051 ms  120.146 ms  118.672 ms

traceroute to reed.com (67.223.249.82), 64 hops max, 52 byte packets
 1  10.0.2.1 (10.0.2.1)  1.728 ms  1.140 ms  1.289 ms
 2  10.6.44.1 (10.6.44.1)  122.289 ms  126.330 ms  14.782 ms
 3  ip68-4-12-20.oc.oc.cox.net (68.4.12.20)  13.208 ms  12.667 ms  8.941 ms
 4  ip68-4-11-96.oc.oc.cox.net (68.4.11.96)  17.025 ms  13.911 ms  13.835 ms
 5  langbprj01-ae1.rd.la.cox.net (68.1.1.13)  131.855 ms  14.677 ms  129.860 ms
 6  68.105.30.150 (68.105.30.150)  16.750 ms  31.627 ms  130.134 ms
 7  ae11.cr2.lax112.us.above.net (64.125.21.173)  40.754 ms  31.873 ms  130.246 ms
 8  ae3.cr2.iah1.us.above.net (64.125.21.85)  162.884 ms  77.157 ms  69.431 ms
 9  ae14.cr2.dca2.us.above.net (64.125.21.53)  97.115 ms  113.428 ms  80.068 ms
10  ae8.mpr4.bos2.us.above.net.29.125.64.in-addr.arpa (64.125.29.33)  109.957 ms  124.964 ms  122.447 ms
11  * 64.125.69.90.t01470-01.above.net (64.125.69.90)  86.163 ms  103.232 ms
12  250.252.148.207.static.yourhostingaccount.com (207.148.252.250)  111.068 ms  119.984 ms  114.022 ms
13  209.59.211.175 (209.59.211.175)  103.358 ms  87.412 ms  86.345 ms
14  reed.com (67.223.249.82)  87.276 ms  102.752 ms  86.800 ms

Cisco->AlterNet->XO->ALGX is one path; Cox->AboveNet->(presumably) ALGX is 
another. Both traverse the core.
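
(If you want to check which networks those hops actually belong to, Team Cymru 
runs an IP-to-ASN mapping service reachable over plain whois; for example, for 
the alter.net hop in the first trace:)

# map a hop address to the ASN that originates it (Team Cymru IP-to-ASN service)
whois -h whois.cymru.com " -v 152.63.51.101"

Doing that for each hop turns a traceroute into the AS-level path the packets 
actually took.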

Going to bufferbloat.net, I actually do skip the core on one path. Through 
Cisco, I go through CoreSite and Hurricane Electric and finally into ISC. ISC, 
it turns out, is a Cox customer; taking my residential path, since Cox serves 
us both, the traffic never goes upstream from Cox.

Yes, there are CDNs. I don't think you'd like the way video over IP, and 
especially adaptive-bitrate video (Netflix, YouTube, and the like), would work 
if they didn't exist. Akamai is probably the prototypical one, and when they 
deployed theirs it made the Internet quite a bit snappier, which helped the 
economics of Internet sales. Google and Facebook do operate large data 
centers, but a lot of their common content (or at least Google's) lives in 
CDNlets. Netflix uses several CDNs, or so I'm told; the best explanation I 
have found of their issues with Comcast and Level 3 is at 
http://www.youtube.com/watch?v=tR1sLLOYxnY (and it has imperfections). And 
yes, part of the story is business issues over CDNs. Netflix's data traverses 
the core once on its way to each CDN download server, and then goes from that 
server to its customers.

The IETF recently started using a CDN as well: Cloudflare.

One of the places I worry is Chrome's and Silk's SPDY proxies, which sit 
somewhere in Google's and Amazon's networks respectively. Chrome and Silk send 
https and SPDY traffic directly to the targeted service, but send http traffic 
to their proxies, which do their magic and return the result. One of the 
potential implications is that my http traffic is then served from the CDN 
node nearest the proxy instead of the one nearest me. That's not good for me. 
I just hope the CDNs I use accept https from me, because that will give me the 
best service (and, btw, encrypts my data).
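
(You can watch that mapping effect directly: ask two different resolvers for 
the same CDN-hosted name and compare the answers. A quick sketch with dig; the 
hostname is a stand-in, substitute any CDN-hosted site:)

# your local resolver: the CDN should hand back a node near you
dig +short www.cdn-hosted-example.com

# a distant public resolver (Google's 8.8.8.8): the CDN often hands back
# a node near the resolver instead of near you
dig +short www.cdn-hosted-example.com @8.8.8.8

An http proxy has the same effect, only stronger: the CDN sees the proxy's 
address rather than mine, and picks its node accordingly.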

Blind men and elephants, and they’re all right.


