Robin,

On Jan 5, 2010, at 21:39 MST, Robin Whittle wrote:
> 1 - Core-edge elimination schemes are impossible to introduce widely
>    enough on a voluntary basis to solve the routing scaling problem.

I take issue with your assertion that it's "impossible" to upgrade 
hosts/servers, particularly on a voluntary basis, to support (say) an ID-Loc 
split to solve the routing scaling problem.
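To make the ID-Loc split concrete for readers less familiar with it, here is a toy sketch of the core idea. This is purely illustrative and not the data model of any specific proposal (ILNP, HIP, etc.); the names `HostIdentity` and `rehome` are invented for this example:

```python
# Toy sketch of an identifier/locator split: transport sessions bind to a
# stable identifier, while topologically-meaningful locators can change
# underneath without disturbing them.  All names here are invented.

class HostIdentity:
    """A stable identifier that transport/application layers bind to."""
    def __init__(self, ident):
        self.ident = ident      # stable, e.g. an ILNP-style node identifier
        self.locators = []      # topologically-meaningful addresses

    def rehome(self, new_locators):
        # Host moves or renumbers: only the locators change, so any
        # session keyed on `ident` survives untouched.
        self.locators = list(new_locators)


host = HostIdentity("nid-2001-db8-beef")
host.rehome(["198.51.100.7"])
session_key = host.ident            # transport binds here, not to an address
host.rehome(["203.0.113.42"])       # provider change or movement
assert session_key == host.ident    # session identity is unaffected
```

The routing-scaling relevance: because multihoming and mobility are handled by swapping locators at the edge host, they need not be expressed as extra prefixes in the global routing table.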

First, I think this completely ignores the emerging trend of mobile devices -- 
not [just] cell phones, but smartphones, netbooks and (possibly) "tablets".  
When I mention these devices I'm not referring solely to their 
portability/mobility, but also (perhaps more importantly) to their 
**disposability**.  Due to breakage, wear & tear, CPU slowness, lack of 
memory, missing features, etc., people often throw these devices out after a 
couple of years and buy new ones (usually/generally subsidized by the 
carriers).  Even aside from disposability, the more recent generations of 
smartphones (e.g., iPhone, Android phones) see major releases around once per 
year, mostly for new features, and those features entice end-users to upgrade 
on their own -- unlike the traditional cell phones of just a few years ago, 
which were never upgraded, largely because they only had one or two 
applications/uses: voice & TXT messaging.

Assuming one believes this is true, how many of these types of devices are out 
there?  Unfortunately, from what I can discern, the IETF stopped measuring the 
size and, more importantly, the composition of devices on the Internet back in 
the early '90s.  (I would welcome being corrected, of course.)  Although I've 
spent far too much time looking for *any* [good] data at all, the "best" data 
I came across appears to come from Internet advertising firms[1].  
Specifically, look here:
http://www.phonecount.com/pc/count.jsp
Take a look at the "sources" links on that Web page; I believe they mostly 
come from here:
http://www.internetworldstats.com/stats.htm
Quite frankly, if these numbers are true (and I'd like to believe they're at 
least directionally correct), they are quite shocking.  Of course, if there's 
better or more reliable data that I haven't seen, please do share.  
Regardless, the larger trends I gather from that data are: a) 
mobile/disposable devices are, or will be, growing at an unprecedented rate; 
and b) we still have a lot of Internet growth ahead of us, given that parts of 
the [developing] world have such low Internet penetration.  Ultimately, 
because these are disposable devices, 'natural' breakage and wear & tear 
should ensure fairly healthy turnover of the devices and, more importantly, of 
the O/S'es that drive them.

Next, let's take a look at the release cycles of major O/S'es (note: this list 
is a completely arbitrary pick on my part, but hopefully it illustrates the 
point):
1)  http://en.wikipedia.org/wiki/Microsoft_windows#Timeline_of_releases
At a quick glance, from Windows 95 onward, it appears Microsoft averaged 
approximately 2 - 3 years to release a new O/S, modulo XP to Vista which took 
about double that time.
2)  http://en.wikipedia.org/wiki/Mac_OS_X#Versions
Mac OS X, in recent years, appears to much more consistently average around 2 
years for a new major O/S release.
3)  http://en.wikipedia.org/wiki/Fedora_linux#Version_history
4)  http://en.wikipedia.org/wiki/RHEL#Version_history
It seems as if Fedora is on an ~6 month release schedule whereas RHEL, (more 
Enterprise focused), is averaging around 7 - 9 months.  Although I didn't 
include other mainstream Linux releases, from what I understand of them they're 
typically releasing 1 - 2 times per year.
... The larger point in mentioning the above release cycles is my belief that 
what gets into these (and other) O/S releases depends on the _size_ of the 
changes being made.  IOW, if the changes are viewed as more incremental in 
nature (perhaps similar to ILNP and Name-Based Sockets, to mention two 
host-based proposals I'm more familiar with), then it will be significantly 
easier for O/S vendors to code the changes, test them, release the code and 
start transitioning their developer base onto them.  Related to that last 
point, take a look at articles on Mac OS X Snow Leopard and how Apple is 
transitioning its developer base toward 64-bit API's -- it's tricky, but they 
appear to be doing it quite gracefully.
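As a sketch of why such host changes can be incremental, consider how an upgraded "connect by name" call could wrap the existing BSD sockets API. This is hypothetical: the function `connect_by_name()` and the `stack_supports_id_loc` probe are invented for illustration, and no shipping O/S exposes this exact API:

```python
# Hypothetical shim: a name-based connect that degrades to the classic
# resolve-then-connect pattern, so un-upgraded hosts and un-upgraded peers
# keep working -- the incremental-deployment property argued for above.
# connect_by_name() and stack_supports_id_loc are invented names.

import socket

def connect_by_name(name, port, stack_supports_id_loc=False):
    if stack_supports_id_loc:
        # On an upgraded host, the stack would bind the session to a stable
        # identifier and manage locators underneath (ILNP-style).  Left
        # unimplemented here because it is purely illustrative.
        raise NotImplementedError("upgraded-stack path is illustrative only")
    # Legacy path: ordinary DNS resolution plus TCP connect, exactly what
    # applications do today, so nothing breaks for old hosts.
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            name, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(sockaddr)
            return s
        except OSError:
            s.close()
    raise OSError("could not connect to %s:%s" % (name, port))
```

The design point is that an application coded against names rather than raw addresses doesn't have to change at all when the stack underneath it gains identifier/locator awareness -- which is exactly the kind of small, releasable change that fits the O/S cycles listed above.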

The point of mentioning all of the above is that you appear to be focusing 
mostly, if not solely, on the rearview mirror when thinking about a future 
Internet architecture -- specifically, designing a solution around traditional 
fixed/wired devices that use traditional multihoming techniques (while 
potentially placing significant amounts of complexity in the network to do 
so).  While we certainly can't forget about the embedded base that's out there 
today, it seems false to believe that host O/S'es are completely static and 
never get upgraded.  Finally, I would assert that we are potentially at a 
crossroads where the composition of the Internet may be fundamentally 
changing, as we speak, away from predominantly wired hosts to mobile, 
disposable devices (if it hasn't already).  It would be very unfortunate if we 
didn't provide a well-designed, host-based ID-Loc solution out-of-the-gate 
(perhaps/likely not as the only solution, but certainly as a key part of the 
overall recommended solution) to get us on a better trajectory for scaling, 
not to mention putting more intelligence in the hosts to let them 
decide/control their own applications' fate while at the same time keeping the 
network dumb, inexpensive and [relatively] easy to run.

My $0.02,

-shane

[1] I would assume major content houses like Google, Yahoo, etc. probably have 
some great data on browser types and O/S'es, over time, which would be 
wonderful to see and help guide us; however, I'm not aware of anyone of that 
size making said data publicly available.  I've looked at publicly released 
reports/presos from Akamai, Arbor Networks, Renesys & other vendors who would 
seem to have interesting data in this regard; however, from what I can tell, 
they don't look at Internet device composition either, unfortunately.  Paging 
kc @ CAIDA.  :-)
_______________________________________________
rrg mailing list
[email protected]
http://www.irtf.org/mailman/listinfo/rrg