On Wed, 21 Jan 2009, David Conrad wrote:

> Paul,
>
> I assumed there was an implied ", to a significant enough extent for it to matter" at the end of the sentence I quoted.
>
> What meets your requirement of enough "to matter"? Depending on the deployment scenario, a tunnel-router-based solution could cover thousands of hosts or more. Is that enough to matter?

It's going to be subjective to a point, but determining "to matter" would involve comparing the control overhead to the actual data-bearing traffic, rather than focusing solely on the difference in control traffic between end-host-based and intermediary-based solutions.

Having a tunnel-router do the liveness tests for 1000 hosts might save 999 instances of control traffic, but if those 999 instances amount to only 1-2% of the data traffic, should we care? (Is there a data-communications version of Amdahl's Law I can quote here?)
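
To make that concrete, here is a rough sketch of the arithmetic in Python. The per-probe size and per-host data volume are illustrative assumptions I have plugged in, not measured values:

  # Rough sketch: what does aggregating liveness probes at a
  # tunnel-router save, relative to the data traffic it covers?
  # All sizes below are illustrative assumptions, not measurements.

  hosts = 1000
  probe_bytes = 100             # assumed bytes per liveness probe exchange
  data_bytes_per_host = 24_000  # assumed data per host (cf. the 24 kB example)

  host_based_control = hosts * probe_bytes  # every host probes for itself
  router_based_control = 1 * probe_bytes    # one probe covers all 1000 hosts

  saved = host_based_control - router_based_control  # the "999 instances"
  total_data = hosts * data_bytes_per_host

  print(f"control saved, as a fraction of data: {saved / total_data:.2%}")
  # ~0.42% with these numbers: real, but arguably in the noise.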

By my back-of-the-envelope calculation, a Shim6/REAP solution has on the order of 1.2% control overhead relative to data traffic in the normal case, measured against a small transfer (24 kB, e.g. fetching the Google front page). Failure cases would presumably cost slightly more, but failure is not the common case. Solid quantitative studies would be really useful.
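
For reference, that ~1.2% figure falls out of arithmetic like the following. The message count and per-message size are assumptions chosen to illustrate the shape of the calculation, not protocol constants:

  # Back-of-the-envelope reproduction of the ~1.2% overhead figure
  # for a 24 kB transfer. Message sizes are assumptions.

  data_bytes = 24_000  # e.g. fetching the Google front page

  # Assume a handful of small Shim6/REAP control messages around
  # the flow (e.g. context setup plus a keepalive), ~72 bytes each
  # on the wire.
  control_msgs = 4
  control_msg_bytes = 72

  control_bytes = control_msgs * control_msg_bytes
  print(f"control overhead: {control_bytes / data_bytes:.1%}")
  # -> 1.2% with these assumptions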

So basically, where's the awful overhead of host-based signalling, relative to what matters: the actual data?

It seems it would be good to have data to back up such an assertion, rather than accepting it blindly and introducing a lot of extra complexity (and losing much fate-sharing) to optimise what appears to be a relatively trivial overhead.

regards,
--
Paul Jakma      [email protected]   [email protected]  Key ID: 64A2FF6A
Fortune:
  "I always avoid prophesying beforehand because it is much better
  to prophesy after the event has already taken place. " - Winston
  Churchill
_______________________________________________
rrg mailing list
[email protected]
http://www.irtf.org/mailman/listinfo/rrg
