{Apologies to the list for the slight derail I started, but it's an
interesting topic, and also one of my hot buttons... :-}
On the actual subject of the document, I too agree it would be good to have a
document which looks in detail at aggregation in destination-vector routing
architectures; there are indeed some interesting 'issues' when you try to
automatically aggregate more-specifics.
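To make one of those 'issues' concrete, here is a minimal sketch (Python stdlib only; the prefixes and peer names are illustrative, not taken from any real table) of how naive aggregation can silently discard a traffic-engineering more-specific:

```python
import ipaddress

# Routes in the table: a covering prefix learned via peer A, plus a
# more-specific injected purely for traffic engineering via peer B.
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "peer-A",
    ipaddress.ip_network("192.0.2.0/25"): "peer-B",  # TE more-specific
}

# Naive aggregation collapses the more-specific into its covering prefix.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('192.0.2.0/24')]

def next_hop(addr, table):
    """Longest-prefix match over a dict of {network: next_hop}."""
    matches = [n for n in table if addr in n]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

addr = ipaddress.ip_address("192.0.2.1")
print(next_hop(addr, routes))  # peer-B: the TE intent is honoured

# After aggregation, only the covering prefix (and its next hop) remain,
# so traffic for the /25 quietly moves from peer B to peer A.
agg_table = {n: "peer-A" for n in aggregated}
print(next_hop(addr, agg_table))  # peer-A: the TE intent is gone
```

The point of the sketch is only that aggregation is not semantics-preserving when more-specifics carry policy: the forwarding result changes even though the aggregate still "covers" all the destinations.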
> From: Warren Kumari <[email protected]>
>> Doing traffic engineering by injecting more-specifics into the global
>> destination-vector routing is a top pick on my list of 'optimal
>> illustration for hammer-nail syndrome'
> there are a large number of folk that are currently doing this ..
> Removing their ability to use this "feature" because it violates ..
> views of elegance seems, um, impolite at best.
First, 'elegance' isn't abstract - doing things the ugly way has real costs
and disadvantages, and it's those I really care about: 'elegant' is
effectively just shorthand for all the upsides.
Second, I never advocated taking that mechanism away _unless there was
something else available_; of course we're running an operational system that
a large part of the world's population has come to rely on. But 'we do it
this way because we've always done it this way' is no way to run a railroad.
> From: Nick Hilliard <[email protected]>
> No-one's going to argue that multihoming by multiple prefix
> announcement isn't a pile of poo. It's just that all the
> alternatives are even worse.
How is using a {location->identity} binding layer worse? (And please read
the next bit before you answer...)
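For readers who haven't met the idea: a toy sketch of what a binding layer buys you for multihoming. The names and the dict-as-mapping-system are illustrative only - this is not the API of any specific protocol (LISP, ILNP, etc.):

```python
# An endpoint keeps a stable identifier; a mapping layer binds it to one
# or more locators (provider-assigned addresses). Multihoming and
# failover become a mapping update, not an extra more-specific prefix
# pushed into the global routing table.

mapping = {
    "host-ID-1": ["198.51.100.10", "203.0.113.10"],  # two upstreams
}

def resolve(identity):
    """Return the current preferred locator for an identity."""
    locs = mapping.get(identity, [])
    return locs[0] if locs else None

print(resolve("host-ID-1"))  # 198.51.100.10

# Upstream failure: drop the dead locator from the mapping; the DFZ
# routing tables are untouched.
mapping["host-ID-1"].remove("198.51.100.10")
print(resolve("host-ID-1"))  # 203.0.113.10
```

The trade being argued about is exactly this: the mapping layer is extra machinery (with its own failure modes), but the churn it absorbs never reaches the global table.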
> All the alternatives are significantly more complicated in a variety of
> different ways, and all open up many more and many more complicated
> failure modes.
So, because the newer Unix/Linux file systems are "more complicated in a
variety of different ways", with "more and many more complicated failure
modes" we should go back to the v6 one?
Of course not. The new systems have lots of good properties (including better
scaling); and although they are indeed considerably more complex (with an
attendant increase in the number of failure modes), they also include
mechanisms specifically added to make them more robust - albeit at the cost
of still more complexity.
And don't forget that your 'simple' alternative has costs and failure modes
of its own - all those extra routes in the DFZ create problems when tables
overflow, mean longer convergence times after major topology changes, etc,
etc, etc.
Look, simple systems are wonderful. I love (and prefer) simple systems.
However, they often have issues (e.g. don't scale well), and you have to use
something more complex. Deal.
Noel
_______________________________________________
GROW mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/grow