Tomas Daniska <mailto:[EMAIL PROTECTED]> wrote on Tuesday, November 20,
2007 10:11 AM:

> Oli,
> 
>>> I am aware that symmetric load splitting to transparent stateful
>>> devices (such as IPS, SCE etc...) is possible with EtherChanneling
>>> (with some careful balancing algorithm design), and is available on
>>> c6k5 for some time.
>> 
>> Right, but I would not call this "symmetric". You always need a
>> sufficiently large number of flows to achieve symmetric load.
> 
> the symmetricity I talk about is in *topological* means - i.e., you
> get a flow going through the same member of the EtherChannel in both
> directions, which is crucial for deploying stateful boxes - kinda
> FWLB. Nothing to do with granularity of the flows (that's why I have
> called it load splitting instead of balancing). Sorry for being unclear.

Ah, I see. See below.

>>> But - c6k5 do not support cross-chassis EtherChannels with current
>>> supervisors; so if topological redundancy is required, L2-based LB
>>> is not the way to go. I've noticed someone somewhere saying this is
>>> also possible with CEF at L3, but I can find no reference for such
>>> solutions.
>> 
>> Yes, regular CEF load-balancing also achieves a similar result, with
>> the same caveat as above.
> 
> Can you please comment on CEF-based, possibly cross-site setup,
> considering the clarification above? Is it possible to hack CEF (by
> design, by tweaking or whatever) to achieve symmetricity of the
> forward and return path when load splitting between multiple paths?

Hmm, this could be tricky. There are two aspects to it:

1) CEF, by default, avoids polarization by mixing a per-node "universal
ID" into the hash function. Polarization affects certain topologies:
some paths end up unused because multiple nodes compute the same hash
value for a given <src,dest> tuple. Since your requirement is exactly
that all nodes compute the same hash value, you need to configure an
identical ID on all involved nodes via "ip cef load-sharing algorithm
universal <id>".
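To make (1) concrete, here is a toy Python sketch. It is nothing like
the real CEF hash; the addresses, the seed value and the path count are
made up. It only illustrates the mechanism: the per-node seed changes
which path a given <src,dest> tuple maps to, so identical seeds on all
nodes yield identical decisions.

```python
import hashlib

def pick_path(src, dst, universal_id, n_paths):
    """Toy stand-in for a seeded load-sharing hash: mix the <src,dst>
    tuple with the node's universal ID and reduce to a path index."""
    key = f"{src},{dst},{universal_id:x}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Two nodes configured with the same universal ID make the same
# choice for a given flow:
node_a = pick_path("10.0.0.1", "192.0.2.9", 0xF00D, 4)
node_b = pick_path("10.0.0.1", "192.0.2.9", 0xF00D, 4)
print(node_a == node_b)  # always True for identical seeds
```

With differing seeds, the same tuple will generally land on different
path indices on different nodes, which is the polarization-avoidance
behaviour you would need to defeat here.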

2) The physical link used also depends on the order in which the CEF
adjacencies are set up (i.e. the order links go up/down, ARP replies
arrive, etc.). In
legacy CEF code (before 12.2(33)SXH/SRA), the order is
non-deterministic, so even with the same hash value (see (1)), you might
end up hashing to different interfaces. I recall that this was changed
in the CEF rewrite in SXH/SRA, but I'm not entirely sure if this, along
with an identical universal ID, will address your requirement. 
Can you try this in the lab? "show mls cef exact-route <source> <dest>
[<src-l4> <dest-l4>]" shows the result of the CEF hash for a given flow.
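A toy sketch of point (2), with made-up interface names: even when two
chassis agree on the hash bucket, the bucket-to-interface mapping can
differ if the adjacencies were installed in a different order.

```python
def bucket_to_interface(bucket, interfaces):
    """Map a hash bucket to an interface; 'interfaces' is listed in
    the (non-deterministic) order the adjacencies were installed."""
    return interfaces[bucket % len(interfaces)]

order_a = ["Gi1/1", "Gi1/2"]  # chassis A brought Gi1/1 up first
order_b = ["Gi1/2", "Gi1/1"]  # chassis B the other way round

print(bucket_to_interface(0, order_a))  # Gi1/1
print(bucket_to_interface(0, order_b))  # Gi1/2 -- same bucket,
                                        # different physical link
```

This is exactly what "show mls cef exact-route" would let you compare
across the two chassis in the lab.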

Having said all this: I wouldn't base an important design on the above
behaviour. Load-sharing is really a local decision; if the algorithm
ever changes, your solution will break.

        oli
 
 
_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
