On Fri, Feb 25, 2011 at 12:17:58AM +0100, Ludovico Gardenghi wrote:
> AFAIK, at least for the "average Cisco switch" on which I have had some
> experience, the default behaviour is per-interface tail-drop. That is,
> each egress interface has a buffer. When it's full, new packets are
> dropped and there's no timeout -- enqueuing will happen again when some
> old packets are delivered. Alternatives are variants of RED/WRED
> algorithms, which start dropping random packets before the queue gets
> full, trying to address some issues of tail-dropping especially for TCP
> traffic (global synchronization is one of them).
> If you go up with the price (or to layer 3 switching) you get more
> refined QoS settings which are usually performed by the software (while
> the tail-drop queue is handled by the hardware), but I think they're out
> of scope here.

I agree. Let me add something about that.
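To make the tail-drop behaviour described above concrete, here is a toy sketch (all names are mine, this is not the Cisco firmware nor any vde code): a fixed-size FIFO per egress interface, where a full queue simply rejects new packets, and delivering a packet frees a slot so enqueuing resumes.

```c
#include <assert.h>

/* Illustrative per-interface tail-drop queue: a fixed-size ring
   buffer. When it is full, new packets are dropped on arrival --
   there is no timeout; space reappears only when old packets are
   delivered. QLEN is deliberately tiny for the example. */
#define QLEN 4

struct queue {
    int head, count;
    int pkt[QLEN];          /* stand-in for real packet buffers */
};

/* returns 1 if enqueued, 0 if the packet was tail-dropped */
static int enqueue(struct queue *q, int pkt)
{
    if (q->count == QLEN)
        return 0;           /* queue full: tail drop */
    q->pkt[(q->head + q->count) % QLEN] = pkt;
    q->count++;
    return 1;
}

/* delivering a packet frees a slot, so enqueuing works again */
static int dequeue(struct queue *q, int *pkt)
{
    if (q->count == 0)
        return 0;
    *pkt = q->pkt[q->head];
    q->head = (q->head + 1) % QLEN;
    q->count--;
    return 1;
}
```

Nothing more to it: the drop decision is a single comparison, which is why the hardware can do it at line rate.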

Virtualizing a switch is a very hard task, because real ones come with a
dedicated processing unit, with realtime capabilities and multiple cores,
devoted almost 100% to the job. That said, nowadays very fast switches
(>1 Gbit) use advanced QoS algorithms and shapers on their output queues
to cope with traffic overloads.

vde_l3 supports shapers as separate plugins, and there are already tail-drop
and RED policies implemented as separate modules; I don't know whether they
can be useful here. RED is very difficult to implement correctly, so I
basically borrowed it from the Linux implementation ;).

Have a nice day


vde-users mailing list
