I would like to know whether the way vde_switch manages its packet
queue (packetq.c) has some theoretical basis, because some
improvements may be possible to reduce out-of-order (OOO) UDP packets
(and maybe packet loss as well). For example, if a packet can't be
transmitted "now", it is simply skipped and retried later. The number
of retries is fixed, and so is the timeout. I'm asking myself whether
a smarter way to manage the queue exists, because these fixed values
are likely not optimal.
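
To make concrete what I mean, here is a rough sketch of that
fixed-retry scheme as I understand it. The names and constants
(pq_add, pq_try, PQ_TIMEOUT_MS, PQ_MAXTRIES) are mine for
illustration, not the actual packetq.c identifiers:

    /* Sketch of a fixed-retry packet queue: a packet that can't be
     * sent now is queued and retried at a fixed interval for a fixed
     * number of attempts, then silently dropped. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>

    #define PQ_TIMEOUT_MS 5   /* fixed interval at which the event
                                 loop calls pq_try (illustrative) */
    #define PQ_MAXTRIES  10   /* fixed number of attempts (illustrative) */

    struct pq_packet {
        struct pq_packet *next;
        int fd;               /* destination port */
        size_t len;
        int tries;            /* attempts left */
        unsigned char data[];
    };

    static struct pq_packet *pq_head;

    /* enqueue a packet whose send would have blocked */
    void pq_add(int fd, const void *buf, size_t len)
    {
        struct pq_packet *p = malloc(sizeof(*p) + len);
        struct pq_packet **pp;
        if (!p)
            return;           /* packet is lost on allocation failure */
        p->next = NULL;
        p->fd = fd;
        p->len = len;
        p->tries = PQ_MAXTRIES;
        memcpy(p->data, buf, len);
        for (pp = &pq_head; *pp; pp = &(*pp)->next)
            ;                 /* append at tail to keep FIFO order */
        *pp = p;
    }

    /* called every PQ_TIMEOUT_MS: retry each queued packet once and
     * drop it when its attempts are exhausted */
    void pq_try(ssize_t (*send_fn)(int fd, const void *buf, size_t len))
    {
        struct pq_packet **pp = &pq_head;
        while (*pp) {
            struct pq_packet *p = *pp;
            if (send_fn(p->fd, p->data, p->len) >= 0 || --p->tries == 0) {
                *pp = p->next;    /* sent, or gave up: unlink and free */
                free(p);
            } else {
                pp = &p->next;
            }
        }
    }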
I'm trying to implement a dynamic timeout. After some tests with a
variable timeout, it seems to improve TCP performance (478 Mb/s max
-> 533 Mb/s max), probably thanks to fewer OOO packets, and to give
comparable UDP performance with far fewer OOO packets (about 1/3 of
the original). I don't know how a real switch works internally, but I
think this component should aim to be as fast as possible while still
keeping every aspect of the real device it reproduces under control.
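
The adaptation I'm experimenting with is along these lines (a minimal
sketch, not the exact patch; the constants and the pq_adjust_timeout
name are illustrative):

    /* Dynamic retry interval: probe shorter intervals while the
     * queue drains, back off multiplicatively when nothing gets
     * through, within fixed bounds. */
    #define PQ_TIMEOUT_MIN_MS  1
    #define PQ_TIMEOUT_MAX_MS 20

    static int pq_timeout_ms = 5;  /* the event loop schedules the
                                      next pq_try this far ahead */

    void pq_adjust_timeout(int sent, int still_queued)
    {
        if (still_queued == 0) {
            /* queue drained: try a shorter interval next time
             * (additive decrease) */
            if (pq_timeout_ms > PQ_TIMEOUT_MIN_MS)
                pq_timeout_ms--;
        } else if (sent == 0) {
            /* nothing got through: back off
             * (multiplicative increase) */
            pq_timeout_ms *= 2;
            if (pq_timeout_ms > PQ_TIMEOUT_MAX_MS)
                pq_timeout_ms = PQ_TIMEOUT_MAX_MS;
        }
    }

Retrying sooner while the receiver is draining means a queued packet
spends less time sitting behind newer traffic, which is probably
where the OOO reduction comes from.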
Notice that on a quad-core CPU with 2 kvm VMs connected to the same
vde_switch, the vde_switch process never goes above 40..55% CPU.
Maybe that's normal because the process is I/O bound, or maybe
something weird is happening inside the component... In any case, the
free CPU time could be used to implement a more complex, more
efficient queue management.

I would like to discuss some of these thoughts with you in more depth.


