On Thu, Feb 24, 2011 at 12:26 AM, Simone Abbakus wrote:

> Hi,

Hi Simone,

thank you very much for looking into this.

> I would like to know whether the way vde_switch manages its packet
> queue (packetq.c) has some theoretical basis, because some
> improvements may be possible to reduce out-of-order UDP packets (and
> perhaps packet loss as well). For example, if a packet can't be
> transmitted "now", it is simply skipped and a resend is attempted
> later. The number of retries is fixed, as is the timeout. I'm asking
> myself whether a smarter way to manage the queue exists, because these
> fixed values are likely not optimal.

I didn't participate in the original packetq design, but I'm fairly
confident it was implemented without any prior performance
investigation of potential alternatives.
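
Just so we're on the same page, this is roughly how I read the current
logic (paraphrased from memory; all names and values below are
illustrative, NOT the actual packetq.c source):

#include <stdlib.h>

#define MAXTRIES 10          /* fixed retry budget (illustrative value) */
#define RETRY_PERIOD_MS 50   /* fixed interval between passes (illustrative) */

struct qpacket {
    struct qpacket *next;
    int tries;               /* send attempts so far */
    /* ... destination port, payload, length ... */
};

/* Hypothetical helper: attempts one transmission, returns 0 on success. */
int try_send(struct qpacket *p);

/* Called every RETRY_PERIOD_MS: walk the queue and retry each packet,
 * dropping those that have exhausted their fixed retry budget. */
static void packetq_try(struct qpacket **head)
{
    struct qpacket **p = head;
    while (*p) {
        if (try_send(*p) == 0 || ++(*p)->tries >= MAXTRIES) {
            struct qpacket *dead = *p;   /* sent, or out of retries */
            *p = dead->next;
            free(dead);
        } else {
            p = &(*p)->next;             /* keep it for the next pass */
        }
    }
}

Both MAXTRIES and RETRY_PERIOD_MS being compile-time constants is, I
think, exactly the rigidity you're pointing at.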

> I'm trying to implement a dynamic timeout. After some tests with a
> variable timeout, it seems to improve TCP performance (478 Mb/s max ->
> 533 Mb/s max), probably thanks to fewer out-of-order (OOO) packets,
> and it gives comparable UDP performance with far fewer OOO packets
> (about 1/3 of the original).

The performance improvements you mention look interesting.
I'm all for discussing an alternative design, so let's discuss!

I have a few points to throw into the conversation; please tell me what
you think.

First of all, I'd like to see/discuss the performance tests you're
basing your work on, so that we can start from common ground with some
reproducible benchmarks.

Then, to discuss the changes, can you please send us your patch(es)?
A pseudo-code proposal of the algorithm works just as well.
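
Just to make the discussion concrete, here is my guess, in purely
illustrative C, at what a dynamic timeout could look like, scaling the
retry delay with the current backlog (everything here is made up; your
actual approach may well differ):

#define BASE_TIMEOUT_MS 10   /* floor: delay used when the queue is empty */
#define MAX_TIMEOUT_MS  200  /* ceiling: never back off longer than this */

/* Grow the retry delay linearly with the backlog, clamped to a ceiling,
 * so a congested port backs off instead of being hammered. */
static int dynamic_timeout_ms(unsigned int queue_len)
{
    unsigned int t = BASE_TIMEOUT_MS + queue_len;  /* +1 ms per queued packet */
    return t > MAX_TIMEOUT_MS ? MAX_TIMEOUT_MS : (int)t;
}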

> I don't know how a real switch works, but I think this component
> should aim to be as fast as possible while keeping under control
> every aspect of the real device it reproduces.

I'm not an expert in how switch/router queuing works either; if we're
planning to improve this, we should put some effort into reading up on
the subject. Did you consider alternative queuing schemes, beyond
changing the timeout parameters? For instance, while designing the vde 3
proposal we opted for separate queuing settings for each port. The
rationale is that different ports/connections have different traffic
capacity, so it may be unfair to apply the same timeout parameters to
slow and fast links alike.
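
Hypothetically, something along these lines (this is not actual vde 3
code, just a sketch of the idea):

/* Per-port queue settings instead of a single global configuration;
 * a slow link can then be tuned independently of a fast one. */
struct port_queue_conf {
    unsigned int timeout_ms;  /* retry delay for this port */
    unsigned int max_tries;   /* retry budget for this port */
    unsigned int max_queued;  /* per-port queue depth limit */
};

struct port {
    int fd;
    struct port_queue_conf qconf;  /* one instance per port */
    /* ... the rest of the port state ... */
};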

> Notice that on a quad-core CPU with 2 kvm VMs connected to the same
> vde_switch, the vde_switch process never goes above 40-55% CPU. Maybe
> that's normal because the process is I/O bound, or maybe something
> weird is happening inside the component... However, the free CPU time
> could be used to implement a more complex, more efficient queue
> management.
> I would like to discuss some of these thoughts with you in more depth.

I think that in 2011, doing some basic calculations each time we
process the queue doesn't hurt much. The most important thing is to
profile the application after the changes, so we can talk with real
data in front of us.
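
For example, a handful of counters like these (hypothetical names) cost
almost nothing per pass and would give us real numbers to argue about:

#include <stdio.h>

/* Updated on every queue pass; a few increments are negligible
 * compared to the I/O work the switch is already doing. */
struct packetq_stats {
    unsigned long passes;    /* how many times the queue was processed */
    unsigned long sent;      /* packets transmitted successfully */
    unsigned long requeued;  /* packets kept for another retry */
    unsigned long dropped;   /* packets that exhausted their retries */
};

/* Dump periodically, e.g. from a debug command or a signal handler. */
static void packetq_stats_dump(const struct packetq_stats *s)
{
    fprintf(stderr, "packetq: passes=%lu sent=%lu requeued=%lu dropped=%lu\n",
            s->passes, s->sent, s->requeued, s->dropped);
}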

Regarding the CPU usage, what exactly did you measure? What bottleneck
gets hit during your benchmarks? How many context switches do you see?
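
If it helps, getrusage() can report the context-switch counts from
inside the process itself:

#include <stdio.h>
#include <sys/resource.h>

/* Print voluntary/involuntary context switches for the current process;
 * on Linux, getrusage() fills in ru_nvcsw and ru_nivcsw. */
static void print_ctxt_switches(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        fprintf(stderr, "ctxt switches: voluntary=%ld involuntary=%ld\n",
                ru.ru_nvcsw, ru.ru_nivcsw);
}

A high voluntary count would point at the process sleeping on I/O,
which would fit your "never above 40-55% CPU" observation.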


Thank you very much for your contribution.


Luca

-- 
Beware of programmers who carry screwdrivers.
                        -- Leonard Brandwein

http://www.artha.org/
