in svn, under:
I have implemented an experimental version including:
- nanosecond resolution for delays: it uses ppoll
- dynamic delays (similar to yours): the first delay is 5ms (the mean value), and
then a value between 1ms and 10ms depending on the length of the queue.
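The two points above can be sketched in C. This is an illustrative sketch, not the actual vde code: the mapping from queue length to delay is a hypothetical placeholder (`delay_for_queue` and `wait_on_queue` are invented names); only the 1ms..10ms window and the 5ms mean come from the description above.

```c
#define _GNU_SOURCE
#include <poll.h>
#include <stddef.h>
#include <time.h>

/* Pick a delay between 1ms and 10ms from the queue length
 * (5ms is the mean / first value).  The exact mapping here is
 * a made-up placeholder for illustration. */
static struct timespec delay_for_queue(size_t qlen)
{
    long ms = 5;                  /* first delay: the 5ms mean */
    if (qlen > 0) {
        ms = 10 - (long)qlen;     /* assumed: longer queue -> shorter delay */
        if (ms < 1)
            ms = 1;               /* clamp into the 1ms..10ms window */
    }
    struct timespec ts = { 0, ms * 1000000L };
    return ts;
}

/* With no fds to watch, ppoll simply sleeps for the timespec,
 * giving the sub-millisecond resolution that poll() cannot offer. */
static int wait_on_queue(size_t qlen)
{
    struct timespec ts = delay_for_queue(qlen);
    return ppoll(NULL, 0, &ts, NULL);  /* returns 0 on timeout */
}
```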

Let me know if it works and if it gives the same performance improvement you got 
with your code.

I am asking the other members of the development team whether:
- ppoll
- clock_gettime
- librt (linking with -lrt)

are portable to the other architectures supported by vde (especially MacOSX 
and BSD).
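If ppoll turns out not to be portable, one possible fallback (a sketch under assumptions, not a tested vde change) is pselect(), which is POSIX and also takes a struct timespec, so MacOSX/BSD could keep sub-millisecond timeouts. The HAVE_PPOLL macro below is a hypothetical configure-time check, not something vde already defines.

```c
#include <sys/select.h>
#include <stddef.h>
#include <time.h>

/* Portable nanosecond-resolution wait: use ppoll where a (hypothetical)
 * configure check found it, otherwise fall back to POSIX pselect().
 * Both calls return 0 when the timeout expires. */
static int ns_wait(const struct timespec *ts)
{
#ifdef HAVE_PPOLL
    return ppoll(NULL, 0, ts, NULL);
#else
    /* No fds watched: pselect just sleeps for *ts. */
    return pselect(0, NULL, NULL, NULL, ts, NULL);
#endif
}
```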

Thank you to everybody.


P.S. I am currently brainstorming about the idea of per-port queues/delays, and 
I am not so sure that increasing the polling frequency for long queues is a 
good idea. If a line is congested, things may evolve into a worse situation.
question #1: is this the behavior of a (non-virtual) switch?
question #2: if it is different, is there something in virtualization that 
supports the correctness of this behavior?

P.S.#2 Simone, have you tried the kvde_switch? 
I am curious to see if it has a better or worse behavior in your situation.

vde-users mailing list
