Hello Renzo, thank you for looking into this. We look forward to having the
performance of the switch improved, so I second Simone's idea of putting
hands on the queues.

Since nowadays I work with (non-virtual) switches, I would like to offer
some considerations on those.

On Mon, Mar 14, 2011 at 04:37:03PM +0100, Renzo Davoli wrote:
> P.S. I am currently brainstorming about the idea of per-port queues/delays, 
> and I am not
> so sure that increasing the frequency for long queues is a good idea.
> If there is a congested line, things may evolve to a worse situation.
> question #1: is this the behavior of (non virtual) switch?

Every physical switch, even the off-the-shelf ones you can buy for a few
euros, has separate queues on the output ports, at least one per port.

More advanced switches, supporting QoS, have more than one queue per
outgoing port and possibly also one queue per incoming port.

Let me try to summarize the options we have for virtualizing switch queues:

1 - The simplest queue model for a switch is one queue per outgoing port
with a fixed-length buffer and a drop-tail policy, dropping packets that
cannot make it to the output after switching.
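A minimal sketch of such a per-port drop-tail queue (names and the QLEN
value are mine, not taken from the vde_switch sources):

```c
#include <stddef.h>

#define QLEN 4                      /* fixed per-port buffer, in packets */

struct port_queue {
    void *pkt[QLEN];                /* ring buffer of packet pointers */
    int head, count;
};

/* Drop-tail: refuse the packet when the buffer is full. */
static int pq_enqueue(struct port_queue *q, void *p)
{
    if (q->count == QLEN)
        return -1;                  /* packet dropped at the tail */
    q->pkt[(q->head + q->count) % QLEN] = p;
    q->count++;
    return 0;
}

static void *pq_dequeue(struct port_queue *q)
{
    void *p;
    if (q->count == 0)
        return NULL;
    p = q->pkt[q->head];
    q->head = (q->head + 1) % QLEN;
    q->count--;
    return p;
}
```

The switching code would call pq_enqueue() on the destination port after
the MAC lookup, and the port's output handler would drain it with
pq_dequeue().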

2 - A step further would be a fixed-rate policy on a single port. I think the
most common is the token bucket, but RED could also be an option.
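For reference, the classic token bucket fits in a few lines. This is a
sketch, not vde code; the rate/burst parameters and the choice of seconds
as the time unit are my assumptions:

```c
#include <stddef.h>

struct token_bucket {
    double tokens;      /* current fill, in bytes */
    double rate;        /* refill rate, bytes/second */
    double burst;       /* bucket capacity, bytes */
    double last;        /* timestamp of last update, seconds */
};

/* Returns 1 if a packet of `len` bytes conforms to the rate (and consumes
 * its tokens), 0 if it is over rate and should be dropped or delayed. */
static int tb_conform(struct token_bucket *tb, double now, size_t len)
{
    tb->tokens += (now - tb->last) * tb->rate;  /* refill since last call */
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;                 /* bucket never overflows */
    tb->last = now;
    if (tb->tokens < (double)len)
        return 0;                               /* over rate */
    tb->tokens -= (double)len;
    return 1;                                   /* conforms, forward it */
}
```

A per-port instance of this would sit in front of the output queue: packets
that do not conform are dropped (policing) or held back (shaping).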

3 - Step three in order of complexity is supporting QoS, i.e. queues with
different classes on the same output port. The setup could be configurable
by the user, obey the ToS field in the IP header, or recognize the type of
traffic in order to assign the classes. An example of auto-detected
classes, with priority from low to high, could be:
        - P2P traffic
        - Best effort (TCP, anything else...)
        - Audio/video streams
        - VoIP traffic
Of course, implementing something like that requires some sort of
intelligence, and it is also a non-standard cross-layer behavior.
Scheduling among the queues can be simple round-robin or a more complex
policy.


4 - Adding queues on input ports makes it possible to implement a
back-pressure mechanism that refuses incoming packets before switching.
This is rather complicated, because it requires a fast pre-switching
algorithm to identify the outgoing flows of the frames sitting in the input
queues.
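Just to give the flavor of step 4, the admission check could look like
this sketch, where the fast pre-switching lookup is assumed to have already
mapped the incoming frame to its output port (all names and the threshold
are invented for illustration):

```c
#define NPORTS 4
#define BP_THRESHOLD 6          /* refuse input above this fill level */

/* Current occupancy of each output port's queue, in packets. */
static int out_fill[NPORTS];

/* Back-pressure: before accepting a frame on an input port, check the
 * queue of the output port it is headed to (out_port would come from a
 * fast MAC-table lookup done before the real switching) and refuse the
 * frame if that queue is already nearly full. */
static int accept_frame(int out_port)
{
    if (out_fill[out_port] >= BP_THRESHOLD)
        return 0;               /* push back: refuse the incoming frame */
    out_fill[out_port]++;       /* frame admitted towards that port */
    return 1;
}
```

The hard part, as said above, is doing that destination lookup cheaply for
every incoming frame before the proper switching step.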


As a side note, as already mentioned, I ported some QoS modules from
Linux/netfilter for use in vde_l3; these are in fact the dynamically
loadable modules already in the repository. I am not saying that we have to
use them, but you may want to take a look at them if you consider step 2 or
beyond.

If I understand the implementation you are currently considering, I think
that dropping packets that cannot be sent, instead of requeuing them, could
be a better option, but this is only my opinion.


ciao 

-- 
Daniele



_______________________________________________
vde-users mailing list
vde-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/vde-users
