On Dec 9, 2013, at 8:19 AM, Perhaps Lee <07xl...@gmail.com> wrote:
> When testing the bandwidth isolation, I add the queues like:
> 
> dpctl add-queue tcp:localhost:6634 3 1 200
> dpctl add-queue tcp:localhost:6634 3 2 800
> 
> Then I assign queue-1 for udp traffic and queue-2 for tcp traffic. The results are 
> tcp: 25.7 Mbits/sec, udp: 26.5 Mbits/sec. It seems the bandwidth isolation 
> is not working. 
> By running the tc command, I get:
> 

hi,

my guess is that you are hitting the limits of user-space software switching on 
these devices, which unfortunately have very slow processors.

for example, your queues are set up for 200 Mbps and 800 Mbps respectively, 
yet you are only seeing 25.7 Mbps and 26.5 Mbps of traffic -- roughly an order 
of magnitude below your limits!  my hunch is that the user-space software 
switch is overwhelmed by the traffic and is processing the ports fairly, 
which would explain the even split.

try using much lower limits (for example, the 4 Mbps and 6 Mbps limits used on 
the slicing Wiki page) and see if that works.
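
as a concrete sketch (assuming the last argument to dpctl add-queue is the rate 
in Mbps, as in your commands, and keeping your port 3 and queue IDs 1 and 2):

dpctl add-queue tcp:localhost:6634 3 1 4
dpctl add-queue tcp:localhost:6634 3 2 6

then re-run the same tcp/udp tests and check whether something close to a 4/6 
split shows up.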

if you want to preserve the 200 (20%) and 800 (80%) numbers, one option is to 
directly create an additional parent queue with tc, limited to a speed the 
switch can actually handle (say, 20 Mbps). this is what we did in Mininet to 
get QoS working with link configuration [1]; otherwise, the kernel uses the 
nominal interface speed (1 Gbps in your case, 10 Gbps in the Mininet case) as 
the parent queue, even if the switch can't handle traffic at that rate.
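
as a rough sketch of that idea (not the exact Mininet change; the interface 
name eth1 and the HTB handles here are placeholders, and the leaf classes 
would have to line up with the queue IDs your switch actually uses):

# cap the whole port at a rate the software switch can sustain
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:1 htb rate 20mbit ceil 20mbit
# 20% / 80% of the 20 Mbps parent, mirroring your 200/800 ratio
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 4mbit ceil 20mbit
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 16mbit ceil 20mbit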


hope this helps!
Andrew



[1] https://github.com/mininet/mininet/pull/132