On Mon, Mar 23, 2015 at 8:01 PM, Tim Bagr <[email protected]> wrote:
> Hello all,
>
> I tried to limit bandwidth for particular VM attached to OpenVSwitch port.
>
> There are VMs attached to the virtual switch br3. I want to limit bandwidth for
> one VM, which is attached to br3 on in_port 2. I did the following:
> 1)
> # ovs-vsctl set port br3 qos=@newqos1 -- --id=@newqos1 create qos
> type=linux-htb other-config:max-rate=10 queues:123=@vn2queue --
> --id=@vn2queue create queue other-config:max-rate=10
> # ovs-ofctl add-flow br3
> "priority=50,in_port=2,actions=set_queue:123,normal"
>
> The command worked! The bandwidth is limited as expected for all traffic going
> through the LOCAL port of br3; the limit is so tight that even ICMP echo barely
> passes. All VMs are affected by this.
>
> After that I tried to limit the bandwidth for only one VM, and give almost
> unlimited bandwidth for the others:
> 2)
> # ovs-vsctl set port br3 qos=@newqos1 -- --id=@newqos1 create qos
> type=linux-htb other-config:max-rate=100000000
> queues:123=4d65ddec-5c5d-4c87-8165-dd51431c7ab3
>
> Here the ID 4d65ddec-5c5d-4c87-8165-dd51431c7ab3 is the queue that was
> created in the previous step with max-rate=10.
> The OpenFlow rule redirecting traffic from in_port=2 to that queue
> remains the same.
>
> After that, all VMs (including the one attached to in_port 2) got the big
> bandwidth (100000000).
> But I expected that the VM on port 2 would not even be able to send pings
> (as its traffic should be limited by max-rate=10).
>
>
> Am I doing something wrong, or do I have an incorrect understanding of how
> QoS works?

I believe that your understanding is a little flawed (or I did not
follow your description correctly).
For your case, I think you should be applying QoS on the outgoing
physical port (e.g. ethX) that has been added as a port of the OVS
bridge, and not on the bridge itself (br3 in your case).

You usually create multiple queues, each with its own configured
min-rate and max-rate.

e.g.:
if eth1 is a port of br1:

sudo ovs-vsctl -- set port eth1 qos=@newqos \
-- --id=@newqos create qos type=linux-htb \
other-config:max-rate=900000000 queues=10=@q0,20=@q1,30=@q2 \
-- --id=@q0 create queue other-config:min-rate=720000000 \
other-config:max-rate=900000000 \
-- --id=@q1 create queue other-config:min-rate=0 \
other-config:max-rate=80000000 \
-- --id=@q2 create queue other-config:min-rate=0 other-config:max-rate=70000000

The above creates 3 queues with queue numbers 10, 20 and 30.
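To sanity-check the setup, you can inspect the database records and the
resulting tc configuration (assuming eth1 is the device the QoS was
attached to, as above):

```shell
# Show the QoS and Queue records that ovs-vsctl created
ovs-vsctl list qos
ovs-vsctl list queue

# linux-htb QoS is realized via tc on the underlying device, so the
# matching HTB classes should show up there as well
tc class show dev eth1
```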

The next step is to get the ofport value of 'eth1' using 'ovs-ofctl show br1'
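If you prefer, the ofport value can also be read straight from the
database instead of parsing the 'ovs-ofctl show' output:

```shell
# Print the OpenFlow port number assigned to eth1
ovs-vsctl get Interface eth1 ofport
```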

Then you can add flows on bridge br1 to direct particular sets of
traffic to particular queues.
e.g:
sudo ovs-ofctl add-flow br1 priority=65500,in_port=3,actions=enqueue:2:10
sudo ovs-ofctl add-flow br1 priority=65500,in_port=4,actions=enqueue:2:20
sudo ovs-ofctl add-flow br1 priority=65500,in_port=5,actions=enqueue:2:30

The in_port values 3, 4 and 5 above can be the vifs of your VMs, if
they have been attached to br1.
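Once the flows are in place, you can check whether traffic is actually
landing in the intended queues via the per-queue counters (here 2 is
assumed to be eth1's ofport, as in the flows above):

```shell
# Show tx packet/byte counters for each queue on ofport 2 of br1
ovs-ofctl queue-stats br1 2
```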

The above example is not what you would use in a typical OVS deployment
that involves overlay tunnels. In those cases, you attach the vifs to a
separate OVS bridge, and your flows on that bridge enqueue to a tunnel
port's ofport value instead (but the queue is still on the physical
port).
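One more thing worth noting while experimenting: replacing a port's QoS
does not delete the old QoS/Queue records from the database, so stale
rows can accumulate. To start over cleanly you can do something like
this (assuming no other port still references these records):

```shell
# Detach the QoS from the port, then delete all QoS and Queue records
ovs-vsctl clear port eth1 qos
ovs-vsctl -- --all destroy qos -- --all destroy queue
```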





> Or is it a known issue within OVS?
> Please help.
>
> _______________________________________________
> discuss mailing list
> [email protected]
> http://openvswitch.org/mailman/listinfo/discuss
>