Hello Gurucharan,

Thanks for the reply.

There are no physical ports attached to that OVS bridge. My intention was
precisely to limit bandwidth on the br3 port itself (i.e. the OpenFlow
keyword LOCAL), and my second attempt was to limit bandwidth on the internal
patch port that connects bridge br3 with bridge br2. VMs are attached to
both bridges and may communicate with each other through the patch port.

# ovs-vsctl show
b11e83cb-d741-4a59-90f7-ea9693d508cf
    Bridge "br2"
        Port "b3p"
            Interface "b3p"
                type: patch
                options: {peer="b2p"}
        Port "vnet1"
            Interface "vnet1"
        Port "br2"
            Interface "br2"
                type: internal
    Bridge "br3"
        Port "br3"
            Interface "br3"
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port "b2p"
            Interface "b2p"
                type: patch
                options: {peer="b3p"}

Here vnet1 is the port of the first VM (VM1) and vnet0 is the port of the
second VM (VM2). Now I just want to limit bandwidth from VM2 to VM1, so I
run:
# ovs-vsctl set Port b2p qos=@newq -- --id=@newq create qos type=linux-htb
other-config:max-rate=100000000 queues=111=@q1 -- --id=@q1 create queue
other-config:min-rate=0 other-config:max-rate=10
65c5488d-2066-4d97-b21f-ba369a8b2920
1e3726b4-12e2-4184-b8e4-13f9c692095f
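(For completeness, this is just how I sanity-check that the rows were
created and linked; the two UUIDs printed above are the QoS and the Queue
rows:)

```shell
# Verify the QoS and Queue rows exist and are linked to the port:
ovs-vsctl list qos
ovs-vsctl list queue
ovs-vsctl list port b2p   # its "qos" column should point at the QoS row
```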

# ovs-ofctl show br3
OFPT_FEATURES_REPLY (xid=0x2): dpid:00007a42d1ca7e05
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST
SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 3(b2p): addr:ae:a4:9d:8b:9f:a9
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 6(vnet0): addr:fe:54:00:ac:e1:a6
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(br3): addr:7a:42:d1:ca:7e:05
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

# ovs-ofctl add-flow br3 priority=5,in_port=6,actions=enqueue:3:111
# ovs-ofctl dump-flows br3
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3.285s, table=0, n_packets=0, n_bytes=0, idle_age=3,
priority=5,in_port=6 actions=enqueue:3:111
 cookie=0x0, duration=79191.532s, table=0, n_packets=744, n_bytes=431875,
idle_age=7, hard_age=65534, priority=0 actions=NORMAL
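Side note: besides dump-flows, per-queue statistics can show whether packets
actually land on queue 111 rather than merely matching the flow (ovs-ofctl
queue-stats takes optional port and queue arguments):

```shell
# TX counters for queue 111 on port 3 (b2p); if these stay at zero while
# the flow's n_packets grows, packets match the flow but are never placed
# on a real kernel queue:
ovs-ofctl queue-stats br3 3 111
```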

Then I start a ping6 from VM2 to VM1; the pings go through the patch ports
b3p and b2p. I expected the QoS queue to take effect and limit the
bandwidth to the very small max-rate=10.

[root@VM2 ~]# ping6 fe80::5054:ff:fe72:b2bc%eth0 -i 0.1 -s 1000
PING fe80::5054:ff:fe72:b2bc%eth0(fe80::5054:ff:fe72:b2bc) 1000 data bytes
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=1 ttl=64 time=0.536 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=2 ttl=64 time=0.153 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=3 ttl=64 time=0.150 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=4 ttl=64 time=0.163 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=5 ttl=64 time=0.142 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=6 ttl=64 time=0.142 ms
1008 bytes from fe80::5054:ff:fe72:b2bc: icmp_seq=7 ttl=64 time=0.255 ms
^C
--- fe80::5054:ff:fe72:b2bc%eth0 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 599ms
rtt min/avg/max/mdev = 0.142/0.220/0.536/0.134 ms



# ovs-ofctl dump-flows br3
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=12.228s, table=0, n_packets=36, n_bytes=38232,
idle_age=6, priority=5,in_port=6 actions=enqueue:3:111
 cookie=0x0, duration=79236.643s, table=0, n_packets=814, n_bytes=505239,
idle_age=6, hard_age=65534, priority=0 actions=NORMAL

So n_packets=36 shows that the OpenFlow rule is actually matching, but the
queue limit of max-rate=10 is not taking effect :(
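My guess (unverified) is that type=linux-htb is implemented via the kernel
tc HTB qdisc on the corresponding Linux network device, and a patch port has
no Linux network device at all, so there is nothing for the qdisc to attach
to. That would be easy to check:

```shell
# A patch port is internal to OVS and has no Linux network device,
# so there should be no interface for an HTB qdisc to attach to:
ip link show b2p        # expected to fail: device does not exist
tc qdisc show dev b2p   # likewise
# By contrast, a VM tap device is a real interface:
tc qdisc show dev vnet0
```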



On Tue, Mar 24, 2015 at 8:05 PM, Gurucharan Shetty <[email protected]>
wrote:

> On Mon, Mar 23, 2015 at 8:01 PM, Tim Bagr <[email protected]> wrote:
> > Hello all,
> >
> > I tried to limit bandwidth for a particular VM attached to an
> > Open vSwitch port.
> >
> > There are VMs attached to virtual switch br3. I want to limit
> > bandwidth for one VM, which is attached to br3 on in_port 2. I do
> > the following:
> > 1)
> > # ovs-vsctl set port br3 qos=@newqos1 -- --id=@newqos1 create qos
> > type=linux-htb other-config:max-rate=10 queues:123=@vn2queue --
> > --id=@vn2queue create queue other-config:max-rate=10
> > # ovs-ofctl add-flow br3
> > "priority=50,in_port=2,actions=set_queue:123,normal"
> >
> > The command worked! The bandwidth is limited as expected for all
> > traffic going through the LOCAL port of br3; the limit is so tight
> > that even ICMP echo hardly passes. All VMs are affected by this.
> >
> > After that I tried to limit the bandwidth for only one VM, and give
> > almost unlimited bandwidth to the others:
> > 2)
> > # ovs-vsctl set port br3 qos=@newqos1 -- --id=@newqos1 create qos
> > type=linux-htb other-config:max-rate=100000000
> > queues:123=4d65ddec-5c5d-4c87-8165-dd51431c7ab3
> >
> > Here the ID 4d65ddec-5c5d-4c87-8165-dd51431c7ab3 is the queue that
> > was created in the previous step with max-rate=10. The OpenFlow rule
> > redirecting traffic from in_port=2 to that queue remains the same.
> >
> > After that, all VMs (including the one attached to in_port 2) got
> > that big bandwidth (100000000).
> > But the expectation was that the VM on port 2 would not even be able
> > to send pings (as its traffic should be limited by max-rate=10).
> >
> >
> > Am I doing something wrong, or do I have an incorrect understanding
> > of how QoS works?
>
> I believe that your understanding is a little flawed (or I did not
> follow your description correctly).
> For your case, I think you should be applying QoS on the outgoing
> physical port (e.g. ethX) which has been added as a port of the OVS
> bridge, and not on the bridge itself (br3 in your case).
>
> You usually create multiple queues with different configured min and
> max rates.
>
> e.g:
> if eth1 is a port of br1:
>
> sudo ovs-vsctl -- set port eth1 qos=@newqos \
> -- --id=@newqos create qos type=linux-htb
> other-config:max-rate=900000000 queues=10=@q0,20=@q1,30=@q2 \
> -- --id=@q0 create queue other-config:min-rate=720000000
> other-config:max-rate=900000000 \
> -- --id=@q1 create queue other-config:min-rate=0
> other-config:max-rate=80000000 \
> -- --id=@q2 create queue other-config:min-rate=0
> other-config:max-rate=70000000
>
> The above creates 3 queues with queue-numbers 10, 20 and 30
>
> The next step is to get the ofport value of 'eth1' using 'ovs-ofctl show
> br1'
>
> Then you can add your flows for bridge br1 and direct particular set
> of traffic to particular queues.
> e.g:
> sudo ovs-ofctl add-flow br1 priority=65500,in_port=3,actions=enqueue:2:10
> sudo ovs-ofctl add-flow br1 priority=65500,in_port=4,actions=enqueue:2:20
> sudo ovs-ofctl add-flow br1 priority=65500,in_port=5,actions=enqueue:2:30
>
> The in_port values 3, 4 and 5 above can be the vifs of your VMs if
> they have been attached to br1.
>
> The above example is not what is typically used in an OVS deployment
> that involves overlay tunnels. In those cases, you attach vifs to a
> separate OVS bridge, and your flows on that bridge enqueue to a tunnel
> port's ofport value instead (but your queue is still on the physical
> port).
>
>
> > Or it is a known issue within OVS?
> > Please help.
> >
> > _______________________________________________
> > discuss mailing list
> > [email protected]
> > http://openvswitch.org/mailman/listinfo/discuss
> >
>
