Hi,

   Ben, Babu, what you're proposing here makes sense; it's aligned with the
plans we were shaping for min-bandwidth guarantees in the openvswitch-agent
[1] [2].

    I wasn't aware of the possibility of setting a queue on the external
interface and referencing it from set_queue (I need to understand how a
queue on the external interface can be registered among the Open vSwitch
OpenFlow queues; is that currently possible?).
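
If that is possible, I imagine the wiring would look roughly like the
following (the interface name, queue id, OpenFlow port, and rate below are
all made up for illustration):

    # HTB qdisc with a min-rate queue on the external interface:
    ovs-vsctl set port eth1 qos=@qos -- \
      --id=@qos create qos type=linux-htb queues:42=@q -- \
      --id=@q create queue other-config:min-rate=10000000

    # Tag a VIF's traffic with that queue id from br-int:
    ovs-ofctl add-flow br-int "in_port=5 actions=set_queue:42,normal"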

    My plan, in the case of the ovs-agent solution, was to add those queues
on the "patch" ports connecting br-int to the other bridges, turning them
into veth pairs when necessary (which is slow):

[br-tun] <---> [br-int]
[br-ex*] <---> [br-int]
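
Very roughly, for the br-ex case, something like this (all names and rates
invented):

    # Replace the patch pair with a veth pair so a queue can be attached:
    ip link add int-to-ex type veth peer name ex-to-int
    ip link set int-to-ex up
    ip link set ex-to-int up
    ovs-vsctl add-port br-int int-to-ex
    ovs-vsctl add-port br-ex ex-to-int

    # Min-rate queue on the veth end that all crossing traffic traverses:
    ovs-vsctl set port ex-to-int qos=@qos -- \
      --id=@qos create qos type=linux-htb queues:0=@q -- \
      --id=@q create queue other-config:min-rate=50000000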


Please note that, by popular request, we plan to tackle [3] (ingress
bandwidth limit) by attaching a queue to the instance port (for now).
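
That would be egress shaping on the tap device, which is ingress from the
instance's point of view; for example (tap name and rate are placeholders):

    ovs-vsctl set port tap1234 qos=@qos -- \
      --id=@qos create qos type=linux-htb \
          other-config:max-rate=10000000 queues:0=@q -- \
      --id=@q create queue other-config:max-rate=10000000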

In the future, we could also provide min-bw for ingress when there is an
intermediate veth or other device to which we can attach the queue and
count all the passing traffic (to make the min effective), but I'd be very
happy to stay away from veths because of the performance penalty they
introduce.



[1] https://bugs.launchpad.net/neutron/+bug/1560963 (min bw)
[2] https://bugs.launchpad.net/neutron/+bug/1578989 (min bw, scheduling-aware)
[3] https://bugs.launchpad.net/neutron/+bug/1560961 (instance ingress max bw limit)



On Tue, May 17, 2016 at 1:08 PM, Babu Shanmugam <bscha...@redhat.com> wrote:

> Thank you for your answers Ben.
> I tried configuring an HTB qdisc on a physical interface that is not
> attached to any bridge and found egress shaping working on a tap device
> attached to br-int.
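>
> The setup was along these lines (interface name and rates here are just
> examples):
>
>     tc qdisc add dev eth1 root handle 1: htb default 1
>     tc class add dev eth1 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit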
>
> Babu
>
>
> On Tuesday 17 May 2016 01:37 AM, Ben Pfaff wrote:
>
>> Hi Bryan, I think that you understand how QoS works in NVP.  We're
>> currently talking about how to implement QoS in OVN.  Can you help us
>> understand the issues?
>>
>> ...now back to the conversation already in progress:
>>
>> On Tue, May 10, 2016 at 05:04:06PM +0530, Babu Shanmugam wrote:
>>
>>> On Friday 06 May 2016 10:33 PM, Ben Pfaff wrote:
>>>
>>>> But I'm still having trouble understanding the whole design here.
>>>> Without this patch, OVN applies ingress policing to packets received
>>>> from (typically) a VM.  This limits the rate at which the VM can
>>>> introduce packets into the network, and thus acts as a direct (if
>>>> primitive) way to limit the VM's bandwidth resource usage on the
>>>> machine's physical interface.
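>>>>
>>>> (Today that is the per-interface policing knobs; with made-up numbers,
>>>> something along the lines of:
>>>>
>>>>     ovs-vsctl set interface tap0 ingress_policing_rate=10000 \
>>>>         ingress_policing_burst=1000
>>>>
>>>> where the rate is in kbps and the burst in kb.)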
>>>>
>>>> With this patch, OVN applies shaping to packets *sent* to (typically) a
>>>> VM.  This limits the rate at which the VM can consume packets *from* the
>>>> network.  This has no direct effect on the VM's consumption of bandwidth
>>>> resources on the network, because the packets that are discarded have
>>>> already consumed RX resources on the machine's physical interface and
>>>> there is in fact no direct way to prevent remote machines from sending
>>>> packets for the local machine to receive.  It might have an indirect
>>>> effect on the VM's bandwidth consumption, since remote senders using
>>>> (e.g.) TCP will notice that their packets were dropped and reduce their
>>>> sending rate, but it's far less efficient at it than shaping packets
>>>> going out to the network.
>>>>
>>>> The design I expected to see in OVN, eventually, was this:
>>>>
>>>>          - Each VM/VIF gets assigned a queue.  Packets received from the
>>>>            VM are tagged with the queue using an OpenFlow "set_queue"
>>>>            action (dunno if we have this as an OVN logical action yet but
>>>>            it's easily added).
>>>>
>>>>          - OVN programs the machine's physical interface with a linux-htb
>>>>            or linux-hfsc qdisc that grants some min-rate to each
>>>>            queue.
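>>>>
>>>>    Concretely, something in the spirit of the following (names, queue
>>>>    ids, and rates invented for illustration; OVN would drive the
>>>>    equivalent through its databases rather than these commands):
>>>>
>>>>        # One min-rate queue per VIF on the physical interface:
>>>>        ovs-vsctl set port eth0 qos=@qos -- \
>>>>          --id=@qos create qos type=linux-htb queues:1=@q1 queues:2=@q2 -- \
>>>>          --id=@q1 create queue other-config:min-rate=100000000 -- \
>>>>          --id=@q2 create queue other-config:min-rate=200000000
>>>>
>>>>        # Tag packets from each VIF with its queue:
>>>>        ovs-ofctl add-flow br-int "in_port=11 actions=set_queue:1,normal"
>>>>        ovs-ofctl add-flow br-int "in_port=12 actions=set_queue:2,normal"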
>>>>
>>> From what I understand, to set up egress shaping for a VIF interface:
>>> - We need a physical interface attached to br-int.
>>>
>> It doesn't have to be attached to br-int, because queuing information is
>> preserved over a hop from bridge to bridge and through encapsulation in
>> tunnels, but OVN would have to configure queues on the interface
>> regardless of what bridge it was in.
>>
>>> - The QoS and Queue tables have to be set up for the port entry that
>>> corresponds to the physical interface.
>>> - Packets received from the VIF are put in these queues using set_queue.
>>> Is my understanding correct?
>>>
>> Yes, I believe so.
>>
>>> Is there any way that HTB/HFSC queues can work without a physical
>>> interface attached to br-int? If not, are we going to mandate in some way
>>> that a physical interface has to be attached to br-int?
>>>
>> I don't think it's desirable or necessary to attach the physical
>> interface to br-int, only to ensure that the queues are configured on
>> it.
>>
>> Bryan, what does NVP do?  In particular, does it configure queues on a
>> physical interface without caring what bridge it is attached to?
>>
>> Thanks,
>>
>> Ben.
>>