Thanks for the response, it helps me a lot. This is my first attempt at using
openvswitch and dpdk, so I am quite confused right now. I think I understand your
suggestion about creating 32 vhost ports; earlier I was considering something
similar, but I was not sure if that was the right way to do it ...
Message: 1
Date: Thu, 23 Aug 2018 11:55:08 +0300
From: Ilya Maximets <[email protected]>
To: [email protected], amit sehas <[email protected]>
Subject: Re: [ovs-dev] dpdk VIRTIO driver with multiple queues in Openvswitch

> I have a host running ubuntu 16.04 xenial and several docker containers in it 
> running the same OS image (ubuntu 16.04). I am utilizing openvswitch on the 
> host.  I have 32 queues per port in the application.  I am able to add queues 
> in openvswitch as follows:
> ovs-vsctl set Interface vhost-user4 options:n_rxq=32
> ovs-vsctl set Interface vhost-user4 options:n_txq=32
> But I am not able to figure out how to add flows that will direct traffic to 
> specific queues. So, for example, traffic should go from queue0 to queue0 and 
> from queue30 to queue30, and so on, for each of the ports in the switch?
> 
> has anyone tried to make multiple queues work with VIRTIO utilizing 
> openvswitch?
> the add-flow command in ovs-ofctl doesn't seem to match on the input queue 
> number, but it does let you enqueue to an output queue .. also I am not using 
> qemu and am not planning to do so either ...
> any suggestions?
> thanks

You're mixing up the "hardware" rx/tx queues and the logical queues that are
usually used for QoS (rate limiting and so on). ovs-ofctl works with QoS queues
like this:
    http://docs.openvswitch.org/en/latest/faq/qos/
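
For example, a QoS queue setup looks roughly like this (untested sketch; the
bridge name "br0", the OpenFlow port number, the rates and the QoS type are
only placeholders, and the right QoS type depends on your port/datapath, see
the FAQ above):

    ovs-vsctl set port vhost-user4 qos=@newqos -- \
        --id=@newqos create qos type=linux-htb queues:0=@q0 queues:1=@q1 -- \
        --id=@q0 create queue other-config:max-rate=100000000 -- \
        --id=@q1 create queue other-config:max-rate=200000000
    ovs-ofctl add-flow br0 "in_port=2,actions=set_queue:1,normal"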

If you want to direct traffic between "hardware" queues, like virtio rings or the
real hardware queues of physical NICs, then I'm afraid that it's impossible.
Packets are distributed between rx queues by the hardware/virtio based on RSS or
some other algorithm. For performance reasons, OVS will use one TX queue per port
for each PMD thread if possible.
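
What you can control is which PMD thread polls each rx queue, e.g. (the port
name and the "queue:core" pairs below are only an illustration):

    ovs-vsctl set Interface vhost-user4 other_config:pmd-rxq-affinity="0:3,1:7"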

Also, "options:n_rxq=32" and "options:n_txq=32" has no effect for vhost
interfaces. The number of queues will be taken from virtio device (qemu,
virtio-user). OVS has no ability to change that, because it has no control on
memory allocated by QEMU or virtio-user.
Anyway, "options:n_txq" has effect only for dummy interfaces. For all other
types, number of transmit queues controlled by OVS automatically. 
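
For comparison, "options:n_rxq" does take effect for physical DPDK ports, e.g.
(the port name here is just an example):

    ovs-vsctl set Interface dpdk0 options:n_rxq=4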

If you want to achieve your goal, you will have to create 32 vhost ports
and configure appropriate OpenFlow rules.
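
Roughly something like this, repeated for each of the 32 ports (untested sketch;
the bridge name, port names and OpenFlow port numbers are placeholders, check
the real numbers with "ovs-ofctl show br0"):

    ovs-vsctl add-port br0 vhost-user0 -- \
        set Interface vhost-user0 type=dpdkvhostuser
    ovs-vsctl add-port br0 vhost-user1 -- \
        set Interface vhost-user1 type=dpdkvhostuser
    ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
    ovs-ofctl add-flow br0 "in_port=2,actions=output:1"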

P.S. It looks like the documentation about PMD threads is messed up and misleading
    after the recent documentation split-up. Please avoid looking at it, or refer
    to the docs for OVS 2.9.

Best regards, Ilya Maximets.
