On 26.06.2017 00:52, Bodireddy, Bhanuprakash wrote:
>>> +
>>> +/* Flush the txq if there are any packets available.
>>> + * dynamic_txqs/concurrent_txq is disabled for vHost User ports as
>>> + * 'OVS_VHOST_MAX_QUEUE_NUM' txqs are preallocated.
>>> + */
>>
>> This comment is completely untrue. You may ignore 'concurrent_txq'
>> because you *must* lock the queue ...
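To make the locking point concrete, here is a minimal hypothetical sketch of a flush routine that honours 'concurrent_txq'. The names (dpdk_tx_queue, tx_burst, txq_flush) and the buffer layout are invented for illustration; this is not the code under review.

/* Illustrative sketch: flushing an intermediate tx queue.  When a txq can
 * be shared between threads (e.g. XPS maps several PMD threads onto one
 * txq), the per-queue lock must be taken around the flush; it cannot be
 * skipped just because the txqs were preallocated. */
#include <stdbool.h>
#include <pthread.h>

struct dpdk_tx_queue {
    pthread_spinlock_t lock;   /* Protects 'pkts' and 'count'. */
    void *pkts[32];            /* Packets buffered for later transmit. */
    int count;                 /* Number of buffered packets. */
};

/* Stub for the actual burst-send to the NIC; pretends all were sent. */
static int
tx_burst(int qid, void **pkts, int cnt)
{
    (void) qid;
    (void) pkts;
    return cnt;
}

static void
txq_flush(struct dpdk_tx_queue *txq, int qid, bool concurrent_txq)
{
    if (concurrent_txq) {
        pthread_spin_lock(&txq->lock);
    }
    if (txq->count > 0) {
        tx_burst(qid, txq->pkts, txq->count);
        txq->count = 0;
    }
    if (concurrent_txq) {
        pthread_spin_unlock(&txq->lock);
    }
}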
With this change and CFS in effect, does it effectively mean that the DPDK
control threads need to run on different cores than the PMD threads, or
else their response latency may be too long for their control work?
Have we tested having the control threads on the same CPU with nice -20
for the PMD thread?
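A self-contained sketch of one way to run that experiment (my own assumed setup, not an existing OVS test): pin a busy-polling "PMD-like" thread at nice -20 and a sleepy "control-like" thread to the same CPU, then watch the control thread's wakeup latency. Linux-specific; the negative nice needs root or CAP_SYS_NICE.

/* Sketch: measure control-thread scheduling latency when it shares a CPU
 * with a busy-polling thread at nice -20.  Build with -lpthread. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <pthread.h>
#include <sys/resource.h>
#include <time.h>

static void
pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

static void *
busy_pmd_like(void *arg)
{
    (void) arg;
    pin_to_cpu0();
    /* On Linux, setpriority(PRIO_PROCESS, 0, ...) applies to the calling
     * thread.  Requires CAP_SYS_NICE or root. */
    setpriority(PRIO_PROCESS, 0, -20);
    for (;;) {
        /* Busy poll, like a PMD thread. */
    }
    return NULL;
}

static void *
control_like(void *arg)
{
    (void) arg;
    pin_to_cpu0();
    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (int i = 0; i < 100; i++) {
        struct timespec req = { 0, 1000000 };   /* Ask for a 1 ms nap. */
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long delta_us = (now.tv_sec - prev.tv_sec) * 1000000
                        + (now.tv_nsec - prev.tv_nsec) / 1000;
        printf("iteration %d: %ld us since last wakeup\n", i, delta_us);
        prev = now;
    }
    return NULL;
}

int
main(void)
{
    pthread_t pmd, ctrl;
    pthread_create(&pmd, NULL, busy_pmd_like, NULL);
    pthread_create(&ctrl, NULL, control_like, NULL);
    pthread_join(ctrl, NULL);   /* Exiting main() tears down the busy thread. */
    return 0;
}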
The current QoS function is used for the Geneve tunnel to control
traffic leaving OVS, and there is no to-port QoS control.
This patch makes the following modifications:
1. Change the QoS configuration to carry a direction, consistent with
the Neutron QoS rules: add qos_ingress_max_rate, qos_ingress_burst,
qos_egress_max_rate, and qos_egress_burst.
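For context on what a max-rate/burst pair means, rate limiters of this kind are commonly implemented as token buckets. The sketch below is a generic illustration under that assumption, not code from this patch; 'struct token_bucket' and 'token_bucket_admit' are invented names.

/* Generic token-bucket sketch for a max_rate/burst pair, as used by
 * typical ingress/egress policers.  Illustrative only. */
#include <stdbool.h>
#include <stdint.h>

struct token_bucket {
    uint64_t rate_bps;     /* Fill rate: qos_*_max_rate, in bits/sec. */
    uint64_t burst_bits;   /* Bucket depth: qos_*_burst, in bits. */
    uint64_t tokens;       /* Currently available tokens, in bits. */
    uint64_t last_fill_ns; /* Timestamp of the last refill. */
};

/* Returns true if a packet of 'pkt_bits' may pass right now. */
static bool
token_bucket_admit(struct token_bucket *tb, uint64_t now_ns,
                   uint64_t pkt_bits)
{
    uint64_t elapsed_ns = now_ns - tb->last_fill_ns;

    tb->tokens += elapsed_ns * tb->rate_bps / 1000000000;
    if (tb->tokens > tb->burst_bits) {
        tb->tokens = tb->burst_bits;    /* Burst caps the accumulation. */
    }
    tb->last_fill_ns = now_ns;

    if (tb->tokens >= pkt_bits) {
        tb->tokens -= pkt_bits;
        return true;                    /* Within rate: forward. */
    }
    return false;                       /* Over rate: drop or queue. */
}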
Hi Mark,
>>
>>Clang reports that array access through the 'dumps' variable results
>>in a null pointer dereference.
>>
>>Signed-off-by: Bhanuprakash Bodireddy
>>
>
>Hi Bhanu,
>
>LGTM - I also compiled this with gcc, clang, and sparse without issue.
>Checkpatch reports no obvious problems either.
>
>Acked-by:
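The patch itself is not shown here; as a generic illustration of the class of defect clang's analyzer reports, indexing through a possibly-NULL pointer after a failed allocation looks like the following. This is hypothetical code, not the OVS source.

/* Hypothetical illustration: indexing an array through a pointer that
 * may be NULL.  Not the actual OVS code. */
#include <stdlib.h>

struct dump;

static int
process_dumps(size_t n)
{
    struct dump **dumps = calloc(n, sizeof *dumps);

    if (!dumps) {          /* Without this check, dumps[i] below is the
                            * null-pointer dereference clang reports. */
        return -1;
    }
    for (size_t i = 0; i < n; i++) {
        dumps[i] = NULL;   /* ... fill in and use each slot ... */
    }
    free(dumps);
    return 0;
}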
>-----Original Message-----
>From: nickcooper-zhangtonghao [mailto:n...@opencloud.tech]
>Sent: Friday, June 23, 2017 4:00 PM
>To: Bodireddy, Bhanuprakash
>Cc: d...@openvswitch.org
>Subject: Re: [ovs-dev] [PATCH v9] netdev-dpdk: Increase pmd thread priority.
>
>If we fail to set the priority, we should ...
Increase the DPDK pmd thread scheduling priority by lowering the nice
value. This will advise the kernel scheduler to prioritize the pmd
thread over other processes and will help the PMD to provide
deterministic performance in out-of-the-box deployments.
This patch sets the nice value of PMD threads to '-20'.
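A minimal sketch of the mechanism described above, assuming a Linux host, where setpriority() with a 'who' of 0 applies to the calling thread; 'pmd_thread_set_priority' is an invented name, not the function from the patch.

/* Sketch: lower the calling thread's nice value (Linux).  Requires
 * CAP_SYS_NICE or root for negative values. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

static void
pmd_thread_set_priority(int nice_val)
{
    errno = 0;
    if (setpriority(PRIO_PROCESS, 0, nice_val) == -1) {
        /* E.g. EACCES without CAP_SYS_NICE: warn and keep running. */
        fprintf(stderr, "failed to set thread nice to %d: %s\n",
                nice_val, strerror(errno));
    }
}

A PMD thread would call pmd_thread_set_priority(-20) once at startup; on failure it just logs and continues, which is one possible answer to the question raised above about what to do when setting the priority fails.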
On 23/06/2017 19:52, Darrell Ball wrote:
Hardware offload introduced extra tracking of netdev ports. This
included ovs-netdev, which is really for internal infra usage by
the userspace datapath. This breaks cleanup of the userspace
datapath. One effect is that all userspace datapath system tests ...