On 11/25/2016 09:41 AM, Maxime Coquelin wrote:
> 
> 
> On 11/24/2016 08:45 PM, Kevin Traynor wrote:
>> On 11/24/2016 03:34 PM, Loftus, Ciara wrote:
>>>>
>>>>
>>>> On 11/24/2016 03:59 PM, Loftus, Ciara wrote:
>>>>>>
>>>>>> On 11/24/2016 03:20 PM, Ciara Loftus wrote:
>>>>>>> This commit announces support for DPDK 16.11. Compatibility
>>>>>>> with DPDK v16.07 is not yet broken, since only minor code
>>>>>>> changes are needed for the upgrade.
>>>>>>>
>>>>>>> Signed-off-by: Ciara Loftus <[email protected]>
>>>>>>> ---
>>>>>>>  .travis/linux-build.sh    |  2 +-
>>>>>>>  INSTALL.DPDK-ADVANCED.rst |  4 ++--
>>>>>>>  INSTALL.DPDK.rst          | 18 +++++++++---------
>>>>>>>  NEWS                      |  1 +
>>>>>>>  lib/netdev-dpdk.c         |  3 ++-
>>>>>>>  5 files changed, 15 insertions(+), 13 deletions(-)
>>>>>> <snip />
>>>>>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>>>>>> index de78ddd..7564ad7 100644
>>>>>>> --- a/lib/netdev-dpdk.c
>>>>>>> +++ b/lib/netdev-dpdk.c
>>>>>>> @@ -2593,7 +2593,8 @@ netdev_dpdk_vhost_class_init(void)
>>>>>>>          rte_vhost_driver_callback_register(&virtio_net_device_ops);
>>>>>>>          rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
>>>>>>>                                    | 1ULL << VIRTIO_NET_F_HOST_TSO6
>>>>>>> -                                  | 1ULL << VIRTIO_NET_F_CSUM);
>>>>>>> +                                  | 1ULL << VIRTIO_NET_F_CSUM
>>>>>>> +                                  | 1ULL << VIRTIO_RING_F_INDIRECT_DESC);
>>>>>>>          ovs_thread_create("vhost_thread", start_vhost_loop, NULL);
>>>>>>>
>>>>>>>          ovsthread_once_done(&once);
>>>>>>>
>>>>>> Any reason you disable indirect descs?
>>>>>> Did you measure performance degradation with it?
>>>>>
>>>>> I measured a slight decrease, ~6%, for the use case I tested.
>>>> OK, could you provide more details about this use case?
>>>> I'm interested to know which cases do (and don't) benefit from
>>>> this feature.
>>>
>>> Single flow 64B packets unidirectional loopback test (dpdk -> vhost,
>>> vhost -> dpdk) with testpmd forwarding between vhost ports in the VM.
>>> Mergeable buffers disabled.
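>>> (Roughly, inside the VM: testpmd in interactive mode on the two
>>> virtio devices, then "set fwd io" and "start" at its prompt; exact
>>> EAL arguments omitted here.)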
>>>
>>>>
>>>>> Either way, if it is to be enabled, it should be enabled in a
>>>>> separate patch.
>>>> Well, I'm not sure.
>>>> I think this is not a good way to disable features, because once
>>>> OVS is packaged, we cannot enable it anymore.
>>>
>>> My reason for not enabling it in this patch (other than the
>>> degradation) is that the purpose of this patch is the DPDK upgrade.
>>> We could have a second patch, "enable indirect descriptors for
>>> vHost" for example, as suggested by Ilya previously:
>>> https://mail.openvswitch.org/pipermail/ovs-dev/2016-October/324586.html
>>
>> +1. I guess it might hold up merging of the upgrade patch and the
>> docs updates if there's still ongoing debate about whether to enable
>> indirect descs.
>>
>>>
>>>>
>>>> It should be the role of the management tool to disable it by
>>>> default, by configuring the QEMU parameters properly if needed
>>>> (indirect_desc=off). This way, it could still be enabled without
>>>> rebuilding the package if it benefits some use cases.
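>>>>
>>>> For example, something along these lines on the QEMU command line
>>>> (the netdev id "net0" here is illustrative):
>>>>
>>>>     -device virtio-net-pci,netdev=net0,indirect_desc=off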
>>>>
>>>> I would think the same applies to the other features that are
>>>> disabled, as long as they don't break functionality. If a feature
>>>> does break functionality, it should be reported so that it gets
>>>> fixed.
>>>>
>>>> Moreover, we are currently looking at cross-version migration for
>>>> vhost, and rte_vhost_feature_disable is problematic, because it
>>>> prevents us from reliably exporting, at build time, the virtio
>>>> features supported by a given DPDK version.
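>>>>
>>>> A minimal sketch of why (placeholder feature bits, not the actual
>>>> DPDK internals):
>>>>
>>>>     #include <stdint.h>
>>>>
>>>>     /* Illustrative feature bits, not the real virtio values. */
>>>>     #define F_INDIRECT_DESC  (1ULL << 0)
>>>>     #define F_MRG_RXBUF      (1ULL << 1)
>>>>
>>>>     /* What this library version supports, known at build time. */
>>>>     #define SUPPORTED_FEATURES (F_INDIRECT_DESC | F_MRG_RXBUF)
>>>>
>>>>     static uint64_t enabled_features = SUPPORTED_FEATURES;
>>>>
>>>>     /* The rte_vhost_feature_disable() pattern: the application
>>>>      * mutates the mask at run time, so the negotiated feature set
>>>>      * can no longer be derived from the library version alone,
>>>>      * which is what cross-version migration would need. */
>>>>     static void feature_disable(uint64_t mask)
>>>>     {
>>>>         enabled_features &= ~mask;
>>>>     }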
>>>
>>> I see your reasoning too. I'd like to hear input from other folks
>>> in the community before I respin.
>>
>> Are there any tests where having indirect descriptors improves
>> performance for OVS, e.g. 0% pkt loss tests, or larger pkt sizes?
> In my tests (DPDK only, no OVS), it improves 0% pkt loss results.
> Currently, 2 descs are used per packet, so using indirect descs
> effectively "doubles" the queue size and better absorbs glitches.
> The downside is that it is less cache efficient.
> 
> In the case of large packets using more than one descriptor, the
> same benefit should be seen. I couldn't test it though, as it would
> require a 40G NIC, which I don't have currently.
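> 
> As a rough sketch of the queue-capacity arithmetic above (ring size
> and descs-per-packet values are illustrative):
> 
>     #include <stdio.h>
> 
>     int main(void)
>     {
>         const unsigned ring_size = 256;   /* virtio queue entries */
>         const unsigned descs_per_pkt = 2; /* hdr + data, mrg off */
> 
>         /* Direct descs: each packet consumes descs_per_pkt slots. */
>         unsigned direct_pkts = ring_size / descs_per_pkt;
> 
>         /* Indirect descs: one ring slot per packet, pointing to a
>          * side table that holds all of the packet's descriptors. */
>         unsigned indirect_pkts = ring_size;
> 
>         printf("direct: %u pkts in flight, indirect: %u\n",
>                direct_pkts, indirect_pkts);
>         return 0;
>     }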
> 
>>
>> If we're pretty sure that the virtio feature-disable calls will be
>> deprecated in favour of a nice feature like cross-version migration,
>> it might be better to take the pain of a small perf drop, or of the
>> user having to modify their config, now rather than at a later
>> stage. We already have a case with mrg buffers where the user can
>> change their config from the default to get better performance.
> I haven't looked closely; are mrg buffers disabled on the QEMU
> cmdline currently when using OVS?

OVS doesn't dictate that. The tuning docs recommend that if you don't
need mrg buffers, you turn them off at the QEMU cmdline for improved
performance.
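
For example (the netdev id "net0" here is illustrative):

    -device virtio-net-pci,netdev=net0,mrg_rxbuf=off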

> 
> Thanks,
> Maxime
>>
>>>
>>> Thanks,
>>> Ciara
>>>
>>>>
>>>> Cheers,
>>>> Maxime
>>
