On 11/22/21 14:37, Eli Britstein via dev wrote:
> 
> On 11/22/2021 3:19 PM, Sriharsha Basavapatna wrote:
>> Hi Eli,
>>
>> On Sun, Nov 21, 2021 at 12:03 PM Eli Britstein via dev
>> <[email protected]> wrote:
>>> Hi Harsha,
>>>
>>> It's a clever idea, though it has some problems in the implementation. PSB.
>> Thanks, please see my response below.
>>>
>>> On 11/20/2021 11:20 AM, Sriharsha Basavapatna wrote:
>>>> The hw_miss_packet_recover() API results in performance degradation, for
>>>> ports that are either not offload capable or do not support this specific
>>>> offload API.
>>>>
>>>> For example, in the test configuration shown below, the vhost-user port
>>>> does not support offloads and the VF port doesn't support hw_miss offload
>>>> API. But because tunnel offload needs to be configured in other bridges
>>>> (br-vxlan and br-phy), OVS has been built with -DALLOW_EXPERIMENTAL_API.
>>>>
>>>>       br-vhost            br-vxlan            br-phy
>>>> vhost-user<-->VF    VF-Rep<-->VxLAN       uplink-port
>>>>
>>>> For every packet between the VF and the vhost-user ports, hw_miss API is
>>>> called even though it is not supported by the ports involved. This leads
>>>> to significant performance drop (~3x in some cases; both cycles and pps).
>>>>
>>>> To fix this, return EOPNOTSUPP when this API fails for a device that
>>> "To fix" -> "To improve"
>>>> doesn't support it and avoid this API on that port for subsequent packets.
>>>>
>>>> Signed-off-by: Sriharsha Basavapatna <[email protected]>
>>>> ---
>>>>    lib/dpif-netdev-private.h |  2 +-
>>>>    lib/dpif-netdev.c         | 29 +++++++++++++++++++++--------
>>>>    lib/netdev-offload-dpdk.c |  9 +++++++--
>>>>    3 files changed, 29 insertions(+), 11 deletions(-)
>>>>

<snip>

>>>> diff --git a/lib/netdev-offload-dpdk.c b/lib/netdev-offload-dpdk.c
>>>> index 9fee7570a..8ddddbd2e 100644
>>>> --- a/lib/netdev-offload-dpdk.c
>>>> +++ b/lib/netdev-offload-dpdk.c
>>>> @@ -2292,11 +2292,16 @@ netdev_offload_dpdk_hw_miss_packet_recover(struct netdev *netdev,
>>>>        odp_port_t vport_odp;
>>>>        int ret = 0;
>>>>
>>>> -    if (netdev_dpdk_rte_flow_get_restore_info(netdev, packet,
>>>> -                                              &rte_restore_info, NULL)) {
>>>> +    ret = netdev_dpdk_rte_flow_get_restore_info(netdev, packet,
>>>> +                                                &rte_restore_info, NULL);
>>>> +    if (ret) {
>>>>            /* This function is called for every packet, and in most cases there
>>>>             * will be no restore info from the HW, thus error is expected.
>>>>             */
>>>> +        VLOG_DBG_RL(&rl, "flow_get_restore_info failed: %d\n", ret);
>>> It is likely that most of the packets will have an "error" when calling
>>> this API, if there is nothing to get. See the comment. There was no
>>> print on purpose.
>> I added it for debugging, will remove.
>>>> +        if (ret == -EOPNOTSUPP) {
>>> How can we guarantee EOPNOTSUPP means the API is indeed not supported,
>>> and not that the PMD just returned it for this packet only?
>> PMDs must return some other errors for per-packet issues (e.g. EINVAL
>> for an invalid packet format, etc.). EOPNOTSUPP indicates lack of API
>> support. It is already being used to indicate that, as seen in
>> netdev-offload.c::hw_miss_packet_recover(). Similarly, a PMD that
>> doesn't support this API should return EOPNOTSUPP.
> 
> OK. Maybe it's better, then, to push a comment in DPDK to clarify this.
> Currently it just says:
> 
>  *   0 on success, a negative errno value otherwise and rte_errno is set.

+1
The clarification in DPDK docs would be great.
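
Something along the lines below in rte_flow.h would make the contract
explicit (the wording is only a suggestion, and the prototype is written
from memory, so not necessarily the exact upstream text):

    /**
     * ...
     * @return
     *   0 on success, a negative errno value otherwise and rte_errno is set.
     *   -EOPNOTSUPP should only be returned when the operation itself is
     *   not supported by the port or the PMD, so that the caller may stop
     *   calling this API for subsequent packets on that port.  Per-packet
     *   failures (e.g. no restore info available for this packet) should
     *   use a different error code.
     */
    __rte_experimental
    int
    rte_flow_get_restore_info(uint16_t port_id, struct rte_mbuf *m,
                              struct rte_flow_restore_info *info,
                              struct rte_flow_error *error);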

> 
>>
>>      return (flow_api && flow_api->hw_miss_packet_recover)
>>              ? flow_api->hw_miss_packet_recover(netdev, packet)
>>              : EOPNOTSUPP;
>>> Maybe it is better to have another API to query (might also need changes
>>> in dpdk).
>> I don't see the need here, as explained above.
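
(Just to illustrate the "query" idea: on the OVS side such a helper would
be trivial, e.g. the hypothetical sketch below, but it only tells us
whether the offload provider implements the callback, not whether the PMD
actually supports rte_flow_get_restore_info(), so it indeed wouldn't help
without DPDK changes.)

    /* Hypothetical helper, nothing like this exists today. */
    bool
    netdev_hw_miss_packet_recover_supported(struct netdev *netdev)
    {
        const struct netdev_flow_api *flow_api =
            ovsrcu_get(const struct netdev_flow_api *, &netdev->flow_api);

        return flow_api && flow_api->hw_miss_packet_recover;
    }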

I think the idea of this patch is fine and we should implement the solution
on OVS's side.  However, I also think that we do need the DPDK API change.
For example, the port may actually support HW miss recovery, but the user
doesn't have tunnels configured, or the flows are not offloadable (e.g. they
have a destination on a virtual port).  In these cases OVS will still waste
a lot of time calling the recovery API, going deep inside the DPDK PMD driver
basically for nothing.  So, these cases will experience significant
performance degradation similar to what you see right now for ports that
don't support miss recovery.

Ideally, that should be a separate mbuf flag that the application can check
to decide whether to execute the expensive API call.  I see that the sfc
driver allocates a dynamic mbuf flag for the purpose of the HW miss recover
API, but the mlx5 driver re-uses the same flags as for partial offload, and
I'm not sure if that can be changed.  But something needs to be changed.
It's essential for the application to be able to make a decision on a
per-packet basis without making expensive API calls.
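
Roughly what I mean on the application side (the flag name below is made
up; it would be whatever name the drivers eventually agree on):

    /* Once, at setup time. */
    int offset = rte_mbuf_dynflag_lookup("hypothetical_restore_info", NULL);
    uint64_t restore_flag = (offset >= 0) ? (UINT64_C(1) << offset) : 0;

    /* Per packet, before paying for the expensive call. */
    if (restore_flag && (mbuf->ol_flags & restore_flag)) {
        ret = rte_flow_get_restore_info(port_id, mbuf, &info, NULL);
    }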

For the current patch: As I said, I think the idea is good.  We will need
a slightly different implementation, i.e. the flag should be placed not in
the tx_port structure, but in the dp_netdev_rxq.  That is because we need to
remove the tx port hash lookup.  I just sent a patch for this here:
  
https://patchwork.ozlabs.org/project/openvswitch/patch/[email protected]/
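
Roughly what I have in mind there (the field and function names below are
only illustrative, not taken from that patch):

    /* In struct dp_netdev_rxq: */
    bool hw_miss_api_supported;  /* Cleared once the recover API returns
                                  * EOPNOTSUPP for this port. */

    /* In the rx path, instead of the tx port hash lookup: */
    if (rxq->hw_miss_api_supported) {
        int err = netdev_hw_miss_packet_recover(netdev, packet);

        if (err == EOPNOTSUPP) {
            rxq->hw_miss_api_supported = false;
        }
    }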

Best regards, Ilya Maximets.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
