Just a note for completeness, though this issue is handled with the
snippet I proposed in this same thread.

On Fri, Jul 1, 2022 at 5:58 AM Mike Pattrick <[email protected]> wrote:
> @@ -5024,6 +5052,7 @@ netdev_dpdk_vhost_reconfigure(struct netdev *netdev)
>      int err;
>
>      ovs_mutex_lock(&dev->mutex);
> +    netdev_dpdk_update_netdev_flags(dev);
>      err = dpdk_vhost_reconfigure_helper(dev);

Here...

>      ovs_mutex_unlock(&dev->mutex);
>
> @@ -5088,19 +5117,22 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev 
> *netdev)
>              goto unlock;
>          }
>
> +        vhost_unsup_flags = 1ULL << VIRTIO_NET_F_HOST_ECN
> +                            | 1ULL << VIRTIO_NET_F_HOST_UFO;
> +
> +        dev->hw_ol_features |= NETDEV_TX_IPV4_CKSUM_OFFLOAD;
> +        dev->hw_ol_features |= NETDEV_TX_TCP_CKSUM_OFFLOAD;
> +        dev->hw_ol_features |= NETDEV_TX_UDP_CKSUM_OFFLOAD;
> +        dev->hw_ol_features |= NETDEV_TX_SCTP_CKSUM_OFFLOAD;

... and here, we were missing a call to
netdev_dpdk_update_netdev_flags(), *after* touching hw_ol_features.

This explains why I was always seeing the flags as off for vhost ports:
# ovs-appctl dpif-netdev/offload-show | grep vhost0
vhost0: ip_csum: off, tcp_csum: off, udp_csum: off, sctp_csum: off, tso: off

And this is noticeable in an inter-VM setup when testing TSO, where
segmentation was done in OVS.

Before, I would reach 6 Gb/s between VMs.
After setting the flags correctly, I get 18 Gb/s.


-- 
David Marchand

