On Thu, Jul 4, 2024 at 7:55 AM Mike Pattrick <[email protected]> wrote:
>
> When sending packets that are flagged as requiring segmentation to an
> interface that doens't support this feature, send the packet to the TSO

does not*

> software fallback instead of dropping it.
>
> Signed-off-by: Mike Pattrick <[email protected]>

I did not test all the combinations yet (busy on the DPDK side).
In any case, I don't expect much difference in behavior since v2.

I see you intend to send a new revision, so here are some small nits.

>
> ---
> v2:
>  - Fixed udp tunnel length
>  - Added test that UDP headers are correct
>  - Split inner and outer ip_id into different counters
>  - Set tunnel flags in reset_tcp_seg
>
> v3:
>  - Changed logic of netdev_send() to account for NICs that support
>  tunnel offload but not checksum offload
>  - Adjusted udp tunnel header length during software tso
> ---
>  lib/dp-packet-gso.c     | 87 +++++++++++++++++++++++++++++++++--------
>  lib/dp-packet.h         | 34 ++++++++++++++++
>  lib/netdev-native-tnl.c |  8 ++++
>  lib/netdev.c            | 44 ++++++++++-----------
>  tests/system-traffic.at | 87 +++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 221 insertions(+), 39 deletions(-)
>
> diff --git a/lib/dp-packet-gso.c b/lib/dp-packet-gso.c
> index 847685ad9..4ac16b7d5 100644
> --- a/lib/dp-packet-gso.c
> +++ b/lib/dp-packet-gso.c

[snip]

> @@ -105,20 +115,37 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>          return false;
>      }
>
> -    tcp_hdr = dp_packet_l4(p);
> -    tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> -    tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> -    hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
> -              + tcp_offset * 4;
> -    ip_id = 0;
> -    if (dp_packet_hwol_is_ipv4(p)) {
> +    if (dp_packet_hwol_is_tunnel_vxlan(p) ||
> +            dp_packet_hwol_is_tunnel_geneve(p)) {

Nit: indent seems wrong ^^.

> +        outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
> +        tcp_hdr = dp_packet_inner_l4(p);
> +        ip_hdr = dp_packet_inner_l3(p);

Nit: previously, ip_hdr (which is declared as struct ip_header *) was
initialised only for IPv4.
With this patch, it is set regardless of what dp_packet_inner_l3()
points at.
The only use of ip_hdr is extracting ip_id, so I would move the ip_hdr
assignment into the branches where it matters:

@@ -119,22 +119,22 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
             dp_packet_hwol_is_tunnel_geneve(p)) {
         outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
         tcp_hdr = dp_packet_inner_l4(p);
-        ip_hdr = dp_packet_inner_l3(p);
         tnl = true;

         if (outer_ipv4) {
             outer_ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
         }
         if (dp_packet_hwol_is_ipv4(p)) {
+            ip_hdr = dp_packet_inner_l3(p);
             inner_ip_id = ntohs(ip_hdr->ip_id);
         }
     } else {
         outer_ipv4 = dp_packet_hwol_is_ipv4(p);
         tcp_hdr = dp_packet_l4(p);
-        ip_hdr = dp_packet_l3(p);
         tnl = false;

         if (outer_ipv4) {
+            ip_hdr = dp_packet_l3(p);

             outer_ip_id = ntohs(ip_hdr->ip_id);
         }
     }


> +        tnl = true;
> +
> +        if (outer_ipv4) {
> +            outer_ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
> +        }
> +        if (dp_packet_hwol_is_ipv4(p)) {
> +            inner_ip_id = ntohs(ip_hdr->ip_id);
> +        }
> +    } else {
> +        outer_ipv4 = dp_packet_hwol_is_ipv4(p);
> +        tcp_hdr = dp_packet_l4(p);
>          ip_hdr = dp_packet_l3(p);
> -        ip_id = ntohs(ip_hdr->ip_id);
> +        tnl = false;
> +
> +        if (outer_ipv4) {
> +            outer_ip_id = ntohs(ip_hdr->ip_id);
> +        }
>      }
>

[snip]


Once this patch is in, we can re-enable
RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO / RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
for Intel drivers.
Outer UDP checksum is broken for them (until we get a new 23.11.x
release with the driver fixes) but, from my previous tests, the tunnel
TSO feature works fine with net/ice at least, when the (tunnel)
checksum is not requested.


-- 
David Marchand
