On Tue, Jun 4, 2024 at 10:29 AM David Marchand
<[email protected]> wrote:
>
> On Wed, Feb 21, 2024 at 5:09 AM Mike Pattrick <[email protected]> wrote:
> > @@ -105,20 +115,35 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
> >          return false;
> >      }
> >
> > -    tcp_hdr = dp_packet_l4(p);
> > -    tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> > -    tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> > -    hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
> > -              + tcp_offset * 4;
> > -    ip_id = 0;
> > -    if (dp_packet_hwol_is_ipv4(p)) {
> > +    if (dp_packet_hwol_is_tunnel_vxlan(p) ||
> > +            dp_packet_hwol_is_tunnel_geneve(p)) {
> > +        data_pos =  dp_packet_get_inner_tcp_payload(p);
> > +        outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
> > +        tcp_hdr = dp_packet_inner_l4(p);
> > +        ip_hdr = dp_packet_inner_l3(p);
> > +        tnl = true;
> > +        if (outer_ipv4) {
> > +            ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
> > +        } else if (dp_packet_hwol_is_ipv4(p)) {
> > +            ip_id = ntohs(ip_hdr->ip_id);
> > +        }
> > +    } else {
> > +        data_pos = dp_packet_get_tcp_payload(p);
> > +        outer_ipv4 = dp_packet_hwol_is_ipv4(p);
> > +        tcp_hdr = dp_packet_l4(p);
> >          ip_hdr = dp_packet_l3(p);
> > -        ip_id = ntohs(ip_hdr->ip_id);
> > +        tnl = false;
> > +        if (outer_ipv4) {
> > +            ip_id = ntohs(ip_hdr->ip_id);
> > +        }
> >      }
> >
> > +    tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> > +    tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> > +    hdr_len = ((char *) tcp_hdr - (char *) dp_packet_eth(p))
> > +              + tcp_offset * 4;
> >      const char *data_tail = (char *) dp_packet_tail(p)
> >                              - dp_packet_l2_pad_size(p);
> > -    const char *data_pos = dp_packet_get_tcp_payload(p);
> >      int n_segs = dp_packet_gso_nr_segs(p);
> >
> >      for (int i = 0; i < n_segs; i++) {
> > @@ -130,8 +155,26 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
> >          seg = dp_packet_gso_seg_new(p, hdr_len, data_pos, seg_len);
> >          data_pos += seg_len;
> >
> > +        if (tnl) {
> > +            /* Update tunnel L3 header. */
> > +            if (dp_packet_hwol_is_ipv4(seg)) {
> > +                ip_hdr = dp_packet_inner_l3(seg);
> > +                ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
> > +                                           dp_packet_inner_l4_size(seg));
> > +                ip_hdr->ip_id = htons(ip_id);
> > +                ip_hdr->ip_csum = 0;
> > +                ip_id++;
>
> Hmm, it seems that with this change we are tying the outer and inner
> IP IDs together (in the IPv4-in-IPv4 case).
> I am unclear on what the Linux kernel does or what is acceptable, but
> I prefer to mention it.

The Linux kernel does +1 as well. The sending OS should be
sufficiently randomizing IP IDs, so this shouldn't be an issue. If
that isn't happening, then at least we aren't making things worse.

> I also noticed ip_id is incremented a second time later in this loop
> which I find suspicious.
>
> I wonder if OVS should instead increment outer and inner ip ids
> (copied from original packet data) by 'i'.
> Like ip_hdr->ip_id = htons(ntohs(ip_hdr->ip_id) + i) ?

I thought the current solution was simpler: we rely on the sending
host to pick a random initial value, and we just need some way to
avoid picking a value that overlaps with another fragment in flight
between the two hosts. That said, keeping them separate shouldn't
hurt anything; I can make that change.

> And adjusting the outer udp seems missing (length and checksum if
> needed), which is probably what Ilya meant.

Yes, I missed this.

Thanks,
M

>
>
> --
> David Marchand
>

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev