Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-06-04 Thread David Marchand
On Wed, Feb 21, 2024 at 5:09 AM Mike Pattrick  wrote:
> @@ -105,20 +115,35 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>  return false;
>  }
>
> -tcp_hdr = dp_packet_l4(p);
> -tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> -tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> -hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
> -  + tcp_offset * 4;
> -ip_id = 0;
> -if (dp_packet_hwol_is_ipv4(p)) {
> +if (dp_packet_hwol_is_tunnel_vxlan(p) ||
> +dp_packet_hwol_is_tunnel_geneve(p)) {
> +data_pos =  dp_packet_get_inner_tcp_payload(p);
> +outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
> +tcp_hdr = dp_packet_inner_l4(p);
> +ip_hdr = dp_packet_inner_l3(p);
> +tnl = true;
> +if (outer_ipv4) {
> +ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
> +} else if (dp_packet_hwol_is_ipv4(p)) {
> +ip_id = ntohs(ip_hdr->ip_id);
> +}
> +} else {
> +data_pos = dp_packet_get_tcp_payload(p);
> +outer_ipv4 = dp_packet_hwol_is_ipv4(p);
> +tcp_hdr = dp_packet_l4(p);
>  ip_hdr = dp_packet_l3(p);
> -ip_id = ntohs(ip_hdr->ip_id);
> +tnl = false;
> +if (outer_ipv4) {
> +ip_id = ntohs(ip_hdr->ip_id);
> +}
>  }
>
> +tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> +tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> +hdr_len = ((char *) tcp_hdr - (char *) dp_packet_eth(p))
> +  + tcp_offset * 4;
>  const char *data_tail = (char *) dp_packet_tail(p)
>  - dp_packet_l2_pad_size(p);
> -const char *data_pos = dp_packet_get_tcp_payload(p);
>  int n_segs = dp_packet_gso_nr_segs(p);
>
>  for (int i = 0; i < n_segs; i++) {
> @@ -130,8 +155,26 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>  seg = dp_packet_gso_seg_new(p, hdr_len, data_pos, seg_len);
>  data_pos += seg_len;
>
> +if (tnl) {
> +/* Update tunnel L3 header. */
> +if (dp_packet_hwol_is_ipv4(seg)) {
> +ip_hdr = dp_packet_inner_l3(seg);
> +ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
> +   dp_packet_inner_l4_size(seg));
> +ip_hdr->ip_id = htons(ip_id);
> +ip_hdr->ip_csum = 0;
> +ip_id++;

Hmm, it seems that with this change we are tying the outer and inner
IP IDs together (in the IPv4-in-IPv4 case).
I am unclear on what the Linux kernel does or on what is acceptable,
but I preferred to mention it.

I also noticed that ip_id is incremented a second time later in this
loop, which I find suspicious.

I wonder if OVS should instead increment the outer and inner IP IDs
(copied from the original packet data) by 'i'.
Something like ip_hdr->ip_id = htons(ntohs(ip_hdr->ip_id) + i)?
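
For illustration, a minimal sketch of that suggestion (not the patch as
posted): since dp_packet_gso_seg_new() copies the original headers into
every segment, each cloned header still carries the original IP ID, so the
per-segment loop (reusing the patch's 'tnl', 'outer_ipv4', 'seg' and 'i')
could simply offset both headers by the segment index:

    /* Sketch only: keep the outer and inner IP IDs independent by
     * offsetting the IDs inherited from the original packet by 'i'. */
    if (tnl && dp_packet_hwol_is_ipv4(seg)) {
        struct ip_header *inner = dp_packet_inner_l3(seg);

        inner->ip_id = htons(ntohs(inner->ip_id) + i);
    }
    if (outer_ipv4) {
        struct ip_header *outer = dp_packet_l3(seg);

        outer->ip_id = htons(ntohs(outer->ip_id) + i);
    }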


And adjusting the outer UDP header (its length, and its checksum if
needed) seems to be missing, which is probably what Ilya meant.
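
A rough sketch of what that outer UDP fixup could look like in the
per-segment loop, reusing struct udp_header, dp_packet_l4() and
dp_packet_l4_size() as used elsewhere in OVS; the checksum handling here
is only indicative:

    if (tnl) {
        struct udp_header *udp_hdr = dp_packet_l4(seg);

        /* The outer UDP length covers the whole outer L4 payload of the
         * segment: UDP header + VXLAN/Geneve header + inner frame. */
        udp_hdr->udp_len = htons(dp_packet_l4_size(seg));
        /* A zero checksum is acceptable for tunnels over IPv4; over IPv6
         * it would need to be recomputed (or left to offload) instead. */
        udp_hdr->udp_csum = 0;
    }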


-- 
David Marchand



Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-03-27 Thread Ilya Maximets
On 3/27/24 22:07, Mike Pattrick wrote:
> On Wed, Mar 27, 2024 at 1:39 PM Simon Horman  wrote:
>>
>> On Tue, Feb 20, 2024 at 11:08:55PM -0500, Mike Pattrick wrote:
>>> When sending packets that are flagged as requiring segmentation to an
>>> interface that doesn't support this feature, send the packet to the TSO
>>> software fallback instead of dropping it.
>>>
>>> Signed-off-by: Mike Pattrick 
>>
>> Hi Mike,
>>
>> Can I confirm that from your PoV this patch is still awaiting review?
>> I ask because it's been sitting around for a while now.
> 
> I believe this patch now needs to be modified for the recent
> recirculation change. I'll update it for that, give it another
> once-over, and resubmit.

While at it, please check that the outer UDP header is correctly
updated.  It likely is not, according to the tests done here:
  https://github.com/openvswitch/ovs-issues/issues/321#issuecomment-2008592610
It seems to keep a length that is larger than the IP payload size.

If so, it would be nice to also have some unit tests for this
functionality, with an actual comparison of the headers of the
different packets.
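
A hypothetical check such a test could perform on every emitted segment
(illustrative only; the helper name is made up, an IPv4 outer header is
assumed, and only existing accessors from packets.h / dp-packet.h are used):

    static bool
    seg_outer_lengths_consistent(struct dp_packet *seg)
    {
        struct ip_header *ip = dp_packet_l3(seg);
        struct udp_header *udp = dp_packet_l4(seg);
        uint16_t ip_payload = ntohs(ip->ip_tot_len)
                              - IP_IHL(ip->ip_ihl_ver) * 4;

        /* The reported issue is the first equality failing: udp_len
         * keeps the pre-segmentation value. */
        return ntohs(udp->udp_len) == ip_payload
               && ntohs(udp->udp_len) == dp_packet_l4_size(seg);
    }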

Best regards, Ilya Maximets.


Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-03-27 Thread Mike Pattrick
On Wed, Mar 27, 2024 at 1:39 PM Simon Horman  wrote:
>
> On Tue, Feb 20, 2024 at 11:08:55PM -0500, Mike Pattrick wrote:
> > When sending packets that are flagged as requiring segmentation to an
> > interface that doesn't support this feature, send the packet to the TSO
> > software fallback instead of dropping it.
> >
> > Signed-off-by: Mike Pattrick 
>
> Hi Mike,
>
> Can I confirm that from your PoV this patch is still awaiting review?
> I ask because it's been sitting around for a while now.

I believe this patch now needs to be modified for the recent
recirculation change. I'll update it for that, give it another
once-over, and resubmit.

-M


>



Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-03-27 Thread Simon Horman
On Tue, Feb 20, 2024 at 11:08:55PM -0500, Mike Pattrick wrote:
> When sending packets that are flagged as requiring segmentation to an
> interface that doesn't support this feature, send the packet to the TSO
> software fallback instead of dropping it.
> 
> Signed-off-by: Mike Pattrick 

Hi Mike,

Can I confirm that from your PoV this patch is still awaiting review?
I ask because it's been sitting around for a while now.


Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-02-21 Thread Mike Pattrick
On Tue, Feb 20, 2024 at 11:09 PM Mike Pattrick  wrote:
>
> When sending packets that are flagged as requiring segmentation to an
> interface that doesn't support this feature, send the packet to the TSO
> software fallback instead of dropping it.
>
> Signed-off-by: Mike Pattrick 

Recheck-request: github-robot

> ---
>  lib/dp-packet-gso.c | 73 +
>  lib/dp-packet.h | 26 +++
>  lib/netdev-native-tnl.c |  8 +
>  lib/netdev.c| 37 +
>  tests/system-traffic.at | 58 
>  5 files changed, 167 insertions(+), 35 deletions(-)
>
> diff --git a/lib/dp-packet-gso.c b/lib/dp-packet-gso.c
> index 847685ad9..f25abf436 100644
> --- a/lib/dp-packet-gso.c
> +++ b/lib/dp-packet-gso.c
> @@ -47,6 +47,8 @@ dp_packet_gso_seg_new(const struct dp_packet *p, size_t hdr_len,
>  seg->l2_5_ofs = p->l2_5_ofs;
>  seg->l3_ofs = p->l3_ofs;
>  seg->l4_ofs = p->l4_ofs;
> +seg->inner_l3_ofs = p->inner_l3_ofs;
> +seg->inner_l4_ofs = p->inner_l4_ofs;
>
>  /* The protocol headers remain the same, so preserve hash and mark. */
>  *dp_packet_rss_ptr(seg) = *dp_packet_rss_ptr(p);
> @@ -71,7 +73,12 @@ dp_packet_gso_nr_segs(struct dp_packet *p)
>  const char *data_tail;
>  const char *data_pos;
>
> -data_pos = dp_packet_get_tcp_payload(p);
> +if (dp_packet_hwol_is_tunnel_vxlan(p) ||
> +dp_packet_hwol_is_tunnel_geneve(p)) {
> +data_pos = dp_packet_get_inner_tcp_payload(p);
> +} else {
> +data_pos = dp_packet_get_tcp_payload(p);
> +}
>  data_tail = (char *) dp_packet_tail(p) - dp_packet_l2_pad_size(p);
>
>  return DIV_ROUND_UP(data_tail - data_pos, segsz);
> @@ -91,12 +98,15 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>  struct tcp_header *tcp_hdr;
>  struct ip_header *ip_hdr;
>  struct dp_packet *seg;
> +const char *data_pos;
>  uint16_t tcp_offset;
>  uint16_t tso_segsz;
> +uint16_t ip_id = 0;
>  uint32_t tcp_seq;
> -uint16_t ip_id;
> +bool outer_ipv4;
>  int hdr_len;
>  int seg_len;
> +bool tnl;
>
>  tso_segsz = dp_packet_get_tso_segsz(p);
>  if (!tso_segsz) {
> @@ -105,20 +115,35 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>  return false;
>  }
>
> -tcp_hdr = dp_packet_l4(p);
> -tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> -tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> -hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
> -  + tcp_offset * 4;
> -ip_id = 0;
> -if (dp_packet_hwol_is_ipv4(p)) {
> +if (dp_packet_hwol_is_tunnel_vxlan(p) ||
> +dp_packet_hwol_is_tunnel_geneve(p)) {
> +data_pos =  dp_packet_get_inner_tcp_payload(p);
> +outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
> +tcp_hdr = dp_packet_inner_l4(p);
> +ip_hdr = dp_packet_inner_l3(p);
> +tnl = true;
> +if (outer_ipv4) {
> +ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
> +} else if (dp_packet_hwol_is_ipv4(p)) {
> +ip_id = ntohs(ip_hdr->ip_id);
> +}
> +} else {
> +data_pos = dp_packet_get_tcp_payload(p);
> +outer_ipv4 = dp_packet_hwol_is_ipv4(p);
> +tcp_hdr = dp_packet_l4(p);
>  ip_hdr = dp_packet_l3(p);
> -ip_id = ntohs(ip_hdr->ip_id);
> +tnl = false;
> +if (outer_ipv4) {
> +ip_id = ntohs(ip_hdr->ip_id);
> +}
>  }
>
> +tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
> +tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
> +hdr_len = ((char *) tcp_hdr - (char *) dp_packet_eth(p))
> +  + tcp_offset * 4;
>  const char *data_tail = (char *) dp_packet_tail(p)
>  - dp_packet_l2_pad_size(p);
> -const char *data_pos = dp_packet_get_tcp_payload(p);
>  int n_segs = dp_packet_gso_nr_segs(p);
>
>  for (int i = 0; i < n_segs; i++) {
> @@ -130,8 +155,26 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
>  seg = dp_packet_gso_seg_new(p, hdr_len, data_pos, seg_len);
>  data_pos += seg_len;
>
> +if (tnl) {
> +/* Update tunnel L3 header. */
> +if (dp_packet_hwol_is_ipv4(seg)) {
> +ip_hdr = dp_packet_inner_l3(seg);
> +ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
> +   dp_packet_inner_l4_size(seg));
> +ip_hdr->ip_id = htons(ip_id);
> +ip_hdr->ip_csum = 0;
> +ip_id++;
> +} else {
> +struct ovs_16aligned_ip6_hdr *ip6_hdr;
> +
> +ip6_hdr = dp_packet_inner_l3(seg);
> +ip6_hdr->ip6_ctlun.ip6_un1.ip6_un1_plen
> += htons(dp_packet_inner_l3_size(seg) - sizeof *ip6_hdr);
> +}

[ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-02-20 Thread Mike Pattrick
When sending packets that are flagged as requiring segmentation to an
interface that doesn't support this feature, send the packet to the TSO
software fallback instead of dropping it.

Signed-off-by: Mike Pattrick 
---
 lib/dp-packet-gso.c | 73 +
 lib/dp-packet.h | 26 +++
 lib/netdev-native-tnl.c |  8 +
 lib/netdev.c| 37 +
 tests/system-traffic.at | 58 
 5 files changed, 167 insertions(+), 35 deletions(-)
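
As a rough sketch of the dispatch this commit message describes (the
corresponding lib/netdev.c hunk is not shown in this excerpt, so the flag
and variable names below only follow existing OVS conventions and are not
the actual change):

    /* Sketch: before transmit, divert tunnelled TSO packets that the
     * device cannot segment itself to the software GSO fallback instead
     * of dropping them. */
    if (dp_packet_hwol_is_tso(packet)
        && (dp_packet_hwol_is_tunnel_vxlan(packet)
            || dp_packet_hwol_is_tunnel_geneve(packet))
        && !(netdev_flags & (NETDEV_TX_VXLAN_TNL_TSO
                             | NETDEV_TX_GENEVE_TNL_TSO))) {
        /* dp_packet_gso() splits 'packet' into MSS-sized segments and
         * appends them to 'batches' for transmission. */
        dp_packet_gso(packet, &batches);
    }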

diff --git a/lib/dp-packet-gso.c b/lib/dp-packet-gso.c
index 847685ad9..f25abf436 100644
--- a/lib/dp-packet-gso.c
+++ b/lib/dp-packet-gso.c
@@ -47,6 +47,8 @@ dp_packet_gso_seg_new(const struct dp_packet *p, size_t hdr_len,
 seg->l2_5_ofs = p->l2_5_ofs;
 seg->l3_ofs = p->l3_ofs;
 seg->l4_ofs = p->l4_ofs;
+seg->inner_l3_ofs = p->inner_l3_ofs;
+seg->inner_l4_ofs = p->inner_l4_ofs;
 
 /* The protocol headers remain the same, so preserve hash and mark. */
 *dp_packet_rss_ptr(seg) = *dp_packet_rss_ptr(p);
@@ -71,7 +73,12 @@ dp_packet_gso_nr_segs(struct dp_packet *p)
 const char *data_tail;
 const char *data_pos;
 
-data_pos = dp_packet_get_tcp_payload(p);
+if (dp_packet_hwol_is_tunnel_vxlan(p) ||
+dp_packet_hwol_is_tunnel_geneve(p)) {
+data_pos = dp_packet_get_inner_tcp_payload(p);
+} else {
+data_pos = dp_packet_get_tcp_payload(p);
+}
 data_tail = (char *) dp_packet_tail(p) - dp_packet_l2_pad_size(p);
 
 return DIV_ROUND_UP(data_tail - data_pos, segsz);
@@ -91,12 +98,15 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 struct tcp_header *tcp_hdr;
 struct ip_header *ip_hdr;
 struct dp_packet *seg;
+const char *data_pos;
 uint16_t tcp_offset;
 uint16_t tso_segsz;
+uint16_t ip_id = 0;
 uint32_t tcp_seq;
-uint16_t ip_id;
+bool outer_ipv4;
 int hdr_len;
 int seg_len;
+bool tnl;
 
 tso_segsz = dp_packet_get_tso_segsz(p);
 if (!tso_segsz) {
@@ -105,20 +115,35 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 return false;
 }
 
-tcp_hdr = dp_packet_l4(p);
-tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
-tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
-hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
-  + tcp_offset * 4;
-ip_id = 0;
-if (dp_packet_hwol_is_ipv4(p)) {
+if (dp_packet_hwol_is_tunnel_vxlan(p) ||
+dp_packet_hwol_is_tunnel_geneve(p)) {
+data_pos =  dp_packet_get_inner_tcp_payload(p);
+outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
+tcp_hdr = dp_packet_inner_l4(p);
+ip_hdr = dp_packet_inner_l3(p);
+tnl = true;
+if (outer_ipv4) {
+ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
+} else if (dp_packet_hwol_is_ipv4(p)) {
+ip_id = ntohs(ip_hdr->ip_id);
+}
+} else {
+data_pos = dp_packet_get_tcp_payload(p);
+outer_ipv4 = dp_packet_hwol_is_ipv4(p);
+tcp_hdr = dp_packet_l4(p);
 ip_hdr = dp_packet_l3(p);
-ip_id = ntohs(ip_hdr->ip_id);
+tnl = false;
+if (outer_ipv4) {
+ip_id = ntohs(ip_hdr->ip_id);
+}
 }
 
+tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
+tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
+hdr_len = ((char *) tcp_hdr - (char *) dp_packet_eth(p))
+  + tcp_offset * 4;
 const char *data_tail = (char *) dp_packet_tail(p)
 - dp_packet_l2_pad_size(p);
-const char *data_pos = dp_packet_get_tcp_payload(p);
 int n_segs = dp_packet_gso_nr_segs(p);
 
 for (int i = 0; i < n_segs; i++) {
@@ -130,8 +155,26 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 seg = dp_packet_gso_seg_new(p, hdr_len, data_pos, seg_len);
 data_pos += seg_len;
 
+if (tnl) {
+/* Update tunnel L3 header. */
+if (dp_packet_hwol_is_ipv4(seg)) {
+ip_hdr = dp_packet_inner_l3(seg);
+ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
+   dp_packet_inner_l4_size(seg));
+ip_hdr->ip_id = htons(ip_id);
+ip_hdr->ip_csum = 0;
+ip_id++;
+} else {
+struct ovs_16aligned_ip6_hdr *ip6_hdr;
+
+ip6_hdr = dp_packet_inner_l3(seg);
+ip6_hdr->ip6_ctlun.ip6_un1.ip6_un1_plen
+= htons(dp_packet_inner_l3_size(seg) - sizeof *ip6_hdr);
+}
+}
+
 /* Update L3 header. */
-if (dp_packet_hwol_is_ipv4(seg)) {
+if (outer_ipv4) {
 ip_hdr = dp_packet_l3(seg);
 ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
dp_packet_l4_size(seg));
@@ -146,7 +189,11 @@ dp_packet_gso(struct 

Re: [ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-01-24 Thread Mike Pattrick
On Wed, Jan 24, 2024 at 1:14 PM Mike Pattrick  wrote:
>
> When sending packets that are flagged as requiring segmentation to an
> interface that doesn't support this feature, send the packet to the TSO
> software fallback instead of dropping it.
>
> Signed-off-by: Mike Pattrick 

For some reason, this patch fails the GitHub CI but not the Intel CI,
and it didn't produce any errors when I tested it locally. I'm looking
into the issue.



[ovs-dev] [PATCH] Userspace: Software fallback for UDP encapsulated TCP segmentation.

2024-01-24 Thread Mike Pattrick
When sending packets that are flagged as requiring segmentation to an
interface that doesn't support this feature, send the packet to the TSO
software fallback instead of dropping it.

Signed-off-by: Mike Pattrick 
---
 lib/dp-packet-gso.c | 73 +
 lib/dp-packet.h | 26 +++
 lib/netdev-native-tnl.c |  8 +
 lib/netdev.c| 19 ---
 tests/system-traffic.at | 48 +++
 5 files changed, 142 insertions(+), 32 deletions(-)

diff --git a/lib/dp-packet-gso.c b/lib/dp-packet-gso.c
index 847685ad9..f25abf436 100644
--- a/lib/dp-packet-gso.c
+++ b/lib/dp-packet-gso.c
@@ -47,6 +47,8 @@ dp_packet_gso_seg_new(const struct dp_packet *p, size_t hdr_len,
 seg->l2_5_ofs = p->l2_5_ofs;
 seg->l3_ofs = p->l3_ofs;
 seg->l4_ofs = p->l4_ofs;
+seg->inner_l3_ofs = p->inner_l3_ofs;
+seg->inner_l4_ofs = p->inner_l4_ofs;
 
 /* The protocol headers remain the same, so preserve hash and mark. */
 *dp_packet_rss_ptr(seg) = *dp_packet_rss_ptr(p);
@@ -71,7 +73,12 @@ dp_packet_gso_nr_segs(struct dp_packet *p)
 const char *data_tail;
 const char *data_pos;
 
-data_pos = dp_packet_get_tcp_payload(p);
+if (dp_packet_hwol_is_tunnel_vxlan(p) ||
+dp_packet_hwol_is_tunnel_geneve(p)) {
+data_pos = dp_packet_get_inner_tcp_payload(p);
+} else {
+data_pos = dp_packet_get_tcp_payload(p);
+}
 data_tail = (char *) dp_packet_tail(p) - dp_packet_l2_pad_size(p);
 
 return DIV_ROUND_UP(data_tail - data_pos, segsz);
@@ -91,12 +98,15 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 struct tcp_header *tcp_hdr;
 struct ip_header *ip_hdr;
 struct dp_packet *seg;
+const char *data_pos;
 uint16_t tcp_offset;
 uint16_t tso_segsz;
+uint16_t ip_id = 0;
 uint32_t tcp_seq;
-uint16_t ip_id;
+bool outer_ipv4;
 int hdr_len;
 int seg_len;
+bool tnl;
 
 tso_segsz = dp_packet_get_tso_segsz(p);
 if (!tso_segsz) {
@@ -105,20 +115,35 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 return false;
 }
 
-tcp_hdr = dp_packet_l4(p);
-tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
-tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
-hdr_len = ((char *) dp_packet_l4(p) - (char *) dp_packet_eth(p))
-  + tcp_offset * 4;
-ip_id = 0;
-if (dp_packet_hwol_is_ipv4(p)) {
+if (dp_packet_hwol_is_tunnel_vxlan(p) ||
+dp_packet_hwol_is_tunnel_geneve(p)) {
+data_pos =  dp_packet_get_inner_tcp_payload(p);
+outer_ipv4 = dp_packet_hwol_is_outer_ipv4(p);
+tcp_hdr = dp_packet_inner_l4(p);
+ip_hdr = dp_packet_inner_l3(p);
+tnl = true;
+if (outer_ipv4) {
+ip_id = ntohs(((struct ip_header *) dp_packet_l3(p))->ip_id);
+} else if (dp_packet_hwol_is_ipv4(p)) {
+ip_id = ntohs(ip_hdr->ip_id);
+}
+} else {
+data_pos = dp_packet_get_tcp_payload(p);
+outer_ipv4 = dp_packet_hwol_is_ipv4(p);
+tcp_hdr = dp_packet_l4(p);
 ip_hdr = dp_packet_l3(p);
-ip_id = ntohs(ip_hdr->ip_id);
+tnl = false;
+if (outer_ipv4) {
+ip_id = ntohs(ip_hdr->ip_id);
+}
 }
 
+tcp_offset = TCP_OFFSET(tcp_hdr->tcp_ctl);
+tcp_seq = ntohl(get_16aligned_be32(&tcp_hdr->tcp_seq));
+hdr_len = ((char *) tcp_hdr - (char *) dp_packet_eth(p))
+  + tcp_offset * 4;
 const char *data_tail = (char *) dp_packet_tail(p)
 - dp_packet_l2_pad_size(p);
-const char *data_pos = dp_packet_get_tcp_payload(p);
 int n_segs = dp_packet_gso_nr_segs(p);
 
 for (int i = 0; i < n_segs; i++) {
@@ -130,8 +155,26 @@ dp_packet_gso(struct dp_packet *p, struct dp_packet_batch **batches)
 seg = dp_packet_gso_seg_new(p, hdr_len, data_pos, seg_len);
 data_pos += seg_len;
 
+if (tnl) {
+/* Update tunnel L3 header. */
+if (dp_packet_hwol_is_ipv4(seg)) {
+ip_hdr = dp_packet_inner_l3(seg);
+ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
+   dp_packet_inner_l4_size(seg));
+ip_hdr->ip_id = htons(ip_id);
+ip_hdr->ip_csum = 0;
+ip_id++;
+} else {
+struct ovs_16aligned_ip6_hdr *ip6_hdr;
+
+ip6_hdr = dp_packet_inner_l3(seg);
+ip6_hdr->ip6_ctlun.ip6_un1.ip6_un1_plen
+= htons(dp_packet_inner_l3_size(seg) - sizeof *ip6_hdr);
+}
+}
+
 /* Update L3 header. */
-if (dp_packet_hwol_is_ipv4(seg)) {
+if (outer_ipv4) {
 ip_hdr = dp_packet_l3(seg);
 ip_hdr->ip_tot_len = htons(sizeof *ip_hdr +
dp_packet_l4_size(seg));
@@ -146,7 +189,11 @@ dp_packet_gso(struct dp_packet *p,