Richard Gobert wrote:
> This patch adds network_offset and inner_network_offset to napi_gro_cb, and
> makes sure both are set correctly. In the common path there's only one
> write (skb_gro_reset_offset).
> 
> Signed-off-by: Richard Gobert <[email protected]>
> ---
>  drivers/net/geneve.c           |  1 +
>  drivers/net/vxlan/vxlan_core.c |  1 +
>  include/net/gro.h              | 18 ++++++++++++++++--
>  net/8021q/vlan_core.c          |  2 ++
>  net/core/gro.c                 |  1 +
>  net/ethernet/eth.c             |  1 +
>  net/ipv4/af_inet.c             |  5 +----
>  net/ipv4/gre_offload.c         |  1 +
>  net/ipv6/ip6_offload.c         |  8 ++++----
>  9 files changed, 28 insertions(+), 10 deletions(-)
> 
> diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
> index d4520c3f7c09..ae596285d78c 100644
> --- a/net/ipv4/gre_offload.c
> +++ b/net/ipv4/gre_offload.c
> @@ -224,6 +224,7 @@ static struct sk_buff *gre_gro_receive(struct list_head *head,
>       /* Adjusted NAPI_GRO_CB(skb)->csum after skb_gro_pull()*/
>       skb_gro_postpull_rcsum(skb, greh, grehlen);
>  
> +     NAPI_GRO_CB(skb)->inner_network_offset = hlen;
>       pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
>       flush = 0;

Nice that this works even for ETH_P_TEB, since eth_gro_receive will
overwrite the offset written here.
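
For readers following along, the ETH_P_TEB sequence would be roughly the
below (a sketch of my understanding, not the actual eth.c hunk, which
isn't quoted here; the variable names "hlen" and "off" are assumptions):

	/* gre_gro_receive() (hunk above): for ETH_P_TEB, hlen is the
	 * offset of the inner Ethernet header, so this write is only
	 * provisional.
	 */
	NAPI_GRO_CB(skb)->inner_network_offset = hlen;

	/* eth_gro_receive(), called next for ETH_P_TEB: overwrite the
	 * field with the offset just past the inner Ethernet header,
	 * i.e. the real inner network header ("off" being
	 * skb_gro_offset(skb) at entry).
	 */
	NAPI_GRO_CB(skb)->inner_network_offset = off + sizeof(struct ethhdr);

so the last writer wins and the final value comes out correct.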
>       list_for_each_entry(p, head, list) {
>               const struct ipv6hdr *iph2;
> @@ -327,6 +325,7 @@ static struct sk_buff *sit_ip6ip6_gro_receive(struct list_head *head,
>       }
>  
>       NAPI_GRO_CB(skb)->encap_mark = 1;
> +     NAPI_GRO_CB(skb)->inner_network_offset = skb_gro_offset(skb);
>  
>       return ipv6_gro_receive(head, skb);
>  }
> @@ -342,6 +341,7 @@ static struct sk_buff *ip4ip6_gro_receive(struct list_head *head,
>       }
>  
>       NAPI_GRO_CB(skb)->encap_mark = 1;
> +     NAPI_GRO_CB(skb)->inner_network_offset = skb_gro_offset(skb);

Do we still need encap_mark, or is it always set at the same time that
inner_network_offset becomes non-zero?
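
If it is, the tunnel-recursion guards could in principle test the
offset itself instead, e.g. (hypothetical, untested):

	/* hypothetical replacement for the encap_mark check: a
	 * non-zero inner_network_offset already means we have seen
	 * one level of encapsulation, so flush instead of recursing.
	 */
	if (NAPI_GRO_CB(skb)->inner_network_offset) {
		NAPI_GRO_CB(skb)->flush = 1;
		return NULL;
	}

	NAPI_GRO_CB(skb)->inner_network_offset = skb_gro_offset(skb);

though that only holds if no path sets inner_network_offset without
also meaning "encapsulated".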

