> -----Original Message-----
> From: netdev-ow...@vger.kernel.org [mailto:netdev-ow...@vger.kernel.org]
> On Behalf Of Florian Westphal
> Sent: Monday, August 28, 2017 10:01 PM
> To: liujian (CE)
> Cc: Jesper Dangaard Brouer; da...@davemloft.net; kuz...@ms2.inr.ac.ru;
> yoshf...@linux-ipv6.org; elena.reshet...@intel.com; eduma...@google.com;
> netdev@vger.kernel.org; Wangkefeng (Kevin); weiyongjun (A)
> Subject: Re: Question about ip_defrag
> 
> liujian (CE) <liujia...@huawei.com> wrote:
> > Hi
> >
> > I checked our 3.10 kernel; we had backported all of the percpu_counter bug
> > fixes in lib/percpu_counter.c and include/linux/percpu_counter.h.
> > I also checked 4.13-rc6, and it has the same issue if the NIC's RX CPU
> > count is large enough.
> >
> > > > > > the issue:
> > > > > > ip_defrag fails because frag_mem_limit has reached 4M
> > > > > > (frags.high_thresh). At that moment, sum_frag_mem_limit is only
> > > > > > about 10K.
> >
> > So should we change the ipfrag high/low thresholds to a more reasonable value?
> > And if so, is there a standard way to choose the value?
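(For context: the two numbers above come from different views of the same
percpu_counter. As far as I can tell from include/net/inet_frag.h of this era,
frag_mem_limit() reads the cheap, possibly-stale global estimate, while
sum_frag_mem_limit() sums every CPU's pending delta; quoted from memory, so
treat it as a sketch:)

static inline int frag_mem_limit(struct netns_frags *nf)
{
	/* cheap read: may lag the true sum by the per-CPU deltas */
	return percpu_counter_read(&nf->mem);
}

static inline unsigned int sum_frag_mem_limit(struct netns_frags *nf)
{
	/* accurate but expensive: sums every CPU's pending delta */
	return percpu_counter_sum_positive(&nf->mem);
}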
> 
> Each CPU can hold up to frag_percpu_counter_batch bytes that the rest of the
> system doesn't know about, so with 64 CPUs that is ~8 MB unaccounted for in
> the global estimate.
> 
> possible solutions:
> 1. reduce frag_percpu_counter_batch to 16k or so
> 2. make both low and high thresh depend on NR_CPUS
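That matches what we observed: the global estimate stays pinned near
high_thresh while the per-CPU deltas hold the frees. To convince myself, I
modelled the batching in userspace (my own sketch, not lib/percpu_counter.c;
130048 is the batch value I believe inet_fragment.c uses):

/* Toy model of the percpu_counter batching behind nf->mem: each CPU
 * accumulates a local delta and folds it into the shared count only
 * once |delta| reaches the batch size.  Illustrative only -- this is
 * not lib/percpu_counter.c.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 64

struct pcpu_counter {
	long count;		/* global estimate, what frag_mem_limit() sees */
	long local[NR_CPUS];	/* per-CPU deltas invisible to other CPUs */
};

static void pcpu_add(struct pcpu_counter *c, int cpu, long amount, long batch)
{
	long v = c->local[cpu] + amount;

	if (labs(v) >= batch) {		/* fold into the shared count */
		c->count += v;
		c->local[cpu] = 0;
	} else {
		c->local[cpu] = v;	/* stays local, unseen elsewhere */
	}
}

int main(void)
{
	struct pcpu_counter c = { 0 };
	long batch = 130048;	/* the value I believe inet_fragment.c uses */
	int cpu;

	/* one CPU charges 8 MB of fragment memory; large enough to fold */
	pcpu_add(&c, 0, 8 << 20, batch);

	/* every CPU then frees just under one batch; the frees stay local */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		pcpu_add(&c, cpu, -(batch - 1), batch);

	/* estimate still reads ~8 MB although almost everything was freed,
	 * so new fragments are dropped against high_thresh */
	printf("estimate=%ld bytes\n", c.count);
	return 0;
}

With 64 CPUs each holding just under one batch of uncounted frees, the
estimate stays ~8 MB above the true usage, which is exactly the arithmetic
above.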
> 
Thank you for your reply.
 
> liujian, does this change help in any way?

I will give it a try.

> diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
> --- a/net/ipv4/inet_fragment.c
> +++ b/net/ipv4/inet_fragment.c
> @@ -123,6 +123,17 @@ static bool inet_fragq_should_evict(const struct inet_frag_queue *q)
>              frag_mem_limit(q->net) >= q->net->low_thresh;
>  }
> 
> +/* ->mem batch size is huge, this can cause severe discrepancies
> + * between actual value (sum of pcpu values) and the global estimate.
> + *
> + * Use a smaller batch to give an opportunity for the global estimate
> + * to more accurately reflect current state.
> + */
> +static void update_frag_mem_limit(struct netns_frags *nf, unsigned int batch)
> +{
> +      percpu_counter_add_batch(&nf->mem, 0, batch);
> +}
> +
>  static unsigned int
>  inet_evict_bucket(struct inet_frags *f, struct inet_frag_bucket *hb)
>  {
> @@ -146,8 +157,12 @@ inet_evict_bucket(struct inet_frags *f, struct inet_frag_bucket *hb)
> 
>       spin_unlock(&hb->chain_lock);
> 
> -     hlist_for_each_entry_safe(fq, n, &expired, list_evictor)
> +     hlist_for_each_entry_safe(fq, n, &expired, list_evictor) {
> +             struct netns_frags *nf = fq->net;
> +
>               f->frag_expire((unsigned long) fq);
> +             update_frag_mem_limit(nf, 1);
> +     }
> 
>       return evicted;
>  }
> @@ -396,8 +411,10 @@ struct inet_frag_queue *inet_frag_find(struct netns_frags *nf,
>       struct inet_frag_queue *q;
>       int depth = 0;
> 
> -     if (frag_mem_limit(nf) > nf->low_thresh)
> +     if (frag_mem_limit(nf) > nf->low_thresh) {
>               inet_frag_schedule_worker(f);
> +             update_frag_mem_limit(nf, SKB_TRUESIZE(1500) * 16);
> +     }
> 
>       hash &= (INETFRAGS_HASHSZ - 1);
>       hb = &f->hash[hash];
> @@ -416,6 +433,8 @@ struct inet_frag_queue *inet_frag_find(struct netns_frags *nf,
>       if (depth <= INETFRAGS_MAXDEPTH)
>               return inet_frag_create(nf, f, key);
> 
> +     update_frag_mem_limit(nf, 1);
> +
>       if (inet_frag_may_rebuild(f)) {
>               if (!f->rebuild)
>                       f->rebuild = true;
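For my own understanding: percpu_counter_add_batch(&nf->mem, 0, batch) folds
the current CPU's pending delta into the global count whenever |delta| >=
batch, so with batch == 1 any nonzero delta gets flushed. In terms of my toy
model above (pcpu_add() is illustrative, not kernel code):

/* Toy model of update_frag_mem_limit(): adding 0 with a small batch
 * forces the current CPU's pending delta into the shared count.
 * Uses pcpu_add() from my sketch above; not the kernel implementation.
 */
static void model_update_frag_mem_limit(struct pcpu_counter *c, int cpu,
					long batch)
{
	/* |local + 0| >= batch  =>  fold the local delta into c->count */
	pcpu_add(c, cpu, 0, batch);
}

If that reading is right, the patch flushes with batch == 1 on the eviction
and hash-rebuild slow paths, and uses the larger SKB_TRUESIZE(1500) * 16 on
the inet_frag_find() fast path, so the drift stays bounded without paying for
a fold on every lookup.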
