On Mon, Mar 05, 2018 at 03:57:52PM +0300, Alexey Kodanev wrote:
> On 03/03/2018 03:20 PM, Neil Horman wrote:
> > On Fri, Mar 02, 2018 at 09:16:48PM +0300, Alexey Kodanev wrote:
> >> When we exceed the current packet limit and have more than one
> >> segment in the list returned by skb_gso_segment(), netem drops
> >> only the first one, skipping the rest, hence kmemleak reports:
> >>
...
> >>
> >> Fix it by adding the rest of the segments, if any, to the skb
> >> 'to_free' list in that case.
> >>
> >> Fixes: 6071bd1aa13e ("netem: Segment GSO packets on enqueue")
> >> Signed-off-by: Alexey Kodanev <alexey.koda...@oracle.com>
> >> ---
> >>  net/sched/sch_netem.c | 8 +++++++-
> >>  1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
> >> index 7c179ad..a5023a2 100644
> >> --- a/net/sched/sch_netem.c
> >> +++ b/net/sched/sch_netem.c
> >> @@ -508,8 +508,14 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
> >>  			1<<(prandom_u32() % 8);
> >>  	}
> >>
> >> -	if (unlikely(sch->q.qlen >= sch->limit))
> >> +	if (unlikely(sch->q.qlen >= sch->limit)) {
> >> +		while (segs) {
> >> +			skb2 = segs->next;
> >> +			__qdisc_drop(segs, to_free);
> >> +			segs = skb2;
> >> +		}
> >>  		return qdisc_drop(skb, sch, to_free);
> >> +	}
> >>
> > It seems like it might be nice to wrap up this drop loop into a
> > qdisc_drop_all inline function. Then we can easily drop segments in
> > other locations if we should need to.
> >
> Agree, will prepare the patch.
> I guess we could just add 'segs' to the 'to_free' list, then add
> qdisc_drop_all() with a stats counter and a return status, something
> like this:
>
> @@ -824,6 +824,18 @@ static inline void __qdisc_drop(struct sk_buff *skb, struct sk_buff **to_free)
>  	*to_free = skb;
>  }
>
> +static inline void __qdisc_drop_all(struct sk_buff *skb,
> +				    struct sk_buff **to_free)
> +{
> +	struct sk_buff *first = skb;
> +
> +	while (skb->next)
> +		skb = skb->next;
> +
> +	skb->next = *to_free;
> +	*to_free = first;
> +}
>
I agree.
Thanks!
Neil

> Thanks,
> Alexey
>