It is mainline since v4.20-rc6. Have you tried it?

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.

  IPv6 fragments with nf_conntrack_reasm loaded cause net_mutex deadlock
  upon LXD container shutdown

Status in linux package in Ubuntu:
Status in linux source package in Bionic:

Bug description:
  I've spent the last few days tracking down an issue where an attempt
  to shutdown an LXD container after several hours of host uptime on
  Ubuntu Bionic (4.15.0-15.16-generic) would cause a kworker thread to
  start spinning on one CPU core and all subsequent container start/stop
  operations to fail.

  The underlying issue is that a kworker thread (executing cleanup_net)
  spins in inet_frags_exit_net, waiting for sum_frag_mem_limit(nf) to
  become zero, which never happens because it has underflowed to some
  negative multiple of 64. That kworker thread keeps holding net_mutex
  and therefore blocks any further container start/stop operations. In
  my case this is triggered by receiving a fragmented IPv6 MDNS packet,
  but it could probably be triggered by any fragmented IPv6 packet.

  The frag mem limit counter underflows because nf_ct_frag6_reasm
  deducts more from it than the sum of all previous nf_ct_frag6_queue
  calls added, due to pskb_expand_head (called through skb_unclone)
  adding a multiple of 64 to the SKB's truesize, which in turn stems
  from kmalloc_reserve allocating some additional slack space for the
  buffer.

  Removing this line:
  size = SKB_WITH_OVERHEAD(ksize(data));
  or making it conditional on nhead or ntail being nonzero works around
  the issue, but a proper fix for this seems complicated. There is
  already a comment saying "It is not generally safe to change
  skb->truesize." right above the offending modification of truesize,
  but the if statement guarding it apparently doesn't keep out all
  problematic cases. I'll leave figuring out the proper way to fix this
  to the maintainers of this area... ;)
