Is this being verified for Xenial as well?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1649905

Title:
  On boot excessive number of kworker threads are running

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Yakkety:
  Fix Released
Status in linux source package in Zesty:
  Fix Released

Bug description:
  [SRU REQUEST, Yakkety]

  Ubuntu Yakkety 4.8 kernels have an excessive number of kworker threads
  running. This is especially noticeable on boot, where one can easily
  have > 1000 kworker threads on a 4 CPU box.

  Bisected this down to:

  commit 81ae6d03952c1bfb96e1a716809bd65e7cd14360
  Author: Vladimir Davydov <vdavy...@virtuozzo.com>
  Date:   Thu May 19 17:10:34 2016 -0700

      mm/slub.c: replace kick_all_cpus_sync() with synchronize_sched() in
      kmem_cache_shrink()

  [FIX]

  The synchronize_sched() calls seem to be what creates all these
  excessive kworker threads.  This is fixed by upstream commit:

  commit 89e364db71fb5e7fc8d93228152abfa67daf35fa
  Author: Vladimir Davydov <vdavydov....@gmail.com>
  Date:   Mon Dec 12 16:41:32 2016 -0800

      slub: move synchronize_sched out of slab_mutex on shrink
      
      synchronize_sched() is a heavy operation and calling it per each cache
      owned by a memory cgroup being destroyed may take quite some time.  What
      is worse, it's currently called under the slab_mutex, stalling all works
      doing cache creation/destruction.
      
      Actually, there isn't much point in calling synchronize_sched() for each
      cache - it's enough to call it just once - after setting cpu_partial for
      all caches and before shrinking them.  This way, we can also move it out
      of the slab_mutex, which we have to hold for iterating over the slab
      cache list.
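
  To illustrate the change, here is a minimal hand-written C sketch of
  the pattern the commit message describes. It is not the actual kernel
  diff; disable_cpu_partials() and shrink_cache() are hypothetical
  helpers standing in for the real slub internals:

      struct kmem_cache *s;

      /* Old pattern: one heavy synchronize_sched() per cache, all under
       * slab_mutex, so other cache creation/destruction works stall. */
      mutex_lock(&slab_mutex);
      list_for_each_entry(s, &slab_caches, list) {
              disable_cpu_partials(s);  /* set cpu_partial = 0 for this cache */
              synchronize_sched();      /* RCU-sched grace period, per cache */
              shrink_cache(s);
      }
      mutex_unlock(&slab_mutex);

      /* New pattern: update every cache first, then issue a single
       * synchronize_sched() outside slab_mutex, then shrink. */
      mutex_lock(&slab_mutex);
      list_for_each_entry(s, &slab_caches, list)
              disable_cpu_partials(s);
      mutex_unlock(&slab_mutex);

      synchronize_sched();              /* one grace period covers all caches */

      mutex_lock(&slab_mutex);
      list_for_each_entry(s, &slab_caches, list)
              shrink_cache(s);
      mutex_unlock(&slab_mutex);

  With the per-cache calls, destroying a memory cgroup that owns many
  caches queues a grace-period wait for every one of them, which appears
  to be where the flood of kworker threads reported in this bug comes
  from; a single call after updating all the caches avoids that.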

  [TEST CASE]

  Without the fix, boot a Yakkety kernel and count the kworker threads:

  ps -ef | grep kworker | wc -l
  1034

  With the fix, boot and count the kworker threads again; the count is
  dramatically lower:

  ps -ef | grep kworker | wc -l
  32

  Since this touches the slub allocator and also cgroups, I have run
  this fix through the kernel-team autotest regression tests as a sanity
  check. All seems OK.

  Note: this only affects kernels from 4.7-rc1 through to 4.8

