[Kernel-packages] [Bug 1655356] Re: NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]; oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

2017-07-02 Thread Launchpad Bug Tracker
[Expired for linux (Ubuntu) because there has been no activity for 60
days.]

** Changed in: linux (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1655356

Title:
  NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50];
  oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

Status in linux package in Ubuntu:
  Expired

Bug description:
  I have a Dell (PowerEdge T110/0V52N7, BIOS 1.6.4 03/02/2011) that had
  been running Ubuntu 16.04 for a while. Ever since I upgraded to 16.10,
  this problem has occurred: errors, OOM kills, and an eventual kernel
  panic. The machine runs fine for about 3-4 hours. I see the following
  errors in syslog (also attached along with other logs and information
  I could gather).

  Jan  9 07:36:32 gorilla kernel: [69304.099302] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:37:00 gorilla kernel: [69332.119587] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:37:33 gorilla kernel: [69364.114705] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:38:01 gorilla kernel: [69392.127352] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:38:37 gorilla kernel: [69428.134132] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:39:45 gorilla kernel: [69496.112694] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kswapd0:50]
  Jan  9 07:40:13 gorilla kernel: [69524.112050] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:40:49 gorilla kernel: [69560.104511] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:41:17 gorilla kernel: [69588.107302] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:41:45 gorilla kernel: [69616.104843] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kswapd0:50]

  Jan  8 11:52:27 gorilla kernel: [ 2852.818471] rsync invoked oom-killer: gfp_mask=0x26000d0(GFP_TEMPORARY|__GFP_NOTRACK), order=0, oom_score_adj=0
  Jan  9 07:38:56 gorilla kernel: [69448.096571] kthreadd invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:39:46 gorilla kernel: [69497.705922] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:40:50 gorilla kernel: [69561.956773] sh invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:41:10 gorilla kernel: [69582.329364] rsync invoked oom-killer: gfp_mask=0x26000d0(GFP_TEMPORARY|__GFP_NOTRACK), order=0, oom_score_adj=0
  Jan  9 07:42:40 gorilla kernel: [69672.181041] sessionclean invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:42:41 gorilla kernel: [69673.298714] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:42:59 gorilla kernel: [69691.320169] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:43:03 gorilla kernel: [69694.769140] sessionclean invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:43:20 gorilla kernel: [69712.255535] kthreadd invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0

  
  Jan  8 11:46:11 gorilla kernel: [ 2476.342532] perf: interrupt took too long (2512 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
  Jan  8 11:49:04 gorilla kernel: [ 2650.045417] perf: interrupt took too long (3147 > 3140), lowering kernel.perf_event_max_sample_rate to 63500
  Jan  8 11:49:56 gorilla kernel: [ 2701.973751] perf: interrupt took too long (3982 > 3933), lowering kernel.perf_event_max_sample_rate to 5
  Jan  8 11:51:47 gorilla kernel: [ 2812.208307] perf: interrupt took too long (4980 > 4977), lowering kernel.perf_event_max_sample_rate to 4
  Jan  8 13:56:06 gorilla kernel: [ 5678.539070] perf: interrupt took too long (2513 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
  Jan  8 15:59:49 gorilla kernel: [13101.158417] perf: interrupt took too long (3148 > 3141), lowering kernel.perf_event_max_sample_rate to 63500
  Jan  9 02:15:54 gorilla kernel: [50065.939132] perf: interrupt took too long (3942 > 3935), lowering kernel.perf_event_max_sample_rate to 50500
  Jan  9 07:35:30 gorilla kernel: [69241.742219] perf: interrupt took too long (4932 > 4927), lowering kernel.perf_event_max_sample_rate to 40500
  Jan  9 07:35:54 gorilla kernel: [69265.928531] perf: interrupt took too long (6170 > 6165), lowering kernel.perf_event_max_sample_rate to 32250
  Jan  9 07:36:53 gorilla kernel: [69325.386696] perf: interrupt took too long (7723 > 7712), lowering kernel.perf_event_max_sample_rate


2017-05-03 Thread Joseph Salisbury
** Changed in: linux (Ubuntu)
   Status: Confirmed => Incomplete



2017-04-30 Thread Arul
Update: After upgrading to 17.04 (i.e. kernel 4.10*), I no longer have
this problem.



2017-01-19 Thread Joseph Salisbury
This may be related to bug 1655842. Can you test the test kernel for
that bug, which can be downloaded here:

http://kernel.ubuntu.com/~jsalisbury/lp1655842/



2017-01-17 Thread Arul
Before trying the upstream kernel, I tried to replicate the issue and
noticed it happened every time there was heavy file I/O. I was able to
reproduce it at will by running apps that do a lot of file I/O. I also
monitored free memory every second to understand why the kernel was
invoking the oom-killer to randomly kill applications. When the
oom-killer started killing random applications, memory looked like
this:

Every 1.0s: free -h                       gorilla: Sat Jan 14 09:52:01 2017

              total        used        free      shared  buff/cache   available
Mem:           5.9G        755M        127M         17M        5.1G        4.6G
Swap:          2.0G          0B        2.0G

As you can see, there is a lot of available memory (mostly in cache, and I am
fairly sure most of it is clean cache), but for some reason it was not being
reclaimed by the kernel (kswapd0?). So I decided to run "echo 3 >
/proc/sys/vm/drop_caches" frequently to force the caches to be dropped, and
sure enough everything worked fine. I have not seen this problem in the last
2+ days.
 
root@gorilla:~# cat /var/log/syslog | egrep "NMI watchdog: BUG: soft lockup|oom-killer"
root@gorilla:~# uptime
 07:37:29 up 2 days, 19:34,  1 user,  load average: 1.63, 0.77, 0.29
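The manual workaround above can be automated so the caches are dropped only when free memory actually runs low. A minimal sketch, assuming Linux /proc and root privileges; the helper names and the 256 MiB threshold are illustrative, not from this report:

```shell
#!/bin/sh
# Automates the manual "echo 3 > /proc/sys/vm/drop_caches" workaround:
# drop clean caches only when free memory falls below a threshold.
# THRESHOLD_KB, free_kb and maybe_drop_caches are illustrative names.

THRESHOLD_KB=262144  # assumed threshold: 256 MiB

# Read MemFree (in kB) from /proc/meminfo.
free_kb() {
    awk '/^MemFree:/ {print $2}' /proc/meminfo
}

maybe_drop_caches() {
    if [ "$(free_kb)" -lt "$THRESHOLD_KB" ]; then
        sync                               # flush dirty pages first
        echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes
    fi
}
```

Something like this could run from cron every minute or so as a stopgap until a fixed kernel is installed.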

Now that I suspect this may be a bug in kswapd0, I searched here for
similar kswapd0 issues and found one (see below). I am not sure it is
the same problem, though the symptoms and workaround are the same.

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1518457

Near the end of that report, comment #142 says there is no problem with
the 4.4.0-45 kernel but that Yakkety-based 4.8+ kernels have this
problem. Assuming it is the same issue, I can confirm the same: I never
had this problem before upgrading to Yakkety. I wonder whether the bug
made its way back in after that fix. Since I have a workaround, I am
going to continue with it; it is not ideal, but it seems to hold. The
last note on that report says the bug is fixed and that any new problem
should be opened as a new bug. Can this report be treated as a new bug
to address this problem?

Thanks
 


** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed



2017-01-13 Thread Arul
@jsalisbury, thank you for the suggestions. I don't have the xenial
kernel handy, as I have the habit of running apt-get autoremove after
upgrades. However, I will try the latest upstream kernel this weekend
and report the results.



2017-01-12 Thread Joseph Salisbury
Does booting back with the Xenial kernel make the bug go away?

Would it be possible for you to test the latest upstream kernel? Refer
to https://wiki.ubuntu.com/KernelMainlineBuilds. Please test the latest
v4.10 kernel[0].

If this bug is fixed in the mainline kernel, please add the following
tag 'kernel-fixed-upstream'.

If the mainline kernel does not fix this bug, please add the tag:
'kernel-bug-exists-upstream'.

Once testing of the upstream kernel is complete, please mark this bug as
"Confirmed".


Thanks in advance.

[0] http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10-rc3


** Changed in: linux (Ubuntu)
   Importance: Undecided => High

** Changed in: linux (Ubuntu)
   Status: Confirmed => Incomplete

** Tags added: kernel-da-key

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1655356

Title:
  NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50];
  oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

Status in linux package in Ubuntu:
  Incomplete


[Kernel-packages] [Bug 1655356] Re: NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]; oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

2017-01-12 Thread Arul
Kernel update to 4.8.0-34 did not make any difference.

Help please.

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1655356] Re: NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]; oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

2017-01-11 Thread Arul
Yesterday, I disabled the NMI watchdog (a workaround discussed in
another, similar bug report), but the machine still crashed this
morning.

root@gorilla:~# sysctl -a|grep kernel.nmi_watchdog
kernel.nmi_watchdog = 0

Today, I upgraded from the 4.8.0-32 kernel to 4.8.0-34 to see if that
makes any difference.

Status in linux package in Ubuntu:
  Confirmed


[Kernel-packages] [Bug 1655356] Re: NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]; oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

2017-01-10 Thread Arul
apport information

** Tags added: apport-collected

** Description changed:

  I have a Dell (PowerEdge T110/0V52N7, BIOS 1.6.4 03/02/2011) that had been
  running Ubuntu 16.04 for a while. Ever since I upgraded to 16.10, this
  problem has occurred: errors, OOM kills, and an eventual kernel panic. The
  machine can run fine for about 3-4 hours or so. I see the following errors
  in syslog (also attached with other logs and information I could gather).
  
  Jan  9 07:36:32 gorilla kernel: [69304.099302] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:37:00 gorilla kernel: [69332.119587] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:37:33 gorilla kernel: [69364.114705] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:38:01 gorilla kernel: [69392.127352] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:38:37 gorilla kernel: [69428.134132] NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kswapd0:50]
  Jan  9 07:39:45 gorilla kernel: [69496.112694] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kswapd0:50]
  Jan  9 07:40:13 gorilla kernel: [69524.112050] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:40:49 gorilla kernel: [69560.104511] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:41:17 gorilla kernel: [69588.107302] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]
  Jan  9 07:41:45 gorilla kernel: [69616.104843] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kswapd0:50]

  Jan  8 11:52:27 gorilla kernel: [ 2852.818471] rsync invoked oom-killer: gfp_mask=0x26000d0(GFP_TEMPORARY|__GFP_NOTRACK), order=0, oom_score_adj=0
  Jan  9 07:38:56 gorilla kernel: [69448.096571] kthreadd invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:39:46 gorilla kernel: [69497.705922] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:40:50 gorilla kernel: [69561.956773] sh invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:41:10 gorilla kernel: [69582.329364] rsync invoked oom-killer: gfp_mask=0x26000d0(GFP_TEMPORARY|__GFP_NOTRACK), order=0, oom_score_adj=0
  Jan  9 07:42:40 gorilla kernel: [69672.181041] sessionclean invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:42:41 gorilla kernel: [69673.298714] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:42:59 gorilla kernel: [69691.320169] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:43:03 gorilla kernel: [69694.769140] sessionclean invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
  Jan  9 07:43:20 gorilla kernel: [69712.255535] kthreadd invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0

  Jan  8 11:46:11 gorilla kernel: [ 2476.342532] perf: interrupt took too long (2512 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
  Jan  8 11:49:04 gorilla kernel: [ 2650.045417] perf: interrupt took too long (3147 > 3140), lowering kernel.perf_event_max_sample_rate to 63500
  Jan  8 11:49:56 gorilla kernel: [ 2701.973751] perf: interrupt took too long (3982 > 3933), lowering kernel.perf_event_max_sample_rate to 5
  Jan  8 11:51:47 gorilla kernel: [ 2812.208307] perf: interrupt took too long (4980 > 4977), lowering kernel.perf_event_max_sample_rate to 4
  Jan  8 13:56:06 gorilla kernel: [ 5678.539070] perf: interrupt took too long (2513 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
  Jan  8 15:59:49 gorilla kernel: [13101.158417] perf: interrupt took too long (3148 > 3141), lowering kernel.perf_event_max_sample_rate to 63500
  Jan  9 02:15:54 gorilla kernel: [50065.939132] perf: interrupt took too long (3942 > 3935), lowering kernel.perf_event_max_sample_rate to 50500
  Jan  9 07:35:30 gorilla kernel: [69241.742219] perf: interrupt took too long (4932 > 4927), lowering kernel.perf_event_max_sample_rate to 40500
  Jan  9 07:35:54 gorilla kernel: [69265.928531] perf: interrupt took too long (6170 > 6165), lowering kernel.perf_event_max_sample_rate to 32250
  Jan  9 07:36:53 gorilla kernel: [69325.386696] perf: interrupt took too long (7723 > 7712), lowering kernel.perf_event_max_sample_rate to 25750
  
  
  To make sure this is not memory-related, I ran memtest for 12 passes
  overnight and found no memory errors. I removed the external backup drives
  to isolate the problem, and checked similar issues on launchpad.net, but
  most of them are related to video drivers or power supplies.
  
  Appreciate help.
  
  Thanks
  -Arul
  
  Attachments:
  --
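[Editor's note] A quick way to triage logs like those quoted above is to count which processes invoked the oom-killer. A small awk sketch over an inlined sample of the lines shown (in practice you would feed in /var/log/syslog instead of the here-document):

```shell
# Count oom-killer invocations per process, most frequent first.
# The process name is the field immediately before "invoked".
awk '/invoked oom-killer/ {
       for (i = 1; i < NF; i++)
         if ($(i+1) == "invoked") { count[$i]++; break }
     }
     END { for (p in count) print count[p], p }' <<'EOF' | sort -rn
Jan  9 07:39:46 gorilla kernel: [69497.705922] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
Jan  9 07:41:10 gorilla kernel: [69582.329364] rsync invoked oom-killer: gfp_mask=0x26000d0(GFP_TEMPORARY|__GFP_NOTRACK), order=0, oom_score_adj=0
Jan  9 07:42:41 gorilla kernel: [69673.298714] apache2 invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=1, oom_score_adj=0
EOF
# prints:
#   2 apache2
#   1 rsync
```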

[Kernel-packages] [Bug 1655356] Re: NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:50]; oom-killer; and eventual kernel panic on 16.10 (upgrade from 16.04)

2017-01-10 Thread Brian Murray
** Tags added: yakkety

** Package changed: ubuntu => linux (Ubuntu)

Status in linux package in Ubuntu:
  New
