[Kernel-packages] [Bug 1784152] Re: i2c_hid_get_input floods system logs

2019-02-01 Thread Scott Emmons
Hmmm, filtering it out of syslog treats only a single symptom (dmesg etc. will
still have the log spew), not to mention that the events are still occurring,
resulting in wasted CPU cycles. Why not just unload and reload the
module? It's much more straightforward...
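
For reference, the reload approach could look something like this (the module
name is from the log above; the exact invocation is illustrative and assumes
nothing else on the system depends on the driver):

$ sudo modprobe -r i2c_hid && sudo modprobe i2c_hid   # detach and re-probe the I2C HID transport driver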

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1784152

Title:
  i2c_hid_get_input floods system logs

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Ubuntu 4.15.0-29.31-generic 4.15.18

  After upgrading to kernel version 4.15.0-29 from 4.15.0-23, the system
  logs are flooded whenever I move the cursor with my touchpad.

  It looks like this:
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  etc...

  This problem did not occur on the previous kernel version so there
  must have been a change to the "drivers/hid/i2c-hid/i2c-hid.c" file.
  This seems to be fixed in a recent commit here:
  
https://github.com/torvalds/linux/commit/ef6eaf27274c0351f7059163918f3795da13199c

  I am currently running the older kernel version but would still like
  to be up to date without this flooding happening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1784152/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1784152] Re: i2c_hid_get_input floods system logs

2018-12-04 Thread Scott Emmons
I have a similar workaround as Rekby, only as a pm-action hook.

https://gitlab.com/snippets/1786967
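
(The linked snippet isn't reproduced here; as a rough sketch of what such a
resume hook can look like, with a hypothetical pm-utils-style script name and
contents of my own, not necessarily matching the snippet:)

$ cat /etc/pm/sleep.d/99-i2c-hid-reload
#!/bin/sh
# Reload the touchpad driver after resume to stop the incomplete-report spew
case "$1" in
  resume|thaw)
    modprobe -r i2c_hid && modprobe i2c_hid
    ;;
esac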

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1784152

Title:
  i2c_hid_get_input floods system logs

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  Ubuntu 4.15.0-29.31-generic 4.15.18

  After upgrading to kernel version 4.15.0-29 from 4.15.0-23, the system
  logs are flooded whenever I move the cursor with my touchpad.

  It looks like this:
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  i2c_hid i2c-ELAN1010:00: i2c_hid_get_input: incomplete report (14/65535)
  etc...

  This problem did not occur on the previous kernel version so there
  must have been a change to the "drivers/hid/i2c-hid/i2c-hid.c" file.
  This seems to be fixed in a recent commit here:
  
https://github.com/torvalds/linux/commit/ef6eaf27274c0351f7059163918f3795da13199c

  I am currently running the older kernel version but would still like
  to be up to date without this flooding happening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1784152/+subscriptions



[Kernel-packages] [Bug 1788035] Re: nvme: avoid cqe corruption

2018-10-18 Thread Scott Emmons
We can confirm that this patch does not solve the issue as we are still
seeing the same dmesg pattern with the 4.4.0-1069-aws kernel.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1788035

Title:
  nvme: avoid cqe corruption

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  Fix Released

Bug description:
  To address customer-reported NVMe issue with instance types (notably
  c5 and m5) that expose EBS volumes as NVMe devices, this commit from
  mainline v4.6 should be backported to Xenial:

  d783e0bd02e700e7a893ef4fa71c69438ac1c276 nvme: avoid cqe corruption
  when update at the same time as read
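
  One quick way to see whether a given Ubuntu kernel already carries that
  commit is to search the package changelog for its subject line (illustrative
  only, not part of the original report):

  $ apt-get changelog linux-image-$(uname -r) | grep -i 'avoid cqe corruption'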

  dmesg sample:

  [Wed Aug 15 01:11:21 2018] nvme 0000:00:1f.0: I/O 8 QID 1 timeout, aborting
  [Wed Aug 15 01:11:21 2018] nvme 0000:00:1f.0: I/O 9 QID 1 timeout, aborting
  [Wed Aug 15 01:11:21 2018] nvme 0000:00:1f.0: I/O 21 QID 2 timeout, aborting
  [Wed Aug 15 01:11:32 2018] nvme 0000:00:1f.0: I/O 10 QID 1 timeout, aborting
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: I/O 8 QID 1 timeout, reset controller
  [Wed Aug 15 01:11:51 2018] nvme nvme1: Abort status: 0x2
  [Wed Aug 15 01:11:51 2018] nvme nvme1: Abort status: 0x2
  [Wed Aug 15 01:11:51 2018] nvme nvme1: Abort status: 0x2
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 21 QID 2
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: completing aborted command with status: 0007
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887751
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887751
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 22 QID 2
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887767
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887767
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 23 QID 2
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887769
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 83887769
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 8 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 9 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: completing aborted command with status: 0007
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 41943136
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 10 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: completing aborted command with status: 0007
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 6976
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 22 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 23 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 24 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 25 QID 1
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: Cancelling I/O 2 QID 0
  [Wed Aug 15 01:11:51 2018] nvme nvme1: Abort status: 0x7
  [Wed Aug 15 01:11:51 2018] nvme 0000:00:1f.0: completing aborted command with status: fffc
  [Wed Aug 15 01:11:51 2018] blk_update_request: I/O error, dev nvme1n1, sector 96
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): metadata I/O error: block 0x5000687 ("xlog_iodone") error 5 numblks 64
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): xfs_do_force_shutdown(0x2) called from line 1197 of file /build/linux-c2Z51P/linux-4.4.0/fs/xfs/xfs_log.c. Return address = 0xc075d428
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): xfs_log_force: error -5 returned.
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): Log I/O Error Detected. Shutting down filesystem
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): Please umount the filesystem and rectify the problem(s)
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 872, lost async page write
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
  [Wed Aug 15 01:11:51 2018] XFS (nvme1n1): xfs_iunlink_remove: xfs_imap_to_bp returned error -5.
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 873, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 874, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 875, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 876, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 877, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 878, lost async page write
  [Wed Aug 15 01:11:51 2018] Buffer I/O error on dev nvme1n1, logical block 879, lost async page write
  

[Kernel-packages] [Bug 1755627] Re: ibrs/ibpb fixes result in excessive kernel logging

2018-05-10 Thread Scott Emmons
It looks like it got deferred to 4.4.0-125 according to the changelog
[1].

[1] https://launchpad.net/ubuntu/+source/linux/4.4.0-125.150

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1755627

Title:
  ibrs/ibpb fixes result in excessive kernel logging

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Trusty:
  Fix Committed
Status in linux source package in Xenial:
  Fix Committed
Status in linux source package in Artful:
  Fix Committed

Bug description:
  Since at least kernel 4.4.0-116, every invocation of `sysctl -a`
  results in kernel logs similar to the following:

  % sysctl -a &>/dev/null; dmesg -T | tail -8
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0

  The output varies with the number of CPUs.

  After digging a bit, it turns out this is triggered upon every read of
  `kernel.ibrs_dump`:

  % for i in {1..3}; do sysctl kernel.ibrs_dump; dmesg -T | tail -8; echo; sleep 1; done
  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0

  
  Those tests were against an EC2 instance running Ubuntu 4.4.0-116.140-generic 
4.4.98 per /proc/version_signature

  Normally this would not be the biggest concern but we have tooling
  that gathers instance info on a schedule, including sysctl output,
  thus resulting in the kernel ring buffer being full of nothing but
  said output in most cases and hindering live troubleshooting as a
  result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755627/+subscriptions



[Kernel-packages] [Bug 1755627] Re: ibrs/ibpb fixes result in excessive kernel logging

2018-04-12 Thread Scott Emmons
Thank you @kamalmostafa - we'll keep an eye out for the updated packages
in the repositories and follow up if anything is not as expected. Thanks
again for fixing this!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1755627

Title:
  ibrs/ibpb fixes result in excessive kernel logging

Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Trusty:
  Fix Committed
Status in linux source package in Xenial:
  Fix Committed
Status in linux source package in Artful:
  Fix Committed

Bug description:
  Since at least kernel 4.4.0-116, every invocation of `sysctl -a`
  results in kernel logs similar to the following:

  % sysctl -a &>/dev/null; dmesg -T | tail -8
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0

  The output varies with the number of CPUs.

  After digging a bit, it turns out this is triggered upon every read of
  `kernel.ibrs_dump`:

  % for i in {1..3}; do sysctl kernel.ibrs_dump; dmesg -T | tail -8; echo; sleep 1; done
  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0

  
  Those tests were against an EC2 instance running Ubuntu 4.4.0-116.140-generic 
4.4.98 per /proc/version_signature

  Normally this would not be the biggest concern but we have tooling
  that gathers instance info on a schedule, including sysctl output,
  thus resulting in the kernel ring buffer being full of nothing but
  said output in most cases and hindering live troubleshooting as a
  result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755627/+subscriptions



[Kernel-packages] [Bug 1755627] Re: ibrs/ibpb fixes result in excessive kernel logging

2018-03-14 Thread Scott Emmons
Thank you Leann!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1755627

Title:
  ibrs/ibpb fixes result in excessive kernel logging

Status in linux package in Ubuntu:
  Confirmed
Status in linux source package in Trusty:
  Triaged
Status in linux source package in Xenial:
  Triaged

Bug description:
  Since at least kernel 4.4.0-116, every invocation of `sysctl -a`
  results in kernel logs similar to the following:

  % sysctl -a &>/dev/null; dmesg -T | tail -8
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0

  The output varies with the number of CPUs.

  After digging a bit, it turns out this is triggered upon every read of
  `kernel.ibrs_dump`:

  % for i in {1..3}; do sysctl kernel.ibrs_dump; dmesg -T | tail -8; echo; sleep 1; done
  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0

  
  Those tests were against an EC2 instance running Ubuntu 4.4.0-116.140-generic 
4.4.98 per /proc/version_signature

  Normally this would not be the biggest concern but we have tooling
  that gathers instance info on a schedule, including sysctl output,
  thus resulting in the kernel ring buffer being full of nothing but
  said output in most cases and hindering live troubleshooting as a
  result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755627/+subscriptions



[Kernel-packages] [Bug 1755627] Re: ibrs/ibpb fixes result in excessive kernel logging

2018-03-14 Thread Scott Emmons
Returning to confirmed status - easily reproducible with LTS kernels.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1755627

Title:
  ibrs/ibpb fixes result in excessive kernel logging

Status in linux package in Ubuntu:
  Confirmed
Status in linux source package in Xenial:
  Confirmed

Bug description:
  Since at least kernel 4.4.0-116, every invocation of `sysctl -a`
  results in kernel logs similar to the following:

  % sysctl -a &>/dev/null; dmesg -T | tail -8
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0

  The output varies with the number of CPUs.

  After digging a bit, it turns out this is triggered upon every read of
  `kernel.ibrs_dump`:

  % for i in {1..3}; do sysctl kernel.ibrs_dump; dmesg -T | tail -8; echo; sleep 1; done
  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0

  
  Those tests were against an EC2 instance running Ubuntu 4.4.0-116.140-generic 
4.4.98 per /proc/version_signature

  Normally this would not be the biggest concern but we have tooling
  that gathers instance info on a schedule, including sysctl output,
  thus resulting in the kernel ring buffer being full of nothing but
  said output in most cases and hindering live troubleshooting as a
  result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755627/+subscriptions



[Kernel-packages] [Bug 1755627] Re: ibrs/ibpb fixes result in excessive kernel logging

2018-03-14 Thread Scott Emmons
We use LTS kernels, so no - unfortunately we cannot.

** Changed in: linux (Ubuntu Xenial)
   Status: Incomplete => Confirmed

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1755627

Title:
  ibrs/ibpb fixes result in excessive kernel logging

Status in linux package in Ubuntu:
  Confirmed
Status in linux source package in Xenial:
  Confirmed

Bug description:
  Since at least kernel 4.4.0-116, every invocation of `sysctl -a`
  results in kernel logs similar to the following:

  % sysctl -a &>/dev/null; dmesg -T | tail -8
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:06:36 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:06:36 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:06:36 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:06:36 2018] read cpu 1 ibrs val 0

  The output varies with the number of CPUs.

  After digging a bit, it turns out this is triggered upon every read of
  `kernel.ibrs_dump`:

  % for i in {1..3}; do sysctl kernel.ibrs_dump; dmesg -T | tail -8; echo; sleep 1; done
  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:48 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:48 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:48 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:48 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:49 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:49 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:49 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:49 2018] read cpu 1 ibrs val 0

  kernel.ibrs_dump = 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0
  [Wed Mar 14 00:08:50 2018] sysctl_ibrs_enabled = 0, sysctl_ibpb_enabled = 0
  [Wed Mar 14 00:08:50 2018] use_ibrs = 4, use_ibpb = 4
  [Wed Mar 14 00:08:50 2018] read cpu 0 ibrs val 0
  [Wed Mar 14 00:08:50 2018] read cpu 1 ibrs val 0

  
  Those tests were against an EC2 instance running Ubuntu 4.4.0-116.140-generic 
4.4.98 per /proc/version_signature

  Normally this would not be the biggest concern but we have tooling
  that gathers instance info on a schedule, including sysctl output,
  thus resulting in the kernel ring buffer being full of nothing but
  said output in most cases and hindering live troubleshooting as a
  result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755627/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-05 Thread Scott Emmons
I can confirm that with the latest bionic packages (zfsutils-linux
0.7.5-1ubuntu4) all units start successfully for the case where no ZFS
pools are present. This is exactly as I would expect.

I can't really speak to the discussion about tainted kernel. If I didn't
want ZFS, I wouldn't install zfsutils-linux, so I think it's a
reasonable expectation as-is.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Artful:
  In Progress
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request, Artful ==

  Enable ZFS module to be loaded without the broken ubuntu-load-zfs-
  unconditionally.patch.

  == Fix ==

  Add a new zfs-load-module.service script that modprobes the ZFS module
  and remove any hard coded module loading from zfs-import-cache.service
  & zfs-import-scan.service and make these latter scripts require the
  new zfs-load-module.service script.  Also remove the now defunct
  ubuntu-load-zfs-unconditionally.patch as this will no longer be
  required.

  == Testcase ==

  On a clean VM, install with the fixed package, zfs should load
  automatically.

  == Regression potential ==

  ZFS module may not load if the changes are broken. However, testing
  proves this not to be the case.

  

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-02 Thread Scott Emmons
OK, I retested with 0.7.5-1ubuntu3 and it's almost there, but
zfs-mount.service still runs before the zfs kernel module is loaded:

$ systemctl --failed
  UNIT  LOAD   ACTIVE SUB    DESCRIPTION
● zfs-mount.service loaded failed failed Mount ZFS filesystems

$ sudo journalctl -u zfs-mount.service
-- Logs begin at Fri 2018-03-02 19:08:55 UTC, end at Fri 2018-03-02 19:31:42 UTC. --
Mar 02 19:09:00 ubuntu systemd[1]: Starting Mount ZFS filesystems...
Mar 02 19:09:00 ubuntu zfs[557]: The ZFS modules are not loaded.
Mar 02 19:09:00 ubuntu zfs[557]: Try running '/sbin/modprobe zfs' as root to load them.
Mar 02 19:09:00 ubuntu systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Mar 02 19:09:00 ubuntu systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Mar 02 19:09:00 ubuntu systemd[1]: Failed to start Mount ZFS filesystems.

But, by the time the system is fully up, something else has loaded the
zfs kernel module along the way:

$ lsmod|grep zfs
zfs  3407872  3

I do see that zfs-load-module.service has "WantedBy=zfs-mount.service",
but maybe zfs-mount.service needs "After=zfs-load-module.service" to
ensure the dependency order? Looking at the dependency tree,
zfs-mount.service and zfs-import.target happen in parallel with the start
of zfs-load-module.service, resulting in a race condition.
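
For what it's worth, a minimal sketch of a drop-in expressing that ordering
(unit names as discussed above; the drop-in itself is my illustration, not
something shipped by the package):

$ sudo mkdir -p /etc/systemd/system/zfs-mount.service.d
$ printf '[Unit]\nRequires=zfs-load-module.service\nAfter=zfs-load-module.service\n' \
    | sudo tee /etc/systemd/system/zfs-mount.service.d/override.conf
$ sudo systemctl daemon-reload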

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Artful:
  In Progress
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request, Artful ==

  Enable ZFS module to be loaded without the broken ubuntu-load-zfs-
  unconditionally.patch.

  == Fix ==

  Add a new zfs-load-module.service script that modprobes the ZFS module
  and remove any hard coded module loading from zfs-import-cache.service
  & zfs-import-scan.service and make these latter scripts require the
  new zfs-load-module.service script.  Also remove the now defunct
  ubuntu-load-zfs-unconditionally.patch as this will no longer be
  required.

  == Testcase ==

  On a clean VM, install with the fixed package, zfs should load
  automatically.

  == Regression potential ==

  ZFS module may not load if the changes are broken. However, testing
  proves this not to be the case.

  

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-02 Thread Scott Emmons
Great, thanks Colin! We will test this on bionic very soon and will
follow up to confirm.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Artful:
  In Progress
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request, Artful ==

  Enable ZFS module to be loaded without the broken ubuntu-load-zfs-
  unconditionally.patch.

  == Fix ==

  Add a new zfs-load-module.service script that modprobes the ZFS module
  and remove any hard coded module loading from zfs-import-cache.service
  & zfs-import-scan.service and make these latter scripts require the
  new zfs-load-module.service script.  Also remove the now defunct
  ubuntu-load-zfs-unconditionally.patch as this will no longer be
  required.

  == Testcase ==

  On a clean VM, install with the fixed package, zfs should load
  automatically.

  == Regression potential ==

  ZFS module may not load if the changes are broken. However, testing
  proves this not to be the case.

  

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-01 Thread Scott Emmons
I found the xenial issue I was thinking of [1] but I'd be surprised if
that particular case regressed (it had to do with use of /etc/mtab). For
completeness, I'll mention it here anyway.

[1] https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1607920

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-01 Thread Scott Emmons
Note that for bionic as of this week the behavior is now different. This
particular problem doesn't surface for zfs-import-cache.service (because
the ConditionPathExists expression is back in the unit).

It's not all good news, however, as that one failed unit has been
replaced with:

$ systemctl --failed
  UNIT  LOAD   ACTIVE SUB    DESCRIPTION
● zfs-mount.service loaded failed failed Mount ZFS filesystems
● zfs-share.service loaded failed failed ZFS file system shares
● zfs-zed.service   loaded failed failed ZFS Event Daemon (zed)

All because the zfs kernel module is now no longer being loaded. So,
it's related to this bug in that zfs-import-cache.service had been
loading the zfs kernel module because it was running unconditionally.
Now that this unit no longer runs due to ConditionPathExists, the other
zfs units fail.

I'm fairly certain we ran into this same issue back in xenial, I'll see
if I can track down the launchpad # for it. Thanks!
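
As a manual stopgap on an affected boot, loading the module by hand and
restarting the failed units listed above gets things going again, e.g.
(illustrative commands, not a proposed fix):

$ sudo modprobe zfs
$ sudo systemctl restart zfs-mount.service zfs-share.service zfs-zed.service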

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-02-13 Thread Scott Emmons
Let me know what I can do to help move this forward. The
ubuntu-load-zfs-unconditionally patch was not a great solution, as it
causes systems that install zfsutils-linux to come up in a failed state
when there are no ZFS pools present:
$ systemctl --failed
  UNIT LOAD   ACTIVE SUB    DESCRIPTION
* zfs-import-cache.service loaded failed failed Import ZFS pools by cache file

We use a common system image across a large fleet, some with ZFS pools,
most without - and would prefer not to have to carry a local override.conf
in our image.
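
(For context, the kind of per-image override being avoided here is a drop-in
along these lines; the path and contents are illustrative, simply restoring
the condition that the patch removes:)

$ cat /etc/systemd/system/zfs-import-cache.service.d/override.conf
[Unit]
ConditionPathExists=/etc/zfs/zpool.cache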

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-01-23 Thread Scott Emmons
Confirmed to also affect bionic:

$ sudo journalctl -u zfs-import-cache.service
-- Logs begin at Tue 2018-01-23 17:46:55 UTC, end at Tue 2018-01-23 17:49:18 UTC. --
Jan 23 17:46:58 ubuntu systemd[1]: Starting Import ZFS pools by cache file...
Jan 23 17:46:59 ubuntu zpool[640]: failed to open cache file: No such file or directory
Jan 23 17:46:59 ubuntu systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:46:59 ubuntu systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Jan 23 17:46:59 ubuntu systemd[1]: Failed to start Import ZFS pools by cache file.

This is on a host with no ZFS filesystems, so the expected behavior is a no-op.

** Tags added: artful bionic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions



[Kernel-packages] [Bug 1515513] Re: /boot/initrd.img-*.old-dkms files left behind

2017-09-01 Thread Scott Emmons
The commit for this in upstream Debian (initramfs-tools 0.123) is here:
https://anonscm.debian.org/cgit/kernel/initramfs-tools.git/commit/?id=ac6d31fc2c707b72ff8af9944c9b4f8af303a6a3
I would be happy to prepare a patch with this commit for xenial (zesty and
later have 0.125+, so they should already include the fix).
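
To check which initramfs-tools version a given release ships, something like
the following is enough (illustrative only):

$ apt-cache policy initramfs-tools | head -n 3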

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to dkms in Ubuntu.
https://bugs.launchpad.net/bugs/1515513

Title:
  /boot/initrd.img-*.old-dkms files left behind

Status in dkms package in Ubuntu:
  Confirmed
Status in initramfs-tools package in Ubuntu:
  Confirmed
Status in dkms package in Debian:
  New

Bug description:
  One notices *.old-dkms files being left behind still sitting on the
  disk after purging the related kernel. This can cause /boot to become
  full, and when it gets really bad, even sudo apt-get autoremove won't
  fix the problem - only deleting the old-dkms files manually solves the
  problem.

  Note:  Filling up the /boot partition causes updates to fail.
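
  For reference, the manual cleanup mentioned above amounts to removing the
  stale images by hand - review the list before deleting, and treat the exact
  commands as illustrative:

  $ ls -lh /boot/initrd.img-*.old-dkms
  $ sudo rm /boot/initrd.img-*.old-dkms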

  ProblemType: Bug
  DistroRelease: Ubuntu 15.04
  Package: dkms 2.2.0.3-2ubuntu3.3
  ProcVersionSignature: Ubuntu 3.19.0-28.30-generic 3.19.8-ckt5
  Uname: Linux 3.19.0-28-generic x86_64
  ApportVersion: 2.17.2-0ubuntu1.7
  Architecture: amd64
  CurrentDesktop: KDE
  Date: Thu Nov 12 08:17:10 2015
  InstallationDate: Installed on 2015-05-05 (190 days ago)
  InstallationMedia: Ubuntu 15.04 "Vivid Vervet" - Release amd64 (20150422)
  PackageArchitecture: all
  SourcePackage: dkms
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dkms/+bug/1515513/+subscriptions



[Kernel-packages] [Bug 1515513] Re: /boot/initrd.img-*.old-dkms files left behind

2017-09-01 Thread Scott Emmons
Sorry, my previous comment was for another initramfs-tools related bug.
Please disregard #18.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to dkms in Ubuntu.
https://bugs.launchpad.net/bugs/1515513

Title:
  /boot/initrd.img-*.old-dkms files left behind

Status in dkms package in Ubuntu:
  Confirmed
Status in initramfs-tools package in Ubuntu:
  Confirmed
Status in dkms package in Debian:
  New

Bug description:
  One notices *.old-dkms files being left behind still sitting on the
  disk after purging the related kernel. This can cause /boot to become
  full, and when it gets really bad, even sudo apt-get autoremove won't
  fix the problem - only deleting the old-dkms files manually solves the
  problem.

  Note:  Filling up the /boot partition causes updates to fail.

  ProblemType: Bug
  DistroRelease: Ubuntu 15.04
  Package: dkms 2.2.0.3-2ubuntu3.3
  ProcVersionSignature: Ubuntu 3.19.0-28.30-generic 3.19.8-ckt5
  Uname: Linux 3.19.0-28-generic x86_64
  ApportVersion: 2.17.2-0ubuntu1.7
  Architecture: amd64
  CurrentDesktop: KDE
  Date: Thu Nov 12 08:17:10 2015
  InstallationDate: Installed on 2015-05-05 (190 days ago)
  InstallationMedia: Ubuntu 15.04 "Vivid Vervet" - Release amd64 (20150422)
  PackageArchitecture: all
  SourcePackage: dkms
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dkms/+bug/1515513/+subscriptions



[Kernel-packages] [Bug 1679768] Re: Docker build hangs on XFS with kernel 4.10

2017-08-14 Thread Scott Emmons
Duplicate: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1709749
Related: https://github.com/moby/moby/issues/34361

** Bug watch added: github.com/moby/moby/issues #34361
   https://github.com/moby/moby/issues/34361

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1679768

Title:
  Docker build hangs on XFS with kernel 4.10

Status in docker.io package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  I have my docker partition on XFS, and recently upgraded to Zesty Beta.
  Most of my `docker build .` processes hang with a similar message in syslog:

  ```
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171267] INFO: task kworker/1:3:5589 blocked for more than 120 seconds.
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171275]   Tainted: P O 4.10.0-15-generic #17-Ubuntu
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171278] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171282] kworker/1:3 D0  5589  2 0x
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171310] Workqueue: aufsd wkq_func [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171313] Call Trace:
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171323]  __schedule+0x233/0x6f0
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171415]  ? kmem_zone_alloc+0x81/0x120 [xfs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171419]  schedule+0x36/0x80
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171423]  rwsem_down_read_failed+0xfa/0x150
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171491]  ? xfs_file_buffered_aio_read+0x3d/0xc0 [xfs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171496]  call_rwsem_down_read_failed+0x18/0x30
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171500]  down_read+0x20/0x40
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171567]  xfs_ilock+0xf5/0x110 [xfs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171628]  xfs_file_buffered_aio_read+0x3d/0xc0 [xfs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171686]  xfs_file_read_iter+0x68/0xc0 [xfs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171692]  new_sync_read+0xd2/0x120
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171695]  __vfs_read+0x26/0x40
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171697]  vfs_read+0x96/0x130
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171715]  vfsub_read_u+0x14/0x30 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171729]  vfsub_read_k+0x2c/0x40 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171743]  au_copy_file+0x10c/0x370 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171756]  au_cp_regular+0x11a/0x200 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171767]  ? au_cp_regular+0x1ef/0x200 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171777]  ? au_cp_regular+0x188/0x200 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171788]  cpup_entry+0x538/0x5f0 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171802]  ? vfsub_lookup_one_len+0x31/0x70 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171814]  au_cpup_single.constprop.18+0x145/0x6a0 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171820]  ? dput+0x40/0x270
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171831]  au_cpup_simple+0x4d/0x80 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171842]  au_call_cpup_simple+0x28/0x40 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171855]  wkq_func+0x14/0x80 [aufs]
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171861]  process_one_work+0x1fc/0x4b0
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171864]  worker_thread+0x4b/0x500
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171870]  kthread+0x101/0x140
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171873]  ? process_one_work+0x4b0/0x4b0
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171878]  ? kthread_create_on_node+0x60/0x60
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171883]  ? SyS_exit_group+0x14/0x20
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171887]  ret_from_fork+0x2c/0x40
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171895] INFO: task useradd:5758 blocked for more than 120 seconds.
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171899]   Tainted: P O 4.10.0-15-generic #17-Ubuntu
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1679768/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1709749] Re: Docker hangs with xfs using aufs storage driver

2017-08-14 Thread Scott Emmons
This appears to be a duplicate of
https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1679768 and it
also affects Xenial when using the HWE backport kernel.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1709749

Title:
  Docker hangs with xfs using aufs storage driver

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Zesty:
  In Progress

Bug description:
  SRU Justification

  Impact: Running the yum command in a CentOS/RHEL container causes the
  process to hang in uninterruptible sleep if /var/lib/docker is hosted
  on an XFS filesystem and the AUFS storage driver is in use.
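
  A minimal sketch of that scenario (hedged: the device path /dev/sdb1 and
  the centos:7 image tag are illustrative placeholders, not details taken
  from this report):

  ```
  # Back /var/lib/docker with XFS and switch Docker to the aufs driver
  # (assumes /var/lib/docker is empty and daemon.json has no other settings),
  # then run yum in a CentOS container; on affected 4.10 kernels the yum
  # process is expected to end up in uninterruptible sleep (D state).
  mkfs.xfs /dev/sdb1
  mount /dev/sdb1 /var/lib/docker
  echo '{ "storage-driver": "aufs" }' > /etc/docker/daemon.json
  systemctl restart docker
  docker run --rm centos:7 yum -y makecache
  ```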

  Fix: Commit e34c81ff96415c64ca827ec30e7935454d26c1d3 from upstream
  aufs-standalone. Requires prerequisite commit
  b4d3dcc92a13d53952fe6e9a640201ef87475302.
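
  A hedged way to check whether a tree already carries both commits (assumes
  a local clone of sfjro/aufs4-standalone and that the relevant branch is
  named aufs4.10; commit IDs are the ones quoted above):

  ```
  git -C aufs4-standalone log --oneline aufs4.10 | \
    grep -E 'e34c81ff9641|b4d3dcc92a13'
  ```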

  Test Case: Test scenario described above. Kernel with fixes applied
  has been confirmed by reporter to fix the issue.

  Regression Potential: Both patches have been in upstream aufs-
  standalone for a while now and included in the artful kernel as well
  as currently being included in the upstream branch for 4.10. Therefore
  they are well tested and unlikely to cause regressions.

  ---

  1) The release of Ubuntu you are using, via 'lsb_release -rd' or System -> About Ubuntu
  Ubuntu 16.04 and 17.04

  2) The version of the package you are using, via 'apt-cache policy pkgname' or by checking in Software Center
  kernel 4.10.0-28-generic

  3) What you expected to happen

  Docker with AUFS on XFS works.

  4) What happened instead

  It hangs.

  ---

  As reported in https://github.com/moby/moby/issues/34361 , AUFS on XFS
  needs this fix for kernel 4.10:
  
https://github.com/sfjro/aufs4-standalone/commit/e34c81ff96415c64ca827ec30e7935454d26c1d3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1709749/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1679768] Re: Docker build hangs on XFS with kernel 4.10

2017-08-14 Thread Scott Emmons
We are seeing the same issue here with 4.10.0-24-generic, and will see
if we can reproduce with a newer hwe backport (4.11.0-13-generic).

Jul 20 14:32:47 ubuntu kernel: INFO: task kworker/5:2:574 blocked for more than 120 seconds.
Jul 20 14:32:47 ubuntu kernel:   Tainted: P   O    4.10.0-24-generic #28~16.04.1-Ubuntu
Jul 20 14:32:47 ubuntu kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this messa
Jul 20 14:32:47 ubuntu kernel: kworker/5:2 D0   574  2 0x
Jul 20 14:32:47 ubuntu kernel: Workqueue: aufsd wkq_func [aufs]
Jul 20 14:32:47 ubuntu kernel: Call Trace:
Jul 20 14:32:47 ubuntu kernel:  __schedule+0x232/0x700
Jul 20 14:32:47 ubuntu kernel:  ? kmem_cache_alloc+0xd7/0x1b0
Jul 20 14:32:47 ubuntu kernel:  ? kmem_zone_alloc+0x81/0x120 [xfs]
Jul 20 14:32:47 ubuntu kernel:  schedule+0x36/0x80
Jul 20 14:32:47 ubuntu kernel:  rwsem_down_read_failed+0xf9/0x150
Jul 20 14:32:47 ubuntu kernel:  ? xfs_trans_free_item_desc+0x33/0x40 [xfs]
Jul 20 14:32:47 ubuntu kernel:  ? xfs_trans_free_items+0x80/0xb0 [xfs]
Jul 20 14:32:47 ubuntu kernel:  ? xfs_file_buffered_aio_read+0x3d/0xc0 [xfs]
Jul 20 14:32:47 ubuntu kernel:  call_rwsem_down_read_failed+0x18/0x30
Jul 20 14:32:47 ubuntu kernel:  down_read+0x20/0x40
Jul 20 14:32:47 ubuntu kernel:  xfs_ilock+0xfa/0x110 [xfs]
Jul 20 14:32:47 ubuntu kernel:  xfs_file_buffered_aio_read+0x3d/0xc0 [xfs]
Jul 20 14:32:47 ubuntu kernel:  xfs_file_read_iter+0x68/0xc0 [xfs]
Jul 20 14:32:47 ubuntu kernel:  new_sync_read+0xd0/0x120
Jul 20 14:32:47 ubuntu kernel:  __vfs_read+0x26/0x40
Jul 20 14:32:47 ubuntu kernel:  vfs_read+0x93/0x130
Jul 20 14:32:47 ubuntu kernel:  vfsub_read_u+0x14/0x30 [aufs]
Jul 20 14:32:47 ubuntu kernel:  vfsub_read_k+0x2c/0x40 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_copy_file+0x10f/0x370 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_cp_regular+0x10f/0x200 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? au_cp_regular+0x1ad/0x200 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? au_cp_regular+0x177/0x200 [aufs]
Jul 20 14:32:47 ubuntu kernel:  cpup_entry+0x552/0x610 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_cpup_single.constprop.18+0x145/0x6a0 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? dput+0x34/0x250
Jul 20 14:32:47 ubuntu kernel:  au_cpup_simple+0x53/0x90 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_call_cpup_simple+0x28/0x40 [aufs]
Jul 20 14:32:47 ubuntu kernel:  wkq_func+0x14/0x80 [aufs]
Jul 20 14:32:47 ubuntu kernel:  process_one_work+0x16b/0x4a0
Jul 20 14:32:47 ubuntu kernel:  worker_thread+0x4b/0x500
Jul 20 14:32:47 ubuntu kernel:  kthread+0x109/0x140
Jul 20 14:32:47 ubuntu kernel:  ? process_one_work+0x4a0/0x4a0
Jul 20 14:32:47 ubuntu kernel:  ? kthread_create_on_node+0x60/0x60
Jul 20 14:32:47 ubuntu kernel:  ret_from_fork+0x2c/0x40
Jul 20 14:32:47 ubuntu kernel: INFO: task s6-chown:5548 blocked for more than 120 seconds.
Jul 20 14:32:47 ubuntu kernel:   Tainted: P   O    4.10.0-24-generic #28~16.04.1-Ubuntu
Jul 20 14:32:47 ubuntu kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this messa
Jul 20 14:32:47 ubuntu kernel: s6-chown D0  5548   5546 0x0100
Jul 20 14:32:47 ubuntu kernel: Call Trace:
Jul 20 14:32:47 ubuntu kernel:  __schedule+0x232/0x700
Jul 20 14:32:47 ubuntu kernel:  ? set_next_entity+0xc3/0x1b0
Jul 20 14:32:47 ubuntu kernel:  schedule+0x36/0x80
Jul 20 14:32:47 ubuntu kernel:  schedule_timeout+0x235/0x3f0
Jul 20 14:32:47 ubuntu kernel:  ? finish_task_switch+0x76/0x210
Jul 20 14:32:47 ubuntu kernel:  ? __schedule+0x23a/0x700
Jul 20 14:32:47 ubuntu kernel:  ? wake_up_process+0x15/0x20
Jul 20 14:32:47 ubuntu kernel:  ? insert_work+0x85/0xc0
Jul 20 14:32:47 ubuntu kernel:  wait_for_completion+0xb4/0x140
Jul 20 14:32:47 ubuntu kernel:  ? wake_up_q+0x70/0x70
Jul 20 14:32:47 ubuntu kernel:  au_wkq_do_wait+0x91/0xf0 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? au_wkq_run+0x60/0x60 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? au_do_sio_cpup_simple+0x110/0x110 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_do_sio_cpup_simple+0x99/0x110 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_sio_cpup_simple+0x21/0x70 [aufs]
Jul 20 14:32:47 ubuntu kernel:  au_pin_and_icpup+0x232/0x440 [aufs]
Jul 20 14:32:47 ubuntu kernel:  aufs_setattr+0x309/0x4e0 [aufs]
Jul 20 14:32:47 ubuntu kernel:  ? evm_inode_setattr+0x1e/0x70
Jul 20 14:32:47 ubuntu kernel:  notify_change+0x2d8/0x430
[...]
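
For anyone hitting the same thing, a minimal sketch for confirming which
tasks are stuck in uninterruptible sleep and capturing their kernel stacks
for a report like the one above (the <pid> is a placeholder for whatever
PID the first command prints):

```
# List D-state (uninterruptible sleep) tasks with their wait channel
ps -eo pid,stat,wchan:30,comm | awk '$2 ~ /^D/'
# Dump the kernel stack of one of them (requires root)
sudo cat /proc/<pid>/stack
```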

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1679768

Title:
  Docker build hangs on XFS with kernel 4.10

Status in docker.io package in Ubuntu:
  Incomplete
Status in linux package in Ubuntu:
  Incomplete

Bug description:
  I have my docker partition on XFS, and recently upgraded to Zesty Beta.
  Most of my `docker build .` processes hang with a similar message in syslog:

  ```
  Apr  4 09:19:04 epuspdxn0004 kernel: [ 1330.171267] INFO: task kworker/1:3:5589 blocked for more than 120 seconds.
  Apr  4 09:19:04 

[Kernel-packages] [Bug 1696558] Re: Enable CONFIG_SECURITY_DMESG_RESTRICT

2017-06-12 Thread Scott Emmons
This bug is not tracking a problem; it requests changing a default kernel
option to a more secure setting that is already in use by other
distributions, such as upstream Debian.
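
For context, whether a given Ubuntu kernel was built with the option can be
checked from its shipped config (a quick sketch, nothing linux-aws specific):

```
grep CONFIG_SECURITY_DMESG_RESTRICT /boot/config-$(uname -r)
```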

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1696558

Title:
  Enable CONFIG_SECURITY_DMESG_RESTRICT

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  There is a request to enable the following for linux-aws.

  config SECURITY_DMESG_RESTRICT
          bool "Restrict unprivileged access to the kernel syslog"
          default n
          help
            This enforces restrictions on unprivileged users reading the kernel
            syslog via dmesg(8).

            If this option is not selected, no restrictions will be enforced
            unless the dmesg_restrict sysctl is explicitly set to (1).

            If you are unsure how to answer this question, answer N.
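
  Independent of the build-time default, the same restriction can be toggled
  at runtime through the dmesg_restrict sysctl; a minimal sketch:

  ```
  sysctl kernel.dmesg_restrict              # 0 = unrestricted, 1 = restricted
  sudo sysctl -w kernel.dmesg_restrict=1    # enable on the running system
  # Persist across reboots (the drop-in file name is an example):
  echo 'kernel.dmesg_restrict = 1' | sudo tee /etc/sysctl.d/10-dmesg-restrict.conf
  ```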

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1696558/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp