[Kernel-packages] [Bug 1766964] Re: zpool status -v aborts with SIGABRT with and without arguments

2018-07-17 Thread Colin Ian King
I've not heard back with the information requested in comment #1, so
I'm going to mark this as "Won't Fix". If this is still an issue,
please re-open the bug and provide the debugging feedback as requested.

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766964

Title:
  zpool status -v aborts with SIGABRT with and without arguments

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  I am currently running Ubuntu 16.04 with ZFS 0.6.5.

  Distributor ID: Ubuntu
  Description:    Ubuntu 16.04.4 LTS
  Release:        16.04
  Codename:       xenial

  christian@kepler ~ $ apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu20
Candidate: 0.6.5.6-0ubuntu20
Version table:
   *** 0.6.5.6-0ubuntu20 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial-updates/main amd64 Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial/universe amd64 Packages

  
  Here is the package listing:

  (standard input):ii  libzfs2linux    0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem library for Linux
  (standard input):ii  zfs-dkms        0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem kernel modules for Linux
  (standard input):ii  zfs-doc         0.6.5.6-0ubuntu20  all    Native OpenZFS filesystem documentation and examples.
  (standard input):ii  zfs-zed         0.6.5.6-0ubuntu20  amd64  OpenZFS Event Daemon (zed)
  (standard input):ii  zfsutils-linux  0.6.5.6-0ubuntu20  amd64  Native OpenZFS management utilities for Linux

  
  I try to run status on my zpool using the command `zpool status zkepler` and get this result:

pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  802G scanned out of 2.28T at 217M/s, 2h0m to go
  0 repaired, 34.32% done
  Aborted

  I would expect an extended status report, but the command aborts with
  SIGABRT. Running it under gdb shows:

  (gdb) run status -v
  Starting program: /sbin/zpool status -v
  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  825G scanned out of 2.28T at 211M/s, 2h2m to go
  0 repaired, 35.32% done

  Program received signal SIGABRT, Aborted.
  0x768d6428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
  54  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
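
  To dig further, a minimal gdb sketch for capturing the full backtrace
  at the abort (the dbgsym package name is an assumption and requires the
  ddebs archive to be enabled; a plain bt works without symbols too):

  sudo apt install zfsutils-linux-dbgsym   # assumed debug-symbol package
  gdb /sbin/zpool
  (gdb) run status -v
  ... Program received signal SIGABRT, Aborted.
  (gdb) bt full        # full backtrace of the aborting thread
  (gdb) info threads   # check whether other threads are stuck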

  I have upgraded this machine from 14.04 LTS within the last few months
  but I purged all the ZFS packages and the ZFS PPA and reinstalled all
  the packages. My kernel version is 4.4.0-121-generic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1766964/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1722261] Re: deadlock in mount umount and sync

2018-07-17 Thread Colin Ian King
I'm going to mark this as "Won't Fix": the issue is not occurring with
the zil_slog_limit change in place, even though I've not fixed anything.
If this problem re-occurs please re-open this bug.
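
For reference, a sketch of how to inspect the module parameter mentioned
above on a running system (the sysfs path assumes ZFS 0.6.x, where
zil_slog_limit exists; the example value is an assumption):

cat /sys/module/zfs/parameters/zil_slog_limit   # e.g. 1048576 (1 MiB)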

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1722261

Title:
  deadlock in mount umount and sync

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  I use zfs version 0.6.5.6 on Ubuntu 16.04.2 LTS. I have many zombie
  processes on auto-mount of snapshots and on sync! 903 processes are in
  deadlock. I can't mount a new file system or snapshot. Partial output
  of ps alx | grep 'call_r D' is below.

  What is the cause? What can I do?

  0 0   2371  1  20   0   6016   752 call_r D?  0:00 /bin/sync
  0 0  15290  1  20   0   6016   676 call_r D?  0:00 /bin/sync
  0 0  18919  1  20   0   6016   708 call_r D?  0:00 /bin/sync
  0 0  27076  1  20   0   6016   808 call_r D?  0:00 /bin/sync
  4 0  31976  1  20   0  22084  1344 call_r D?  0:00 umount -t zfs -n /samba/shares/Aat/.zfs/snapshot/2017-10-04_09.00.05--5d

  error in kern.log:
  Oct  9 13:20:28 zfs-cis kernel: [5368563.592834] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.597868] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.601730] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.187001] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.13] WARNING: Unable to automount /samba/shares/Cardiologia2/.zfs/snapshot/2017-10-03_12.00.03--5d/pool_z2_samba/shares/Cardiologia2@2017-10-03_12.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.15] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.189005] WARNING: Unable to automount /samba/shares/Aat/.zfs/snapshot/2017-10-03_20.00.04--5d/pool_z2_samba/shares/Aat@2017-10-03_20.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.190105] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.192847] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.193617] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.198096] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256


  
  in syslog :

  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dLaboratorio_5fTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=202 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dProgettoTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=260 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=291 reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll

[Kernel-packages] [Bug 1654517] Re: ZFS I/O hangs for minutes

2018-07-17 Thread Colin Ian King
This stack trace is similar to one in
https://github.com/zfsonlinux/zfs/issues/2934 - the suspected cause is
that one of the devices is taking a long time to complete I/O and ZFS
is blocked waiting for it.

I suggest installing sysstat

sudo apt install sysstat

and when the lockup happens run:

iostat -mx

so that we can see if it is a long device delay or not.
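
If the hangs last minutes, a simple sampling run makes the comparison
easier (a sketch; the interval and log path are arbitrary choices):

iostat -mx 5 > /tmp/iostat.log &   # sample all devices every 5 seconds

Consistently high await/%util on one device during a stall would point
at a slow device rather than at ZFS itself.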

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1654517

Title:
  ZFS I/O hangs for minutes

Status in Native ZFS for Linux:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I/O for multiple programs, like `thunderbird`, `firefox`, etc., hangs
  for minutes and approx. 100 `z_rd_int_[n]` and `z_wr_int_[n]` kernel
  threads are created, `dmesg` contains

  [ 9184.451606] INFO: task txg_sync:11471 blocked for more than 120 seconds.
  [ 9184.451610]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.451611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.451612] txg_syncD a240ab3a7aa8 0 11471  2 0x
  [ 9184.451616]  a240ab3a7aa8 00ffbb6ade1f a24095148000 a240e5ca5580
  [ 9184.451618]  0046 a240ab3a8000 a240ff359200 7fff
  [ 9184.451620]  a23d36cf9050 0001 a240ab3a7ac0 bbe96b15
  [ 9184.451621] Call Trace:
  [ 9184.451627]  [] schedule+0x35/0x80
  [ 9184.451628]  [] schedule_timeout+0x22a/0x3f0
  [ 9184.451631]  [] ? __switch_to+0x2ce/0x6c0
  [ 9184.451633]  [] ? pick_next_task_fair+0x48c/0x4c0
  [ 9184.451635]  [] ? ktime_get+0x41/0xb0
  [ 9184.451636]  [] io_schedule_timeout+0xa4/0x110
  [ 9184.451644]  [] cv_wait_common+0xb2/0x130 [spl]
  [ 9184.451646]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.451650]  [] __cv_wait_io+0x18/0x20 [spl]
  [ 9184.451689]  [] zio_wait+0xfd/0x1d0 [zfs]
  [ 9184.451716]  [] dsl_pool_sync+0xb8/0x480 [zfs]
  [ 9184.451745]  [] spa_sync+0x37f/0xb30 [zfs]
  [ 9184.451747]  [] ? default_wake_function+0x12/0x20
  [ 9184.451779]  [] txg_sync_thread+0x3a5/0x600 [zfs]
  [ 9184.451807]  [] ? txg_delay+0x160/0x160 [zfs]
  [ 9184.451811]  [] thread_generic_wrapper+0x71/0x80 [spl]
  [ 9184.451815]  [] ? __thread_exit+0x20/0x20 [spl]
  [ 9184.451817]  [] kthread+0xd8/0xf0
  [ 9184.451819]  [] ret_from_fork+0x1f/0x40
  [ 9184.451821]  [] ? kthread_create_on_node+0x1e0/0x1e0
  [ 9184.451849] INFO: task mozStorage #2:21607 blocked for more than 120 seconds.
  [ 9184.451851]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.451852] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.451853] mozStorage #2   D a23fe8a5bd38 0 21607  19750 0x0004
  [ 9184.451855]  a23fe8a5bd38 00ffa240ee8feb40 a240ecf72ac0 a2403803b900
  [ 9184.451857]  bc2c02f7 a23fe8a5c000 a240aa940828 a240aa940800
  [ 9184.451858]  a240aa940980  a23fe8a5bd50 bbe96b15
  [ 9184.451860] Call Trace:
  [ 9184.451861]  [] schedule+0x35/0x80
  [ 9184.451866]  [] cv_wait_common+0x110/0x130 [spl]
  [ 9184.451868]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.451872]  [] __cv_wait+0x15/0x20 [spl]
  [ 9184.451904]  [] zil_commit.part.11+0x79/0x7a0 [zfs]
  [ 9184.451909]  [] ? tsd_hash_search.isra.0+0x46/0xa0 [spl]
  [ 9184.451913]  [] ? tsd_set+0x2b4/0x500 [spl]
  [ 9184.451914]  [] ? mutex_lock+0x12/0x30
  [ 9184.451945]  [] zil_commit+0x17/0x20 [zfs]
  [ 9184.451975]  [] zfs_fsync+0x7a/0xf0 [zfs]
  [ 9184.452005]  [] zpl_fsync+0x68/0xa0 [zfs]
  [ 9184.452008]  [] vfs_fsync_range+0x4b/0xb0
  [ 9184.452010]  [] do_fsync+0x3d/0x70
  [ 9184.452011]  [] SyS_fsync+0x10/0x20
  [ 9184.452013]  [] entry_SYSCALL_64_fastpath+0x1e/0xa8
  [ 9184.452023] INFO: task bitcoin-msghand:663 blocked for more than 120 seconds.
  [ 9184.452024]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.452025] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.452026] bitcoin-msghand D a23eeb23bd38 0   663  26994 0x
  [ 9184.452028]  a23eeb23bd38 00ffa23eab434000 a240ecf7 a24095148000
  [ 9184.452030]  a23eeb23bd20 a23eeb23c000 a240aa940828 a240aa940800
  [ 9184.452031]  a240aa940980  a23eeb23bd50 bbe96b15
  [ 9184.452033] Call Trace:
  [ 9184.452034]  [] schedule+0x35/0x80
  [ 9184.452039]  [] cv_wait_common+0x110/0x130 [spl]
  [ 9184.452041]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.452044]  [] __cv_wait+0x15/0x20 [spl]
  [ 9184.452074]  [] zil_commit.part.11+0x79/0x7a0

[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-16 Thread Colin Ian King
Vasiliy, please refer to comment #8

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Fix Released
Status in linux package in Ubuntu:
  Fix Committed
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Confirmed
Status in zfs-linux source package in Xenial:
  Confirmed
Status in linux source package in Bionic:
  Confirmed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in linux source package in Cosmic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, XENIAL, BIONIC ==

  Exercising ZFS with lxd with many mount/umounts can cause lockups and
  120 second timeout messages.

  == How to reproduce bug ==

  In a VM, 2 CPUs, 16GB of memory running Bionic:

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  sudo lxd init

  (and with the default init options)

  then run:

  lxd-benchmark launch --count 96 --parallel 96

  This will reliably show the lockup every time without the fix.  With
  the fix (detailed below) one cannot reproduce the lockup.

  == Fix ==

  Upstream ZFS commit

  commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
  Author: Brian Behlendorf 
  Date: Wed Jul 11 15:49:10 2018 -0700

  Fix zpl_mount() deadlock

  == Regression Potential ==

  This just changes the locking in the mount path of ZFS and will only
  affect ZFS mounts/unmounts.  The regression potential is small as this
  touches a very small code path; the change has been exhaustively
  exercised under multiple thread/CPU contention and shown not to break.

  --

  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (dir, zfs) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]:
   Would you like to create a new local network bridge? (yes/no) [default=yes]: no
   Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]:
   Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

  Now run the following to launch 48 containers in batches of 12.

  lxd-benchmark launch --count 48 --parallel 12

  In two out of four attempts, I got the kernel errors.

  I also tried

  echo 1 >/sys/module/spl/parameters/spl_taskq_kick

  but did not manage to continue.
  Include any warning/errors/backtraces from the system logs
  dmesg output


[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-16 Thread Colin Ian King
The kernel driver fix will land in the next -proposed kernel as the
Ubuntu ZFS driver comes bundled with the kernel.

If you build zfs from source, then that will build the kernel driver as
a DKMS module with the fix in it and *that* will work.

One needs both the zfs userspace packages and the fixed kernel for the
entire bug fix.
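
A sketch of how one could rebuild and then verify the module once the
fixed source is installed (generic commands; the sysfs version path is
present on ZFS on Linux builds):

sudo apt install --reinstall zfs-dkms   # rebuilds the module via DKMS
sudo dkms status | grep zfs             # confirm the build for this kernel
cat /sys/module/zfs/version             # version of the loaded module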

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Fix Released
Status in linux package in Ubuntu:
  Fix Committed
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Confirmed
Status in zfs-linux source package in Xenial:
  Confirmed
Status in linux source package in Bionic:
  Confirmed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in linux source package in Cosmic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, XENIAL, BIONIC ==

  Exercising ZFS with lxd with many mount/umounts can cause lockups and
  120 second timeout messages.

  == How to reproduce bug ==

  In a VM, 2 CPUs, 16GB of memory running Bionic:

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  sudo lxd init

  (and with the default init options)

  then run:

  lxd-benchmark launch --count 96 --parallel 96

  This will reliably show the lockup every time without the fix.  With
  the fix (detailed below) one cannot reproduce the lockup.

  == Fix ==

  Upstream ZFS commit

  commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
  Author: Brian Behlendorf 
  Date: Wed Jul 11 15:49:10 2018 -0700

  Fix zpl_mount() deadlock

  == Regression Potential ==

  This just changes the locking in the mount path of ZFS and will only
  affect ZFS mounts/unmounts.  The regression potential is small as this
  touches a very small code path; the change has been exhaustively
  exercised under multiple thread/CPU contention and shown not to break.

  --

  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (dir, zfs) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]:
   Would you like to create a new local network bridge? (yes/no) [default=yes]: no
   Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]:
   Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

  Now run the following to launch 48 containers in batches of 12.

  

[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-16 Thread Colin Ian King
This fix will only work once it lands in the updated kernel as well as
the user space packages, so please test once the updated kernel is also
in -proposed.
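
A sketch of how one could pull the updated kernel from -proposed for
testing (the release name and meta-package are assumptions for a Bionic
system; pinning -proposed so that only selected packages come from it is
recommended):

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu bionic-proposed main restricted universe multiverse"
sudo apt update
sudo apt install linux-generic/bionic-proposed   # kernel meta-package from -proposed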

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Fix Released
Status in linux package in Ubuntu:
  Fix Committed
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Confirmed
Status in zfs-linux source package in Xenial:
  Confirmed
Status in linux source package in Bionic:
  Confirmed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in linux source package in Cosmic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, XENIAL, BIONIC ==

  Exercising ZFS with lxd with many mount/umounts can cause lockups and
  120 second timeout messages.

  == How to reproduce bug ==

  In a VM, 2 CPUs, 16GB of memory running Bionic:

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  sudo lxd init

  (and with the default init options)

  then run:

  lxd-benchmark launch --count 96 --parallel 96

  This will reliably show the lockup every time without the fix.  With
  the fix (detailed below) one cannot reproduce the lockup.

  == Fix ==

  Upstream ZFS commit

  commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
  Author: Brian Behlendorf 
  Date: Wed Jul 11 15:49:10 2018 -0700

  Fix zpl_mount() deadlock

  == Regression Potential ==

  This just changes the locking in the mount path of ZFS and will only
  affect ZFS mounts/unmounts.  The regression potential is small as this
  touches a very small code path; the change has been exhaustively
  exercised under multiple thread/CPU contention and shown not to break.

  --

  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (dir, zfs) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]:
   Would you like to create a new local network bridge? (yes/no) [default=yes]: no
   Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]:
   Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

  Now run the following to launch 48 containers in batches of 12.

  lxd-benchmark launch --count 48 --parallel 12

  In two out of four attempts, I got the kernel errors.

  I also tried

  echo 1 

[Kernel-packages] [Bug 1773392] Re: zfs hangs on mount/unmount

2018-07-13 Thread Colin Ian King
*** This bug is a duplicate of bug 1781364 ***
https://bugs.launchpad.net/bugs/1781364

** This bug has been marked a duplicate of bug 1781364
   Kernel error "task zfs:pid blocked for more than 120 seconds"

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1773392

Title:
  zfs hangs on mount/unmount

Status in Linux:
  Fix Released
Status in linux package in Ubuntu:
  Confirmed

Bug description:
  I am running lxd 3.0 on Ubuntu 18.04 with kernel 4.15.0-22-generic and
  4.15.0-20-generic (same behaviour) with a zfs backend (0.7.5-1ubuntu16;
  also tried 0.7.9).

  Sometimes lxd hangs when I try to stop / restart or "stop && move"
  some containers. Further investigation showed that the problem is in
  zfs mount or unmount: it just hangs and lxd just waits for it. Commands
  like "zfs list" hang too.

  It seems that it is not an lxd or zfs issue, but a kernel bug?
  https://github.com/lxc/lxd/issues/4104#issuecomment-392072939

  I have one test ct that always hangs on restart, so here is info:

  dmesg:
  [ 1330.390938] INFO: task txg_sync:9944 blocked for more than 120 seconds.
  [ 1330.390994]   Tainted: P   O 4.15.0-22-generic #24-Ubuntu
  [ 1330.391044] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 1330.391101] txg_syncD0  9944  2 0x8000
  [ 1330.391105] Call Trace:
  [ 1330.391117]  __schedule+0x297/0x8b0
  [ 1330.391122]  schedule+0x2c/0x80
  [ 1330.391136]  cv_wait_common+0x11e/0x140 [spl]
  [ 1330.391141]  ? wait_woken+0x80/0x80
  [ 1330.391152]  __cv_wait+0x15/0x20 [spl]
  [ 1330.391234]  rrw_enter_write+0x3c/0xa0 [zfs]
  [ 1330.391306]  rrw_enter+0x13/0x20 [zfs]
  [ 1330.391380]  spa_sync+0x7c9/0xd80 [zfs]
  [ 1330.391457]  txg_sync_thread+0x2cd/0x4a0 [zfs]
  [ 1330.391534]  ? txg_quiesce_thread+0x3d0/0x3d0 [zfs]
  [ 1330.391543]  thread_generic_wrapper+0x74/0x90 [spl]
  [ 1330.391549]  kthread+0x121/0x140
  [ 1330.391558]  ? __thread_exit+0x20/0x20 [spl]
  [ 1330.391562]  ? kthread_create_worker_on_cpu+0x70/0x70
  [ 1330.391566]  ? kthread_create_worker_on_cpu+0x70/0x70
  [ 1330.391569]  ret_from_fork+0x35/0x40
  [ 1330.391582] INFO: task lxd:12419 blocked for more than 120 seconds.
  [ 1330.391630]   Tainted: P   O 4.15.0-22-generic #24-Ubuntu
  [ 1330.391679] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 1330.391735] lxd D0 12419  1 0x
  [ 1330.391739] Call Trace:
  [ 1330.391745]  __schedule+0x297/0x8b0
  [ 1330.391749]  schedule+0x2c/0x80
  [ 1330.391752]  rwsem_down_write_failed+0x162/0x360
  [ 1330.391808]  ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
  [ 1330.391814]  call_rwsem_down_write_failed+0x17/0x30
  [ 1330.391817]  ? call_rwsem_down_write_failed+0x17/0x30
  [ 1330.391821]  down_write+0x2d/0x40
  [ 1330.391825]  grab_super+0x30/0x90
  [ 1330.391901]  ? zpl_create+0x160/0x160 [zfs]
  [ 1330.391905]  sget_userns+0x91/0x490
  [ 1330.391908]  ? get_anon_bdev+0x100/0x100
  [ 1330.391983]  ? zpl_create+0x160/0x160 [zfs]
  [ 1330.391987]  sget+0x7d/0xa0
  [ 1330.391990]  ? get_anon_bdev+0x100/0x100
  [ 1330.392066]  zpl_mount+0xa8/0x160 [zfs]
  [ 1330.392071]  mount_fs+0x37/0x150
  [ 1330.392077]  vfs_kern_mount.part.23+0x5d/0x110
  [ 1330.392080]  do_mount+0x5ed/0xce0
  [ 1330.392083]  ? copy_mount_options+0x2c/0x220
  [ 1330.392086]  SyS_mount+0x98/0xe0
  [ 1330.392092]  do_syscall_64+0x73/0x130
  [ 1330.392096]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
  [ 1330.392099] RIP: 0033:0x4db36a
  [ 1330.392101] RSP: 002b:00c4207fa768 EFLAGS: 0216 ORIG_RAX: 00a5
  [ 1330.392104] RAX: ffda RBX:  RCX: 004db36a
  [ 1330.392106] RDX: 00c4205984cc RSI: 00c420a6ee00 RDI: 00c420a23b60
  [ 1330.392108] RBP: 00c4207fa808 R08: 00c4209d4960 R09: 
  [ 1330.392110] R10:  R11: 0216 R12: 
  [ 1330.392112] R13: 0039 R14: 0038 R15: 0080
  [ 1330.392123] INFO: task lxd:16725 blocked for more than 120 seconds.
  [ 1330.392171]   Tainted: P   O 4.15.0-22-generic #24-Ubuntu
  [ 1330.392220] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 1330.392276] lxd D0 16725  1 0x0002
  [ 1330.392279] Call Trace:
  [ 1330.392284]  __schedule+0x297/0x8b0
  [ 1330.392289]  ? irq_work_queue+0x8d/0xa0
  [ 1330.392293]  schedule+0x2c/0x80
  [ 1330.392297]  io_schedule+0x16/0x40
  [ 1330.392302]  wait_on_page_bit_common+0xd8/0x160
  [ 1330.392305]  ? page_cache_tree_insert+0xe0/0xe0
  [ 1330.392309]  __filemap_fdatawait_range+0xfa/0x160
  [ 1330.392313]  ? _cond_resched+0x19/0x40
  [ 1330.392317]  ? bdi_split_work_to_wbs+0x45/0x2c0
  [ 1330.392321]  ? _cond_resched+0x19/0x40
  [ 1330.392324]  filemap_fdatawait_keep_errors+0x1e/0x40
  [

[Kernel-packages] [Bug 1654517] Re: ZFS I/O hangs for minutes

2018-07-13 Thread Colin Ian King
@Rafael, did the workaround in comment #31 help?

** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1654517

Title:
  ZFS I/O hangs for minutes

Status in Native ZFS for Linux:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I/O for multiple programs, like `thunderbird`, `firefox`, etc., hangs
  for minutes and approx. 100 `z_rd_int_[n]` and `z_wr_int_[n]` kernel
  threads are created, `dmesg` contains

  [ 9184.451606] INFO: task txg_sync:11471 blocked for more than 120 seconds.
  [ 9184.451610]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.451611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.451612] txg_syncD a240ab3a7aa8 0 11471  2 0x
  [ 9184.451616]  a240ab3a7aa8 00ffbb6ade1f a24095148000 a240e5ca5580
  [ 9184.451618]  0046 a240ab3a8000 a240ff359200 7fff
  [ 9184.451620]  a23d36cf9050 0001 a240ab3a7ac0 bbe96b15
  [ 9184.451621] Call Trace:
  [ 9184.451627]  [] schedule+0x35/0x80
  [ 9184.451628]  [] schedule_timeout+0x22a/0x3f0
  [ 9184.451631]  [] ? __switch_to+0x2ce/0x6c0
  [ 9184.451633]  [] ? pick_next_task_fair+0x48c/0x4c0
  [ 9184.451635]  [] ? ktime_get+0x41/0xb0
  [ 9184.451636]  [] io_schedule_timeout+0xa4/0x110
  [ 9184.451644]  [] cv_wait_common+0xb2/0x130 [spl]
  [ 9184.451646]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.451650]  [] __cv_wait_io+0x18/0x20 [spl]
  [ 9184.451689]  [] zio_wait+0xfd/0x1d0 [zfs]
  [ 9184.451716]  [] dsl_pool_sync+0xb8/0x480 [zfs]
  [ 9184.451745]  [] spa_sync+0x37f/0xb30 [zfs]
  [ 9184.451747]  [] ? default_wake_function+0x12/0x20
  [ 9184.451779]  [] txg_sync_thread+0x3a5/0x600 [zfs]
  [ 9184.451807]  [] ? txg_delay+0x160/0x160 [zfs]
  [ 9184.451811]  [] thread_generic_wrapper+0x71/0x80 [spl]
  [ 9184.451815]  [] ? __thread_exit+0x20/0x20 [spl]
  [ 9184.451817]  [] kthread+0xd8/0xf0
  [ 9184.451819]  [] ret_from_fork+0x1f/0x40
  [ 9184.451821]  [] ? kthread_create_on_node+0x1e0/0x1e0
  [ 9184.451849] INFO: task mozStorage #2:21607 blocked for more than 120 seconds.
  [ 9184.451851]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.451852] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.451853] mozStorage #2   D a23fe8a5bd38 0 21607  19750 0x0004
  [ 9184.451855]  a23fe8a5bd38 00ffa240ee8feb40 a240ecf72ac0 a2403803b900
  [ 9184.451857]  bc2c02f7 a23fe8a5c000 a240aa940828 a240aa940800
  [ 9184.451858]  a240aa940980  a23fe8a5bd50 bbe96b15
  [ 9184.451860] Call Trace:
  [ 9184.451861]  [] schedule+0x35/0x80
  [ 9184.451866]  [] cv_wait_common+0x110/0x130 [spl]
  [ 9184.451868]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.451872]  [] __cv_wait+0x15/0x20 [spl]
  [ 9184.451904]  [] zil_commit.part.11+0x79/0x7a0 [zfs]
  [ 9184.451909]  [] ? tsd_hash_search.isra.0+0x46/0xa0 [spl]
  [ 9184.451913]  [] ? tsd_set+0x2b4/0x500 [spl]
  [ 9184.451914]  [] ? mutex_lock+0x12/0x30
  [ 9184.451945]  [] zil_commit+0x17/0x20 [zfs]
  [ 9184.451975]  [] zfs_fsync+0x7a/0xf0 [zfs]
  [ 9184.452005]  [] zpl_fsync+0x68/0xa0 [zfs]
  [ 9184.452008]  [] vfs_fsync_range+0x4b/0xb0
  [ 9184.452010]  [] do_fsync+0x3d/0x70
  [ 9184.452011]  [] SyS_fsync+0x10/0x20
  [ 9184.452013]  [] entry_SYSCALL_64_fastpath+0x1e/0xa8
  [ 9184.452023] INFO: task bitcoin-msghand:663 blocked for more than 120 seconds.
  [ 9184.452024]   Tainted: P   OE   4.8.0-32-generic #34-Ubuntu
  [ 9184.452025] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [ 9184.452026] bitcoin-msghand D a23eeb23bd38 0   663  26994 0x
  [ 9184.452028]  a23eeb23bd38 00ffa23eab434000 a240ecf7 a24095148000
  [ 9184.452030]  a23eeb23bd20 a23eeb23c000 a240aa940828 a240aa940800
  [ 9184.452031]  a240aa940980  a23eeb23bd50 bbe96b15
  [ 9184.452033] Call Trace:
  [ 9184.452034]  [] schedule+0x35/0x80
  [ 9184.452039]  [] cv_wait_common+0x110/0x130 [spl]
  [ 9184.452041]  [] ? wake_atomic_t_function+0x60/0x60
  [ 9184.452044]  [] __cv_wait+0x15/0x20 [spl]
  [ 9184.452074]  [] zil_commit.part.11+0x79/0x7a0 [zfs]
  [ 9184.452079]  [] ? tsd_hash_search.isra.0+0x46/0xa0 [spl]
  [ 9184.452083]  [] ? tsd_set+0x2b4/0x500 [spl]
  [ 9184.452084]  [] ? mutex_lock+0x12/0x30
  [ 9184.452113]  [] zil_commit+0x17/0x20 [zfs]
  [

[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-12 Thread Colin Ian King
** Description changed:

- == SRU Justification, BIONIC ==
+ == SRU Justification, XENIAL, BIONIC ==
  
  Exercising ZFS with lxd with many mount/umounts can cause lockups and
  120 second timeout messages.
  
  == How to reproduce bug ==
  
  In a VM, 2 CPUs, 16GB of memory running Bionic:
  
  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  sudo lxd init
  
  (and with the default init options)
  
  then run:
  
  lxd-benchmark launch --count 96 --parallel 96
  
  This will reliably show the lockup every time without the fix.  With the
  fix (detailed below) one cannot reproduce the lockup.
  
  == Fix ==
  
  Upstream ZFS commit
  
  commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
  Author: Brian Behlendorf 
  Date: Wed Jul 11 15:49:10 2018 -0700
  
- Fix zpl_mount() deadlock
+ Fix zpl_mount() deadlock
  
  == Regression Potential ==
  
  This just changes the locking in the mount path of ZFS and will only
  affect ZFS mounts/unmounts.  The regression potential is small as this
  touches a very small code path; the change has been exhaustively
  exercised under multiple thread/CPU contention and shown not to break.
  
  --
  
  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691
  
  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got
  
  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  
  Describe how to reproduce the problem
  
  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.
  
  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  
  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.
  
  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (dir, zfs) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]:
   Would you like to create a new local network bridge? (yes/no) [default=yes]: no
   Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]:
   Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
  
  Now run the following to launch 48 containers in batches of 12.
  
  lxd-benchmark launch --count 48 --parallel 12
  
  In two out of four attempts, I got the kernel errors.
  
  I also tried
  
  echo 1 >/sys/module/spl/parameters/spl_taskq_kick
  
  but did not manage to continue.
  Include any warning/errors/backtraces from the system logs
  dmesg output
  
  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991408] lxd D0  4455  1 0x
  [  725.991412] Call Trace:
  [  725.991424]  __schedule+0x297/0x8b0
  [  725.991428]  schedule+0x2c/0x80
  [  725.991429]  rwsem_down_write_failed+0x162/0x360
  [  725.991460]  ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
  [  725.991465]  call_rwsem_down_write_failed+0x17/0x30
  [  725.991468]  ? 

[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-12 Thread Colin Ian King
** Also affects: linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Fix Released
Status in linux package in Ubuntu:
  In Progress
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  New
Status in linux source package in Bionic:
  New
Status in zfs-linux source package in Bionic:
  New
Status in linux source package in Cosmic:
  In Progress
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, BIONIC ==

  Exercising ZFS with lxd with many mount/umounts can cause lockups and
  120 second timeout messages.

  == How to reproduce bug ==

  In a VM, 2 CPUs, 16GB of memory running Bionic:

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  sudo lxd init

  (and with the default init options)

  then run:

  lxd-benchmark launch --count 96 --parallel 96

  This will reliably show the lockup every time without the fix.  With
  the fix (detailed below) one cannot reproduce the lockup.

  == Fix ==

  Upstream ZFS commit

  commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
  Author: Brian Behlendorf 
  Date: Wed Jul 11 15:49:10 2018 -0700

  Fix zpl_mount() deadlock

  == Regression Potential ==

  This just changes the locking in the mount path of ZFS and will only
  affect ZFS mounts/unmounts.  The regression potential is small as this
  touches a very small code path; the change has been exhaustively
  exercised under multiple thread/CPU contention and shown not to break.

  --

  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]:
   Do you want to configure a new storage pool? (yes/no) [default=yes]:
   Name of the new storage pool [default=default]:
   Name of the storage backend to use (dir, zfs) [default=zfs]:
   Create a new ZFS pool? (yes/no) [default=yes]:
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]:
   Would you like to create a new local network bridge? (yes/no) [default=yes]: no
   Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]:
   Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

  Now run the following to launch 48 containers in batches of 12.

  lxd-benchmark launch --count 48 --parallel 12

  In two out of four attempts, I got the kernel errors.

  I also tried

  echo 1 >/sys/module/spl/parameters/spl_taskq_kick

  but did not 

[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-12 Thread Colin Ian King
** Description changed:

+ == SRU Justification, BIONIC ==
+ 
+ Exercising ZFS with lxd with many mount/umounts can cause lockups and
+ 120 second timeout messages.
+ 
+ == How to reproduce bug ==
+ 
+ In a VM, 2 CPUs, 16GB of memory running Bionic:
+ 
+ sudo apt update
+ sudo apt install lxd lxd-client lxd-tools zfsutils-linux
+ sudo lxd init
+ 
+ (and with the default init options)
+ 
+ then run:
+ 
+ lxd-benchmark launch --count 96 --parallel 96
+ 
+ This will reliably show the lockup every time without the fix.  With the
+ fix (detailed below) one cannot reproduce the lockup.
+ 
+ == Fix ==
+ 
+ Upstream ZFS commit
+ 
+ commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
+ Author: Brian Behlendorf 
+ Date: Wed Jul 11 15:49:10 2018 -0700
+ 
+ Fix zpl_mount() deadlock
+ 
+ == Regression Potential ==
+ 
+ This just changes the locking in the mount path of ZFS and will only
+ affect ZFS mounts/unmounts.  The regression potential is small as this
+ touches a very small code path; the change has been exhaustively
+ exercised under multiple thread/CPU contention and shown not to break.
+ 
+ --
+ 
  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691
  
  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the use of ZFS.
  In two out of four attempts, I got
  
  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  
  Describe how to reproduce the problem
  
- Start an Ubuntu 18.04 LTS server.
- Install LXD if not already installed.
+ Start an Ubuntu 18.04 LTS server.
+ Install LXD if not already installed.
  
  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux
  
- Configure LXD with sudo lxd init. When prompted for the storage
+ Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.
  
  $ sudo lxd init
- Would you like to use LXD clustering? (yes/no) [default=no]:
-  Do you want to configure a new storage pool? (yes/no) [default=yes]:
-  Name of the new storage pool [default=default]:
-  Name of the storage backend to use (dir, zfs) [default=zfs]:
-  Create a new ZFS pool? (yes/no) [default=yes]:
-  Would you like to use an existing block device? (yes/no) [default=no]: yes
-  Path to the existing block device: /dev/sdb
-  Would you like to connect to a MAAS server? (yes/no) [default=no]:
-  Would you like to create a new local network bridge? (yes/no) [default=yes]: no
-  Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
-  Would you like LXD to be available over the network? (yes/no) [default=no]:
-  Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
-  Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
-
- Now run the following to launch 48 containers in batches of 12.
+ Would you like to use LXD clustering? (yes/no) [default=no]:
+  Do you want to configure a new storage pool? (yes/no) [default=yes]:
+  Name of the new storage pool [default=default]:
+  Name of the storage backend to use (dir, zfs) [default=zfs]:
+  Create a new ZFS pool? (yes/no) [default=yes]:
+  Would you like to use an existing block device? (yes/no) [default=no]: yes
+  Path to the existing block device: /dev/sdb
+  Would you like to connect to a MAAS server? (yes/no) [default=no]:
+  Would you like to create a new local network bridge? (yes/no) [default=yes]: no
+  Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
+  Would you like LXD to be available over the network? (yes/no) [default=no]:
+  Would you like stale cached images to be updated automatically? 

[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-07-12 Thread Colin Ian King
** Tags removed: verification-done-bionic
** Tags added: verification-needed-xenial

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix to allow multiple mounts of zfs datasets:
  https://trello.com/c/l89Ygj28/352-allow-multiple-mounts-of-zfs-datasets

  This fix from Seth addresses the issue.

  == Regression potential ==

  Like all backports, this has a potential to be incorrectly backported
  and break the ZFS mounting. However, any breakage should be picked up
  via the ZFS smoke tests that thoroughly exercise mounting/dismounting
  options.  At worst, the mounting won't work, but this has been tested,
  so I doubt this is a possibility.
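
  A sketch of the behaviour the fix enables, assuming a pool "tank" with
  a dataset already mounted at /tank/data (all names are placeholders):

  sudo mount -t zfs tank/data /mnt/data   # second mount of the same dataset
  sudo umount /mnt/data                   # the mount at /tank/data remains

  Before the fix, the second mount returned an error instead of a new
  mount referencing the existing super block.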

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-07-12 Thread Colin Ian King
Tested with kernel 4.15.0-27-generic and the fix works. Marking as
verified for Bionic.

** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix to allow multiple mounts of zfs datasets:
  https://trello.com/c/l89Ygj28/352-allow-multiple-mounts-of-zfs-datasets

  This fix from Seth addresses the issue.

  == Regression potential ==

  Like all backports, this has a potential to be incorrectly backported
  and break the ZFS mounting. However, any breakage should be picked up
  via the ZFS smoke tests that thoroughly exercise mounting/dismounting
  options.  At worst, the mounting won't work, but this has been tested,
  so I doubt this is a possibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1781364] Re: Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-12 Thread Colin Ian King
Upstream ZFS fix:

commit ac09630d8b0bf6c92084a30fdaefd03fd0adbdc1
Author: Brian Behlendorf 
Date:   Wed Jul 11 15:49:10 2018 -0700

Fix zpl_mount() deadlock


** Also affects: linux (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: linux (Ubuntu Cosmic)
   Importance: High
 Assignee: Colin Ian King (colin-king)
   Status: In Progress

** Also affects: zfs-linux (Ubuntu Cosmic)
   Importance: High
 Assignee: Colin Ian King (colin-king)
   Status: In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Unknown
Status in linux package in Ubuntu:
  In Progress
Status in zfs-linux package in Ubuntu:
  In Progress
Status in linux source package in Bionic:
  New
Status in zfs-linux source package in Bionic:
  New
Status in linux source package in Cosmic:
  In Progress
Status in zfs-linux source package in Cosmic:
  In Progress

Bug description:
  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the 
use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]: 
   Do you want to configure a new storage pool? (yes/no) [default=yes]: 
   Name of the new storage pool [default=default]: 
   Name of the storage backend to use (dir, zfs) [default=zfs]: 
   Create a new ZFS pool? (yes/no) [default=yes]: 
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]: 
   Would you like to create a new local network bridge? (yes/no) [default=yes]: 
no
   Would you like to configure LXD to use an existing bridge or host interface? 
(yes/no) [default=no]: no
   Would you like LXD to be available over the network? (yes/no) [default=no]: 
   Would you like stale cached images to be updated automatically? (yes/no) 
[default=yes] 
   Would you like a YAML "lxd init" preseed to be printed? (yes/no) 
[default=no]: 

  Now run the following to launch 48 containers in batches of 12.

  lxd-benchmark launch --count 48 --parallel 12

  In two out of four attempts, I got the kernel errors.

  I also tried

  echo 1 >/sys/module/spl/parameters/spl_taskq_kick

  but did not manage to continue.
  Include any warning/errors/backtraces from the system logs
  dmesg output

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  725.991408] lxd D0  4455  1 0x
  [  725.991412] Call Trace:
  [  725.991424]  __schedule+0x297/0x8b0
  [  725.991428]  schedule+0x2c/0x80
  [  725.991429]  rwsem_down_write_failed+0x162/0x360
  [  725.991460]  ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
  [  725.

[Kernel-packages] [Bug 1781364] [NEW] Kernel error "task zfs:pid blocked for more than 120 seconds"

2018-07-12 Thread Colin Ian King
948   0%  0.25K   5344   64   85504K kmalloc-256
277032 276822   0%  0.19K   6596   42   52768K cred_jar
248352 241634   0%  0.66K   5174   48  165568K proc_inode_cache
248320 233052   0%  0.01K    485  512    1940K kmalloc-8
214984 143177   0%  0.57K   3839   56  122848K radix_tree_node

** Affects: linux
 Importance: Unknown
 Status: Unknown

** Affects: linux (Ubuntu)
 Importance: High
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Affects: zfs-linux (Ubuntu)
 Importance: High
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Affects: linux (Ubuntu Bionic)
 Importance: Undecided
 Status: New

** Affects: zfs-linux (Ubuntu Bionic)
 Importance: Undecided
     Status: New

** Affects: linux (Ubuntu Cosmic)
 Importance: High
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Affects: zfs-linux (Ubuntu Cosmic)
 Importance: High
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Bug watch added: Github Issue Tracker for ZFS #7691
   https://github.com/zfsonlinux/zfs/issues/7691

** Also affects: linux via
   https://github.com/zfsonlinux/zfs/issues/7691
   Importance: Unknown
   Status: Unknown

** Also affects: zfs-linux (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: zfs-linux (Ubuntu)
   Status: New => In Progress

** Changed in: linux (Ubuntu)
   Status: New => In Progress

** Changed in: linux (Ubuntu)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => High

** Changed in: linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1781364

Title:
  Kernel error "task zfs:pid blocked for more than 120 seconds"

Status in Linux:
  Unknown
Status in linux package in Ubuntu:
  In Progress
Status in zfs-linux package in Ubuntu:
  In Progress
Status in linux source package in Bionic:
  New
Status in zfs-linux source package in Bionic:
  New
Status in linux source package in Cosmic:
  In Progress
Status in zfs-linux source package in Cosmic:
  In Progress

Bug description:
  ZFS bug report: https://github.com/zfsonlinux/zfs/issues/7691

  "I am using LXD containers that are configured to use a ZFS storage backend.
  I create many containers using a benchmark tool, which probably stresses the 
use of ZFS.
  In two out of four attempts, I got

  [  725.970508] INFO: task lxd:4455 blocked for more than 120 seconds.
  [  725.976730]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  725.983551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  725.991624] INFO: task txg_sync:4202 blocked for more than 120 seconds.
  [  725.998264]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.005071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.013313] INFO: task lxd:99919 blocked for more than 120 seconds.
  [  726.019609]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.026418] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.034560] INFO: task zfs:100513 blocked for more than 120 seconds.
  [  726.040936]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.047746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [  726.055791] INFO: task zfs:100584 blocked for more than 120 seconds.
  [  726.062170]   Tainted: P   O 4.15.0-20-generic #21-Ubuntu
  [  726.068979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.

  Describe how to reproduce the problem

  Start an Ubuntu 18.04 LTS server.
  Install LXD if not already installed.

  sudo apt update
  sudo apt install lxd lxd-client lxd-tools zfsutils-linux

  Configure LXD with sudo lxd init. When prompted for the storage
  backend, select ZFS and specify an empty disk.

  $ sudo lxd init
  Would you like to use LXD clustering? (yes/no) [default=no]: 
   Do you want to configure a new storage pool? (yes/no) [default=yes]: 
   Name of the new storage pool [default=default]: 
   Name of the storage backend to use (dir, zfs) [default=zfs]: 
   Create a new ZFS pool? (yes/no) [default=yes]: 
   Would you like to use an existing block device? (yes/no) [default=no]: yes
   Path to the existing block device: /dev/sdb
   Would you like to connect to a MAAS server? (yes/no) [default=no]: 
   Would you like to create a new local network bridge? (yes/no) [default=y

[Kernel-packages] [Bug 1780137] Re: [Regression] EXT4-fs error (device sda1): ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

2018-07-06 Thread Colin Ian King
It would be useful to see if a non-SMP boot causes the same issue; if
it only occurs in SMP boots then we know it's a lower-level locking/race
style of problem.
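
For example, SMP can be disabled with the standard nosmp or maxcpus
kernel parameters (illustrative GRUB snippet; your existing options may
differ):

  # in /etc/default/grub, then run update-grub and reboot
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nosmp"
  # or restrict the kernel to a single CPU:
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash maxcpus=1"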

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1780137

Title:
  [Regression] EXT4-fs error (device sda1):
  ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Bionic:
  Triaged

Bug description:
  We're seeing a very reproducible regression in the bionic kernel
  triggered by the stress-ng chdir test performed by the Ubuntu
  certification suite. We see this on both the HiSilicon D05 arm64
  server and the HiSilicon D06 arm64 server. We have been unable to
  reproduce on other servers so far.

  [Test Case]
  $ sudo apt-add-repository -y ppa:hardware-certification/public
  $ sudo apt install -y canonical-certification-server
  $ sudo mkfs.ext4 /dev/sda1 (Obviously, this should not be your root disk!!)
  $ sudo /usr/lib/plainbox-provider-checkbox/bin/disk_stress_ng sda --base-time 
240 --really-run

  This test runs a series of stress-ng tests against /dev/sda, and fails
  on the "chdir" test. To speed up reproduction, reduce the test list to
  just "chdir" in the disk_stress_ng script. Attempts to reproduce this
  directly with stress-ng have failed - presumably because of other
  environment setup that this script performs (e.g. setting aio-max-nr
  to 524288).
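
  That tweak can be applied manually when trying to reproduce outside the
  script (the value is the one quoted above):

    sudo sysctl -w fs.aio-max-nr=524288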

  Our reproduction test uses a non-root disk, because the test can lead
  to corruption, and mkfs.ext4's the partition just before running the
  test, to start from a pristine fs state.

  I bisected this down to the following commit:

  commit 555bc9b1421f10d94a1192c7eea4a59faca3e711
  Author: Theodore Ts'o 
  Date:   Mon Feb 19 14:16:47 2018 -0500

  ext4: don't update checksum of new initialized bitmaps

  BugLink: http://bugs.launchpad.net/bugs/1773233

  commit 044e6e3d74a3d7103a0c8a9305dfd94d64000660 upstream.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1780137/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1780137] Re: [Regression] EXT4-fs error (device sda1): ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

2018-07-05 Thread Colin Ian King
It may be worth grabbing a copy of /proc/sys on a clean boot and then a
copy after the sysctl changes, so we can get an idea of any specific
tweaks that may have occurred.
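
For example (file paths are illustrative):

  sudo sysctl -a > /tmp/sysctl-clean.txt   # on a clean boot
  # ... after the suite has made its sysctl changes ...
  sudo sysctl -a > /tmp/sysctl-after.txt
  diff -u /tmp/sysctl-clean.txt /tmp/sysctl-after.txt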

Just to note, I've been running the stress-ng command as noted in
comment #3 on a 24 CPU ARM64 Synquacer box with the alleged faulty
kernel and cannot reproduce the issue.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1780137

Title:
  [Regression] EXT4-fs error (device sda1):
  ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Bionic:
  Triaged

Bug description:
  We're seeing a very reproducible regression in the bionic kernel
  triggered by the stress-ng chdir test performed by the Ubuntu
  certification suite. Platform is a HiSilicon D05 arm64 server, but we
  don't have reason to believe it is platform specific at this time.

  [Test Case]
  $ sudo apt-add-repository -y ppa:hardware-certification/public
  $ sudo apt install -y canonical-certification-server
  $ sudo mkfs.ext4 /dev/sda1 (Obviously, this should not be your root disk!!)
  $ sudo /usr/lib/plainbox-provider-checkbox/bin/disk_stress_ng sda --base-time 
240 --really-run

  This test runs a series of stress-ng tests against /dev/sda, and fails
  on the "chdir" test. To speed up reproduction, reduce the test list to
  just "chdir" in the disk_stress_ng script. Attempts to reproduce this
  directly with stress-ng have failed - presumably because of other
  environment setup that this script performs (e.g. setting aio-max-nr
  to 524288).

  Our reproduction test uses a non-root disk, because the test can lead
  to corruption, and mkfs.ext4's the partition just before running the
  test, to start from a pristine fs state.

  I bisected this down to the following commit:

  commit 555bc9b1421f10d94a1192c7eea4a59faca3e711
  Author: Theodore Ts'o 
  Date:   Mon Feb 19 14:16:47 2018 -0500

  ext4: don't update checksum of new initialized bitmaps
  
  BugLink: http://bugs.launchpad.net/bugs/1773233
  
  commit 044e6e3d74a3d7103a0c8a9305dfd94d64000660 upstream.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1780137/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1780137] Re: [Regression] EXT4-fs error (device sda1): ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

2018-07-05 Thread Colin Ian King
What is the stress-ng command that is being run by /usr/lib/plainbox-
provider-checkbox/bin/disk_stress_ng? Without knowing that, it's hard to
figure out the initial stressor conditions.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1780137

Title:
  [Regression] EXT4-fs error (device sda1):
  ext4_validate_inode_bitmap:99: comm stress-ng: Corrupt inode bitmap

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Bionic:
  Triaged

Bug description:
  We're seeing a very reproducible regression in the bionic kernel
  triggered by the stress-ng chdir test performed by the Ubuntu
  certification suite. Platform is a HiSilicon D05 arm64 server, but we
  don't have reason to believe it is platform specific at this time.

  [Test Case]
  $ sudo apt-add-repository -y ppa:hardware-certification/public
  $ sudo apt install -y canonical-certification-server
  $ sudo mkfs.ext4 /dev/sda1 (Obviously, this should not be your root disk!!)
  $ sudo /usr/lib/plainbox-provider-checkbox/bin/disk_stress_ng sda --base-time 
240 --really-run

  This test runs a series of stress-ng tests against /dev/sda, and fails
  on the "chdir" test. To speed up reproduction, reduce the test list to
  just "chdir" in the disk_stress_ng script. Attempts to reproduce this
  directly with stress-ng have failed - presumably because of other
  environment setup that this script performs (e.g. setting aio-max-nr
  to 524288).

  Our reproduction test uses a non-root disk, because the test can lead
  to corruption, and mkfs.ext4's the partition just before running the
  test, to start from a pristine fs state.

  I bisected this down to the following commit:

  commit 555bc9b1421f10d94a1192c7eea4a59faca3e711
  Author: Theodore Ts'o 
  Date:   Mon Feb 19 14:16:47 2018 -0500

  ext4: don't update checksum of new initialized bitmaps
  
  BugLink: http://bugs.launchpad.net/bugs/1773233
  
  commit 044e6e3d74a3d7103a0c8a9305dfd94d64000660 upstream.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1780137/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1779827] Re: failure to boot with linux-image-4.15.0-24-generic

2018-07-03 Thread Colin Ian King
** Summary changed:

- failure to boot with inux-image-4.15.0-24-generic
+ failure to boot with linux-image-4.15.0-24-generic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1779827

Title:
  failure to boot with linux-image-4.15.0-24-generic

Status in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Triaged
Status in The Bionic Beaver:
  Confirmed
Status in linux source package in Bionic:
  Triaged

Bug description:
  This was the last OK before my 18.04 hangs, after an update this
  morning at 07:00 AM CEST.

  The last OK in the boot log was "Started gnome display manager.
  dispatcher service .. tem changes.pp link was shut down".

  Tried installing lightdm from the command line and the response was
  "latest already installed".

  Probably whatever comes after that last OK is the error, and here I
  have lots of guesses.

  Any ideas? I need to do some work and I may not be able to wait long.

  Searched and browsed and am now close to giving up. Yeah, it is a
  Lenovo.

  Guys: turn off auto update, it is a machine killer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1779827/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1779758] Re: zfs utils and kernel version mismath

2018-07-03 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Won't Fix

** Summary changed:

- zfs  utils and kernel version mismath
+ zfs  utils and kernel version mismatch

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1779758

Title:
  zfs  utils and kernel version mismatch

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  I have 16.04 installed with HWE and did a dist-upgrade. This installed
  a new kernel 4.15 with zfs version 0.7.5 but did not upgrade the
  zfsutils-linux package.

  I now have a mismatch between zfsutils and the zfs module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1779758/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1779758] Re: zfs utils and kernel version mismath

2018-07-02 Thread Colin Ian King
The zfs kernel module and user space tools have had a compatibility shim
added in the userspace ioctl interface to the kernel to ensure
interoperability with the older userspace tools and the kernel driver.
We have extensively tested the userspace side to confirm that this
works, however, if you do find interoperability issues please file a
separate bug for any issues you find.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1779758

Title:
  zfs  utils and kernel version mismath

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I have 16.04 installed with HWE and did a dist-upgrade. This installed
  a new kernel 4.15 with zfs version 0.7.5 but did not upgrade the
  zfsutils-linux package.

  I now have a mismatch between zfsutils and the zfs module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1779758/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1779458] Re: zfs-dkms 0.6.5.6-0ubuntu24: zfs kernel module failed to build

2018-06-30 Thread Colin Ian King
ZFS is not supported on 32 bit platforms because of the need for large
amounts of memory for various ZFS caches. If you require ZFS, please use
a 64 bit kernel.  Marking as Won't fix.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1779458

Title:
  zfs-dkms 0.6.5.6-0ubuntu24: zfs kernel module failed to build

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  Even with Wi-Fi turned on, it does not connect to the network and asks
  for the password.

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: zfs-dkms 0.6.5.6-0ubuntu24
  ProcVersionSignature: Ubuntu 4.4.0-130.156-generic 4.4.134
  Uname: Linux 4.4.0-130-generic i686
  ApportVersion: 2.20.1-0ubuntu2.18
  Architecture: i386
  DKMSBuildLog:
   DKMS make.log for zfs-0.6.5.6 for kernel 4.4.0-130-generic (i686)
   Fri Jun 29 22:37:31 -03 2018
   make: *** No targets specified and no makefile found.  Stop.
  DKMSKernelVersion: 4.4.0-130-generic
  Date: Fri Jun 29 22:38:59 2018
  InstallationDate: Installed on 2016-07-16 (714 days ago)
  InstallationMedia: Ubuntu-Kylin 16.04 LTS "Xenial Xerus" - Release i386 
(20160420.1)
  PackageVersion: 0.6.5.6-0ubuntu24
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.4
   apt  1.2.27
  SourcePackage: zfs-linux
  Title: zfs-dkms 0.6.5.6-0ubuntu24: zfs kernel module failed to build
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1779458/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-29 Thread Colin Ian King
We need to wait until the kernel is updated with the fix before this can
be tested, as the fixes require kernel + zfs together to work.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix https://trello.com/c/l89Ygj28/352-allow-
  multiple-mounts-of-zfs-datasets to allow multiple mounts

  This fix from Seth addresses this issue.

  == Regression potential ==

  Like all backports, this has the potential to be incorrectly backported
  and break ZFS mounting. However, any breakage should be picked up by
  the ZFS smoke tests, which thoroughly exercise the mounting/unmounting
  options. At worst, mounting won't work, but this has been tested, so
  that is unlikely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-25 Thread Colin Ian King
** Changed in: linux (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: zfs-linux (Ubuntu Xenial)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  In Progress
Status in zfs-linux source package in Xenial:
  In Progress
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix https://trello.com/c/l89Ygj28/352-allow-
  multiple-mounts-of-zfs-datasets to allow multiple mounts

  This fix from Seth addresses this issue.

  == Regression potential ==

  Like all backports, this has the potential to be incorrectly backported
  and break ZFS mounting. However, any breakage should be picked up by
  the ZFS smoke tests, which thoroughly exercise the mounting/unmounting
  options. At worst, mounting won't work, but this has been tested, so
  that is unlikely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-25 Thread Colin Ian King
This SRU requires an update in zfsutils-linux, the change sync'd to the
kernel, and the kernel update to complete the fix.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  New
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix https://trello.com/c/l89Ygj28/352-allow-
  multiple-mounts-of-zfs-datasets to allow multiple mounts

  This fix from Seth addresses this issue.

  == Regression potential ==

  Like all backports, this has the potential to be incorrectly backported
  and break ZFS mounting. However, any breakage should be picked up by
  the ZFS smoke tests, which thoroughly exercise the mounting/unmounting
  options. At worst, mounting won't work, but this has been tested, so
  that is unlikely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-25 Thread Colin Ian King
** Description changed:

- Pull upstream fix https://trello.com/c/l89Ygj28/352-allow-multiple-
- mounts-of-zfs-datasets to allow multiple mounts
+ === SRU Justification, Xenial ==
  
  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
- involve unmounting and remounting the dataset.  This fix from Seth
- addresses this issue
+ involve unmounting and remounting the dataset.
+ 
+ == Fix ==
+ 
+ Backport of upstream fix https://trello.com/c/l89Ygj28/352-allow-
+ multiple-mounts-of-zfs-datasets to allow multiple mounts
+ 
+ This fix from Seth addresses this issue
+ 
+ == Regression potential ==
+ 
+ Like all backports, this has a potential to be incorrectly backported
+ and break the ZFS mounting. However, any breakage should be picked up
+ via the ZFS smoke tests that thoroughly exercise mounting/dismounting
+ options.  At worst, the mounting won't work, but this has been tested,
+ so I doubt this is a possibility.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  New
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification, Xenial ==

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.

  == Fix ==

  Backport of upstream fix https://trello.com/c/l89Ygj28/352-allow-
  multiple-mounts-of-zfs-datasets to allow multiple mounts

  This fix from Seth addresses this issue.

  == Regression potential ==

  Like all backports, this has the potential to be incorrectly backported
  and break ZFS mounting. However, any breakage should be picked up by
  the ZFS smoke tests, which thoroughly exercise the mounting/unmounting
  options. At worst, mounting won't work, but this has been tested, so
  that is unlikely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-25 Thread Colin Ian King
** Changed in: linux (Ubuntu Xenial)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** No longer affects: linux (Ubuntu Artful)

** No longer affects: zfs-linux (Ubuntu Artful)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  New
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  Pull upstream fix https://trello.com/c/l89Ygj28/352-allow-multiple-
  mounts-of-zfs-datasets to allow multiple mounts

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.  This fix from Seth
  addresses this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1759848] Re: Allow multiple mounts of zfs datasets

2018-06-25 Thread Colin Ian King
** Also affects: linux (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1759848

Title:
  Allow multiple mounts of zfs datasets

Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  New
Status in zfs-linux source package in Xenial:
  New
Status in linux source package in Artful:
  New
Status in zfs-linux source package in Artful:
  New
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  Pull upstream fix https://trello.com/c/l89Ygj28/352-allow-multiple-
  mounts-of-zfs-datasets to allow multiple mounts

  An attempt to mount an already mounted zfs dataset should return a new
  mount referencing the existing super block, but instead it returns an
  error. Operations such as bind mounts and unsharing mount namespaces
  create new mounts for the sb, which can cause operations to fail which
  involve unmounting and remounting the dataset.  This fix from Seth
  addresses this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1759848/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1725339] Re: ZFS fails to import pools on reboot due to udev settle failed

2018-06-25 Thread Colin Ian King
OK, I'll mark that as fixed-released.

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725339

Title:
  ZFS fails to import pools on reboot due to udev settle failed

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  ZFS fails to import pools on reboot because udev settle failed. I have
  a buggy usb controller (or port) which seems to cause udev settle to
  fail, but nothing is plugged into usb, so it's not needed. Should that
  prevent zfs from mounting from the hard drive on boot?

  Oct 20 18:08:52 mite systemd[1]: zfs-import-cache.service: Job zfs-
  import-cache.service/start failed with result 'dependency'.

  Oct 20 18:07:52 mite systemd-udevd[379]: seq 2916 
'/devices/pci:00/:00:14.0/usb1' is taking a long time   
   
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Main process 
exited, code=exited, status=1/FAILURE   
  
  Oct 20 18:08:52 mite systemd[1]: Failed to start udev Wait for Complete 
Device Initialization.  
 
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Unit entered 
failed state.   
  
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Failed with 
result 'exit-code'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725339/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1772024] Re: linux 4.13.0-42.47 ADT test failure with linux 4.13.0-42.47 (nbd-smoke-test)

2018-06-21 Thread Colin Ian King
** Changed in: linux (Ubuntu Cosmic)
 Assignee: Colin Ian King (colin-king) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772024

Title:
  linux 4.13.0-42.47 ADT test failure with linux 4.13.0-42.47 (nbd-
  smoke-test)

Status in linux package in Ubuntu:
  In Progress
Status in nbd package in Ubuntu:
  Fix Released
Status in linux source package in Trusty:
  New
Status in nbd source package in Trusty:
  Invalid
Status in linux source package in Xenial:
  New
Status in nbd source package in Xenial:
  Invalid
Status in linux source package in Artful:
  New
Status in nbd source package in Artful:
  Incomplete
Status in linux source package in Bionic:
  New
Status in nbd source package in Bionic:
  Fix Committed
Status in linux source package in Cosmic:
  In Progress
Status in nbd source package in Cosmic:
  Fix Released

Bug description:
  [Impact]
  nbd-server will crash when a client connects to it, if it was started without 
a server name. It may also leak fds if fork fails.

  [Test Case]
  Running the server with an empty config file and an image on the command 
line, and then starting a local client will crash the server. After the fix, it 
doesn't crash anymore, and a filesystem may be created and used after that.
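
  A rough sketch of that reproduction (image path, port and exact
  invocation are illustrative, not taken from the actual test):

    truncate -s 100M /tmp/nbd.img
    # empty config, image given on the command line, no export name
    nbd-server -C /dev/null 10809 /tmp/nbd.img
    # connecting a local client crashes the unfixed server
    sudo nbd-client localhost 10809 /dev/nbd0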

  [Regression Potential]
  The fix also implies a small package change, to use quilt. It has built
just fine, and many tests have been run on top of those packages; no
failure appeared to be the result of the userspace changes, only known
failures in the kernel driver.

  
  Testing failed on:
  amd64: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/amd64/l/linux/20180518_040741_b3b54@/log.gz
  i386: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/i386/l/linux/20180518_050911_b3b54@/log.gz
  ppc64el: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/ppc64el/l/linux/20180518_040734_b3b54@/log.gz
  s390x: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/s390x/l/linux/20180518_035527_b3b54@/log.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1772024/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-15 Thread Colin Ian King
Tested with 0.6.5.6-0ubuntu23 inside a xenial lxc container and the 10
second delay is now fixed.

** Tags added: ve

** Changed in: zfs-linux (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu Artful)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu Xenial)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10 seconds timeout, so check for
  this scenario and set the timeout to default to 0 seconds.

  == Regression Potential ==

  Minimal, this caters for a corner case inside a containerized
  environment, the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.
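
  The wait can be observed with something like (illustrative):

    strace -f -e trace=open,openat zpool list 2>&1 | grep /dev/zfs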

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-15 Thread Colin Ian King
Tested with 0.6.5.11-1ubuntu3.6 inside an artful lxc container and the
10 second delay is now fixed.

** Tags added: verification-done-artful

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10 seconds timeout, so check for
  this scenario and set the timeout to default to 0 seconds.

  == Regression Potential ==

  Minimal, this caters for a corner case inside a containerized
  environment, the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-15 Thread Colin Ian King
Tested with 0.7.5-1ubuntu16.2 inside a bionic lxc container and the 10
second delay is now fixed.

** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10 seconds timeout, so check for
  this scenario and set the timeout to default to 0 seconds.

  == Regression Potential ==

  Minimal, this caters for a corner case inside a containerized
  environment, the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-11 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu Bionic)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu Xenial)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu Artful)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Fix Released

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10 seconds timeout, so check for
  this scenario and set the timeout to default to 0 seconds.

  == Regression Potential ==

  Minimal, this caters for a corner case inside a containerized
  environment, the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-08 Thread Colin Ian King
...plus it can be easily checked without any special privileges.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10 seconds timeout, so check for
  this scenario and set the timeout to default to 0 seconds.

  == Regression Potential ==

  Minimal, this caters for a corner case inside a containerized
  environment, the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When outside a lxd container with zfs storage, zfs list or zpool
  status either returns or reports what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-08 Thread Colin Ian King
I am pretty confident that systemd will create "/run/systemd/container"
when we're inside a container, so I'll check for that.
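
A minimal sketch of that check (shell for illustration only; the real
change lives in the zfsutils code and the variable name is made up):

  if [ -e /run/systemd/container ]; then
      timeout_secs=0    # inside a container: don't wait for /dev/zfs
  else
      timeout_secs=10   # otherwise keep the 10 second grace period
  fi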

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns promptly or reports what it is doing.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, with no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10-second timeout, so check for
  this scenario and default the timeout to 0 seconds.

  == Regression Potential ==

  Minimal; this caters for a corner case inside a containerized
  environment, and the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait 10 seconds for
  it to appear, but provide no command-line switch to disable the wait
  or to make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-07 Thread Colin Ian King
OK, I hadn't realized that; so much for my trivial testing. Is there
any simple way for a process to detect that it is inside a
containerized environment without any special root privileges?
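
For what it's worth, two checks that usually work unprivileged under
lxd (both rely on assumptions about the container runtime, so treat
them as illustrations rather than guarantees):

  # exits 0 and prints the technology (e.g. lxc) inside a container
  systemd-detect-virt --container && echo "in a container"

  # marker file that systemd creates for containers
  [ -e /run/systemd/container ] && echo "in a container"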

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns promptly or reports what it is doing.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, with no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10-second timeout, so check for
  this scenario and default the timeout to 0 seconds.

  == Regression Potential ==

  Minimal; this caters for a corner case inside a containerized
  environment, and the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait 10 seconds for
  it to appear, but provide no command-line switch to disable the wait
  or to make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-07 Thread Colin Ian King
I'm confused too, then: how is it that when I start a process inside
the container, such as from a shell, it can access this environment
variable and my fix works?
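
One way to see where $container is actually set is to compare the
current process environment with PID 1's; note that reading
/proc/1/environ normally needs root, which is part of the problem
being discussed here:

  # value seen by a shell-started process (may well be empty)
  echo "shell sees: container='$container'"

  # value in the container init's environment (usually needs root)
  sudo cat /proc/1/environ | tr '\0' '\n' | grep '^container=' || true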

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Committed
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns promptly or reports what it is doing.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, with no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10-second timeout, so check for
  this scenario and default the timeout to 0 seconds.

  == Regression Potential ==

  Minimal; this caters for a corner case inside a containerized
  environment, and the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait 10 seconds for
  it to appear, but provide no command-line switch to disable the wait
  or to make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-07 Thread Colin Ian King
** Description changed:

+ == SRU Justification, Xenial, Artful, Bionic ==
+ 
+ When outside a lxd container with zfs storage, zfs list or zpool status
+ either returns or reports what's going on.
+ 
+ When inside a lxd container with zfs storage, zfs list or zpool status
+ appears to hang, no output for 10 seconds.
+ 
+ == Fix ==
+ 
+ Inside a container we don't need the 10 seconds timeout, so check for
+ this scenario and set the timeout to default to 0 seconds.
+ 
+ == Regression Potential ==
+ 
+ Minimal, this caters for a corner case inside a containerized
+ environment, the fix will not alter the behaviour for other cases.
+ 
+ -
+ 
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04
  
  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
-   Installed: 0.6.5.6-0ubuntu19
-   Candidate: 0.6.5.6-0ubuntu19
-   Version table:
-  *** 0.6.5.6-0ubuntu19 500
- 500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
- 100 /var/lib/dpkg/status
+   Installed: 0.6.5.6-0ubuntu19
+   Candidate: 0.6.5.6-0ubuntu19
+   Version table:
+  *** 0.6.5.6-0ubuntu19 500
+ 500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
+ 100 /var/lib/dpkg/status
  
  3. When inside a lxd container with zfs storage, zfs list or zpool
  status either return or report what's going on.
  
  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.
  
  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.
  
  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
-  TERM=xterm-256color
-  PATH=(custom, no user)
-  LANG=C.UTF-8
+  TERM=xterm-256color
+  PATH=(custom, no user)
+  LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  New
Status in zfs-linux source package in Artful:
  New
Status in zfs-linux source package in Bionic:
  New
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  == SRU Justification, Xenial, Artful, Bionic ==

  When outside a lxd container with zfs storage, zfs list or zpool
  status either returns promptly or reports what it is doing.

  When inside a lxd container with zfs storage, zfs list or zpool status
  appears to hang, with no output for 10 seconds.

  == Fix ==

  Inside a container we don't need the 10-second timeout, so check for
  this scenario and default the timeout to 0 seconds.

  == Regression Potential ==

  Minimal; this caters for a corner case inside a containerized
  environment, and the fix will not alter the behaviour for other cases.

  -

  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu19
    Candidate: 0.6.5.6-0ubuntu19
    Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait 10 seconds for
  it to appear, but provide no command-line switch to disable the wait
  or to make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-06-07 Thread Colin Ian King
I'd rather not change the default behaviour for non-container
environments as I want to be conservative here, so instead I'm going to
check if $container exists and set the timeout to zero for this
specific case.
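
The real change is in the libzfs C code, but the intended behaviour
can be sketched in shell, assuming $container is visible to the
process:

  timeout=10
  [ -n "$container" ] && timeout=0   # no wait inside a container
  i=0
  while [ "$i" -lt "$timeout" ] && [ ! -e /dev/zfs ]; do
      sleep 1
      i=$((i + 1))
  done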

** Also affects: zfs-linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: lxd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: lxd (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Cosmic)
   Importance: High
 Assignee: Colin Ian King (colin-king)
   Status: Triaged

** Also affects: lxd (Ubuntu Cosmic)
   Importance: Undecided
   Status: Invalid

** Also affects: zfs-linux (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: lxd (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** No longer affects: lxd (Ubuntu Xenial)

** No longer affects: lxd (Ubuntu Artful)

** No longer affects: lxd (Ubuntu Bionic)

** No longer affects: lxd (Ubuntu Cosmic)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  Invalid
Status in zfs-linux package in Ubuntu:
  Triaged
Status in zfs-linux source package in Xenial:
  New
Status in zfs-linux source package in Artful:
  New
Status in zfs-linux source package in Bionic:
  New
Status in zfs-linux source package in Cosmic:
  Triaged

Bug description:
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu19
Candidate: 0.6.5.6-0ubuntu19
Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait 10 seconds for
  it to appear, but provide no command-line switch to disable the wait
  or to make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1774571] Re: sigq test from stress-ng will get stuck with Artful kernel

2018-06-01 Thread Colin Ian King
** Also affects: stress-ng (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: stress-ng (Ubuntu)
   Status: New => Fix Released

** Changed in: stress-ng (Ubuntu)
   Status: Fix Released => Fix Committed

** Changed in: stress-ng (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: stress-ng (Ubuntu)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1774571

Title:
  sigq test from stress-ng will get stuck with Artful kernel

Status in ubuntu-kernel-tests:
  Fix Released
Status in linux package in Ubuntu:
  Invalid
Status in stress-ng package in Ubuntu:
  Fix Committed

Bug description:
  This issue can be spotted on s390x, ARM64 and AMD64 with the Artful kernel.

  10:02:39 DEBUG| [stdout] sigpending STARTING
  10:02:44 DEBUG| [stdout] sigpending RETURNED 0
  10:02:44 DEBUG| [stdout] sigpending PASSED
  10:02:44 DEBUG| [stdout] sigpipe STARTING
  10:02:49 DEBUG| [stdout] sigpipe RETURNED 0
  10:02:49 DEBUG| [stdout] sigpipe PASSED
  10:02:49 DEBUG| [stdout] sigq STARTING
  ...

  This is the last entry related to stress-ng in the syslog of an s390x node:
  May 31 10:46:39 s2lp6g004 stress-ng: system: 's2lp6g004' Linux 
4.13.0-45-generic #50-Ubuntu SMP Wed May 30 08:21:19 UTC 2018 s390x
  May 31 10:46:39 s2lp6g004 stress-ng: memory (MB): total 1805.91, free 
1409.97, shared 6.06, buffer 134.56, swap 2001.63, free swap 1890.19
  May 31 10:46:39 s2lp6g004 stress-ng: info:  [20125] dispatching hogs: 4 sigq

  No more stress-ng related messages after this.

  ubuntu@s2lp6g004:/tmp$ cat stress-9234.log 
  stress-ng: debug: [20125] 2 processors online, 2 processors configured
  stress-ng: info:  [20125] dispatching hogs: 4 sigq
  stress-ng: debug: [20125] cache allocate: reducing cache level from L3 (too 
high) to L2
  stress-ng: debug: [20125] cache allocate: default cache size: 2048K
  stress-ng: debug: [20125] starting stressors
  stress-ng: debug: [20126] stress-ng-sigq: started [20126] (instance 0)
  stress-ng: debug: [20125] 4 stressors spawned
  stress-ng: debug: [20129] stress-ng-sigq: started [20129] (instance 3)
  stress-ng: debug: [20128] stress-ng-sigq: started [20128] (instance 2)
  stress-ng: debug: [20127] stress-ng-sigq: started [20127] (instance 1)
  stress-ng: debug: [20126] stress-ng-sigq: parent sent termination notice
  stress-ng: debug: [20130] stress-ng-sigq: child got termination notice

  ubuntu@s2lp6g004:/tmp$ ps aux | grep stress-ng
  root  5948  0.0  0.0  11456   712 ?SMay31   0:00 ./stress-ng 
-v -t 5 --enosys 4 --ignite-cpu --syslog --verbose --verify --oomable
  root  7654  0.0  0.0  11456   712 ?SMay31   0:00 ./stress-ng 
-v -t 5 --enosys 4 --ignite-cpu --syslog --verbose --verify --oomable
  root  7728  0.0  0.0  11456   712 ?SMay31   0:00 ./stress-ng 
-v -t 5 --enosys 4 --ignite-cpu --syslog --verbose --verify --oomable
  root  7740  0.0  0.0  11456   712 ?SMay31   0:00 ./stress-ng 
-v -t 5 --enosys 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20125  0.0  0.2  13612  5008 ?SL   May31   0:00 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20126  1.0  0.1  13612  1852 ?SMay31  12:03 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20127  1.0  0.1  13612  1852 ?SMay31  12:11 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20128  1.0  0.1  13612  1852 ?SMay31  12:04 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20129  1.0  0.1  13612  1852 ?SMay31  12:06 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20131  0.0  0.0  13612   312 ?SMay31   0:00 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20132  0.0  0.0  13612   312 ?SMay31   0:00 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable
  root 20133  0.0  0.0  13612   312 ?SMay31   0:00 ./stress-ng 
-v -t 5 --sigq 4 --ignite-cpu --syslog --verbose --verify --oomable

  ProblemType: Bug
  DistroRelease: Ubuntu 17.10
  Package: linux-image-4.13.0-45-generic 4.13.0-45.50
  ProcVersionSignature: Ubuntu 4.13.0-45.50-generic 4.13.16
  Uname: Linux 4.13.0-45-generic s390x
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  AlsaDevices: Error: command ['ls', '-l', '/dev/snd/'] failed with exit code 
2: ls: cannot access '/dev/snd/': No such file or directory
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
  ApportVersion: 2.20.7-0ubuntu3.9
  Architecture: s390x
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 
'arecord'
 

[Kernel-packages] [Bug 1751213] Re: kernel security test report that the lttng_probe_writeback module is tainted on Bionic s390x

2018-05-29 Thread Colin Ian King
That makes sense. We need to force remove that module then.
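
Something along these lines should do it, assuming the kernel was
built with CONFIG_MODULE_FORCE_UNLOAD (forced unload is inherently
unsafe, so test systems only):

  sudo rmmod --force lttng_probe_writeback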

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1751213

Title:
  kernel security test report that the lttng_probe_writeback module is
  tainted on Bionic s390x

Status in lttng-modules:
  New
Status in linux package in Ubuntu:
  In Progress

Bug description:
  This issue was only spotted on Bionic s390x instances.

    FAIL: test_140_kernel_modules_not_tainted (__main__.KernelSecurityTest)
    kernel modules are not marked with a taint flag (especially 'E' for 
TAINT_UNSIGNED_MODULE)
    --
    Traceback (most recent call last):
  File "./test-kernel-security.py", line 1727, in 
test_140_kernel_modules_not_tainted
    self.fail('Module \'%s\' is tainted: %s' % (fields[0], last_field))
    AssertionError: Module 'lttng_probe_writeback' is tainted: (OE)

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: linux-image-4.15.0-10-generic 4.15.0-10.11
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic s390x
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  AlsaDevices: Error: command ['ls', '-l', '/dev/snd/'] failed with exit code 
2: ls: cannot access '/dev/snd/': No such file or directory
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: s390x
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 
'arecord'
  CRDA: Error: command ['iw', 'reg', 'get'] failed with exit code 1: nl80211 
not found.
  CurrentDmesg:

  Date: Fri Feb 23 07:43:00 2018
  HibernationDevice: RESUME=UUID=caaee9b2-6bc1-4c8e-b26c-69038c092091
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
  Lspci:

  Lsusb: Error: command ['lsusb'] failed with exit code 1:
  PciMultimedia:

  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=C
   SHELL=/bin/bash
  ProcFB: Error: [Errno 2] No such file or directory: '/proc/fb'
  ProcKernelCmdLine: root=UUID=c7d7bbcb-a039-4ead-abfe-7672dea0add4 
crashkernel=196M
  RelatedPackageVersions:
   linux-restricted-modules-4.15.0-10-generic N/A
   linux-backports-modules-4.15.0-10-generic  N/A
   linux-firmware 1.171
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/lttng-modules/+bug/1751213/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1772617] Re: fix inverted boolean flag in arch_add_memory, reverts back to original behaviour

2018-05-22 Thread Colin Ian King
Test kernel packages have been uploaded ready for testing:

http://kernel.ubuntu.com/~cking/lp1772617/

Please test these and report if this fixes the issue.
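
For anyone testing, installing the test debs would look something like
the following (the exact package file names under lp1772617/ are an
assumption):

  wget -r -np -nd -A '*.deb' http://kernel.ubuntu.com/~cking/lp1772617/
  sudo dpkg -i ./*.deb
  sudo reboot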

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772617

Title:
  fix inverted boolean flag in arch_add_memory, reverts back to original
  behaviour

Status in linux package in Ubuntu:
  In Progress

Bug description:
  == SRU Justification, ARTFUL ==

  Bug fix #1761104 incorrectly inverted the flag in the call to
  arch_add_memory; it should be true instead of false.

  == Fix ==

  Fix the partial backport from bug #1747069: remove
  can_online_high_movable and fix the incorrectly set boolean argument
  to arch_add_memory(). NVIDIA reported that this was incorrectly
  flipped in the last SRU to fix their driver support on powerpc.

  == Testing ==

  Run the ADT memory hotplug test; this fix should not regress it.
  Without the fix, the nvidia driver on powerpc will not work. With the
  fix, it loads and works.

  == Regression Potential ==

  This fixes a regression in the original fix and hence the regression
  potential is the same as the previously SRU'd bug fix for #1747069,
  namely:

  "Reverting this commit does remove some functionality, however this
  does not regress the kernel compared to previous releases and having a
  working reliable memory hotplug is the preferred option. This fix does
  touch some memory hotplug, so there is a risk that this may break this
  functionality that is not covered by the kernel regression testing."

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1772617/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1772617] [NEW] fix inverted boolean flag in arch_add_memory, reverts back to original behaviour

2018-05-22 Thread Colin Ian King
Public bug reported:

== SRU Justification, ARTFUL ==

Bug fix #1761104 incorrectly inverted the flag in the call to
arch_add_memory; it should be true instead of false.

== Fix ==

Fix the partial backport from bug #1747069: remove can_online_high_movable
and fix the incorrectly set boolean argument to arch_add_memory().
NVIDIA reported that this was incorrectly flipped in the last SRU to fix
their driver support on powerpc.

== Testing ==

Run the ADT memory hotplug test; this fix should not regress it.
Without the fix, the nvidia driver on powerpc will not work. With the
fix, it loads and works.

== Regression Potential ==

This fixes a regression in the original fix and hence the regression
potential is the same as the previously SRU'd bug fix for #1747069,
namely:

"Reverting this commit does remove some functionality, however this does
not regress the kernel compared to previous releases and having a
working reliable memory hotplug is the preferred option. This fix does
touch some memory hotplug, so there is a risk that this may break this
functionality that is not covered by the kernel regression testing."

** Affects: linux (Ubuntu)
 Importance: High
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Changed in: linux (Ubuntu)
   Status: New => In Progress

** Changed in: linux (Ubuntu)
   Importance: Undecided => High

** Changed in: linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772617

Title:
  fix inverted boolean flag in arch_add_memory, reverts back to original
  behaviour

Status in linux package in Ubuntu:
  In Progress

Bug description:
  == SRU Justification, ARTFUL ==

  Bug fix #1761104 incorrectly inverted the flag in the call to
  arch_add_memory; it should be true instead of false.

  == Fix ==

  Fix the partial backport from bug #1747069: remove
  can_online_high_movable and fix the incorrectly set boolean argument
  to arch_add_memory(). NVIDIA reported that this was incorrectly
  flipped in the last SRU to fix their driver support on powerpc.

  == Testing ==

  Run the ADT memory hotplug test; this fix should not regress it.
  Without the fix, the nvidia driver on powerpc will not work. With the
  fix, it loads and works.

  == Regression Potential ==

  This fixes a regression in the original fix and hence the regression
  potential is the same as the previously SRU'd bug fix for #1747069,
  namely:

  "Reverting this commit does remove some functionality, however this
  does not regress the kernel compared to previous releases and having a
  working reliable memory hotplug is the preferred option. This fix does
  touch some memory hotplug, so there is a risk that this may break this
  functionality that is not covered by the kernel regression testing."

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1772617/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-18 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G;
  however, today I became aware that the file, when examined from the
  ZFS host (hypervisor), is reporting an inexplicable, massive file
  size of around 12T. 12T is larger than the pool itself. Snapshots or
  other filesystem features should not be involved. I'm suspicious that
  the file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAMEPROPERTY  VALUE   
SOURCE
  rpool/DATA/fusion/store/vms/plexee  type  filesystem  
-
  rpool/DATA/fusion/store/vms/plexee  creation  Mon Mar 26  9:50 
2018   -
  rpool/DATA/fusion/store/vms/plexee  used  5.88G   
-
  rpool/DATA/fusion/store/vms/plexee  available 484G
-
  rpool/DATA/fusion/store/vms/plexee  referenced5.88G   
-
  rpool/DATA/fusion/store/vms/plexee  compressratio 1.37x   
-
  rpool/DATA/fusion/store/vms/plexee  mounted   yes 
-
  rpool/DATA/fusion/store/vms/plexee  quota none
default
  rpool/DATA/fusion/store/vms/plexee  reservation   none
default
  rpool/DATA/fusion/store/vms/plexee  recordsize128K
default
  rpool/DATA/fusion/store/vms/plexee  mountpoint
/data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs  off 
default
  rpool/DATA/fusion/store/vms/plexee  checksum  on  
default
  rpool/DATA/fusion/store/vms/plexee  compression   lz4 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime off 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices   off 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec  on  
default
  rpool/DATA/fusion/store/vms/plexee  setuidon  
default
  rpool/DATA/fusion/store/vms/plexee  readonly  off 
default
  rpool/DATA/fusion/store/vms/plexee  zoned off 
default
  rpool/DATA/fusion/store/vms/plexee  snapdir   hidden  
default
  rpool/DATA/fusion/store/vms/plexee  aclinheritrestricted  
default
  rpool/DATA/fusion/store/vms/plexee  canmount  on  
default
  rpool/DATA/fusion/store/vms/plexee  xattr on  
default
  rpool/DATA/fusion/store/vms/plexee  copies1   
default
  rpool/DATA/fusion/store/vms/plexee  version   5   
-
  rpool/DATA/fusion/store/vms/plexee  utf8only  off 
-
  rpool/DATA/fusion/store/vms/plexee  normalization none
-
  rpool/DATA/fusion/store/vms/plexee  casesensitivity   sensitive   
-
  rpool/DATA/fusion/store/vms/plexee  vscan off 
default
  rpool/DATA/fusion/store/vms/plexee  nbmandoff 
default
  rpool/DATA/fusion/store/vms/plexee  sharesmb  off 
default
  rpool/DATA/fusion/store/vms/plexee  refquota  none
default
  rpool/DATA/fusion/store/vms/plexee  refreservationnone
default
  rpool/DATA/fusion/store/vms/plexee  primarycache  all 
default
  rpool/DATA/fusion/store/vms/plexee  

[Kernel-packages] [Bug 1770849] Re: Ubuntu 18.04 kernel crashed while in degraded mode

2018-05-18 Thread Colin Ian King
** Description changed:

  == SRU Justification ==
  IBM reports a kernel crash with Bionic while in degraded mode(Degraded
  cores).
  
  IBM created a patch to resolve this bug and has submitted it upstream:
  https://lists.ozlabs.org/pipermail/linuxppc-dev/2018-May/172835.html
  
- The patch has not laned in mainline as of yet, so it is being submitted
+ The patch has not landed in mainline as of yet, so it is being submitted
  as a SAUCE patch.
- 
  
  == Fix ==
  UBUNTU: SAUCE: powerpc/perf: Fix memory allocation for core-imc based on 
num_possible_cpus()
  
  == Regression Potential ==
  Low.  Limited to powerpc.
  
  == Test Case ==
  A test kernel was built with this patch and tested by the original bug 
reporter.
  The bug reporter states the test kernel resolved the bug.
- 
- 
  
  kernel crash
  
  The system is going down NOW!
  Sent SIGTERM to all processes
  Sent SIGKILL to all processes
  [   64.713154] kexec_core: Starting new kernel
  [  156.281504630,5] OPAL: Switch to big-endian OS
  [  158.440263459,5] OPAL: Switch to little-endian OS
  [1.889211] Unable to handle kernel paging request for data at address 
0x678e549df9e2878c
  [1.889289] Faulting instruction address: 0xc038aa30
  [1.889344] Oops: Kernel access of bad area, sig: 11 [#1]
  [1.889386] LE SMP NR_CPUS=2048 NUMA PowerNV
  [1.889432] Modules linked in:
  [1.889468] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.15.0-20-generic 
#21-Ubuntu
  [1.889545] NIP:  c038aa30 LR: c038aa1c CTR: 

  [1.889608] REGS: c03fed193840 TRAP: 0380   Not tainted  
(4.15.0-20-generic)
  [1.889670] MSR:  90009033   CR: 28000884 
 XER: 2004
  [1.889742] CFAR: c0016e1c SOFTE: 1
  [1.889742] GPR00: c038a914 c03fed193ac0 c16eae00 
0001
  [1.889742] GPR04: c03fd754c7f8 002c 0001 
002b
  [1.889742] GPR08: 678e549df9e28874   
fffe
  [1.889742] GPR12: 28000888 cfa82100 c000d3b8 

  [1.889742] GPR16:    

  [1.889742] GPR20:    
a78e54a22eb64f8c
  [1.889742] GPR24: c03fd754c800 678e549df9e2878c 0300 
c02bd05c
  [1.889742] GPR28: c03fed01ea00 014080c0 c03fd754c800 
c03fed01ea00
  [1.890286] NIP [c038aa30] kmem_cache_alloc_trace+0x2d0/0x330
  [1.890340] LR [c038aa1c] kmem_cache_alloc_trace+0x2bc/0x330
  [1.890391] Call Trace:
  [1.890416] [c03fed193ac0] [c038a914] 
kmem_cache_alloc_trace+0x1b4/0x330 (unreliable)
  [1.890491] [c03fed193b30] [c02bd05c] pmu_dev_alloc+0x3c/0x170
  [1.890547] [c03fed193bb0] [c10e3210] 
perf_event_sysfs_init+0x8c/0xf0
  [1.890611] [c03fed193c40] [c000d144] 
do_one_initcall+0x64/0x1d0
  [1.890676] [c03fed193d00] [c10b4400] 
kernel_init_freeable+0x280/0x374
  [1.890740] [c03fed193dc0] [c000d3d4] kernel_init+0x24/0x160
  [1.890795] [c03fed193e30] [c000b528] 
ret_from_kernel_thread+0x5c/0xb4
  [1.890857] Instruction dump:
  [1.890909] 7c97ba78 fb210038 38a50001 7f19ba78 fb29 f8aa 4bc8c3f1 
6000
  [1.890978] 7fb8b840 419e0028 e93f0022 e91f0140 <7d59482a> 7d394a14 
7d4a4278 7fa95040
  [1.891050] ---[ end trace 41b3fe7a827f3888 ]---
  [2.900027]
  [3.900175] Kernel panic - not syncing: Attempted to kill init! 
exitcode=0x000b
  [3.900175]
  [4.71868[  175.340944355,5] OPAL: Reboot request...
  2] Rebooting in 10 seconds..
  
  This fix is needed to resolve the crash
  
  https://lists.ozlabs.org/pipermail/linuxppc-dev/2018-May/172835.html

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1770849

Title:
  Ubuntu 18.04 kernel crashed while in degraded mode

Status in The Ubuntu-power-systems project:
  In Progress
Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Bionic:
  In Progress

Bug description:
  == SRU Justification ==
  IBM reports a kernel crash with Bionic while in degraded mode
  (degraded cores).

  IBM created a patch to resolve this bug and has submitted it upstream:
  https://lists.ozlabs.org/pipermail/linuxppc-dev/2018-May/172835.html

  The patch has not landed in mainline as of yet, so it is being submitted
  as a SAUCE patch.

  == Fix ==
  UBUNTU: SAUCE: powerpc/perf: Fix memory allocation for core-imc based on 
num_possible_cpus()

  == Regression Potential ==
  Low.  Limited to powerpc.

  == Test Case ==
  A test kernel was built with this patch and tested by the original bug 
reporter.
  The bug reporter states the test kernel 

[Kernel-packages] [Bug 1772024] Re: linux 4.13.0-42.47 ADT test failure with linux 4.13.0-42.47 (nbd-smoke-test)

2018-05-18 Thread Colin Ian King
** Changed in: linux (Ubuntu)
   Importance: Undecided => High

** Changed in: linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: linux (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772024

Title:
  linux 4.13.0-42.47 ADT test failure with linux 4.13.0-42.47 (nbd-
  smoke-test)

Status in linux package in Ubuntu:
  In Progress

Bug description:
  Testing failed on:
  amd64: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/amd64/l/linux/20180518_040741_b3b54@/log.gz
  i386: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/i386/l/linux/20180518_050911_b3b54@/log.gz
  ppc64el: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/ppc64el/l/linux/20180518_040734_b3b54@/log.gz
  s390x: 
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-artful/artful/s390x/l/linux/20180518_035527_b3b54@/log.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1772024/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1771091] Re: zpool freezes importing older ZFS pool, blocks shutdown and system does not boot

2018-05-14 Thread Colin Ian King
The zfs-dkms package should be removed as the kernel includes the ZFS +
SPL drivers. You just need to install zfsutils-linux.  Please try the
following:

sudo apt-get purge zfs-dkms
sudo apt-get install --reinstall zfsutils-linux
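
Afterwards, a quick way to confirm that the in-kernel module is the
one in use (illustrative checks, not strictly required):

  dkms status | grep -i zfs || echo "no zfs dkms module left"
  modinfo -F version zfs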


** Changed in: zfs-linux (Ubuntu)
   Status: New => Triaged

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1771091

Title:
  zpool freezes importing older ZFS pool, blocks shutdown and system
  does not boot

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  After a fresh install of xubuntu 18.04 LTS 64-bit and the installation
  of zfs-dkms, I tried to do 'zpool import' on an older ZFS pool
  consisting of one partition on a separate PATA HDD.

  After issuing 'sudo zpool import ', the command freezes (as do other
  zfs commands).
  The system then fails to shut down properly, seems locked, and needs a
  hard reboot (it actually waits up to half an hour to shut down).
  After restarting, the system displays the Xubuntu splash screen and
  does not boot anymore (it actually resets itself if given half an hour
  or so).

  When getting to the rescue options by pressing the SHIFT key, going to
  a shell and remounting / read-write, I could remove the ZFS Ubuntu
  packages, and after that the system could boot.

  A useful message I got when trying to continue booting in the shell was:
  "[ 40.811792] VERIFY3(0 == remove_reference(hdr,  ((void *)0), tag)) failed 
(0 = 0)  
  [ 40.811856] PANIC at arc.c:3084:arc_buf_destroy()"

  So it points to some ZFS bug with ARC.

  Previously I was able (unlike with 17.10) to upgrade from 17.10 to
  18.04 and to import and use a newer ZFS pool.
  But this bug is about a fresh 18.04 install and an older ZFS pool.
  (zpool import says the pool can be upgraded.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1771091/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-14 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Justification [BIONIC] ==

  Upgrading from Xenial to Bionic causes broken symlinked changelog,
  COPYRIGHT and OPENSOLARIS.LICENSE.gz files because the zfs-doc package
  is missing.

  == Fix ==

  Add back the missing zfs-doc package; with this package the upgrade
  works correctly.

  == Regression Potential ==

  Minimal; this just adds back files that are missing from the default
  Bionic zfsutils-linux installation.

  ---

  % apt-cache policy libnvpair1linux
  libnvpair1linux:
    Installed: 0.7.5-1ubuntu15
    Candidate: 0.7.5-1ubuntu15
    Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx1 root root38 Apr 17 04:18 changelog.Debian.gz -> 
../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx1 root root28 Apr 17 04:18 COPYRIGHT -> 
../libnvpair1linux/COPYRIGHT
  drwxr-xr-x2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx1 root root41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> 
../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-09 Thread Colin Ian King
Installed a Xenial server and then zfsutils-linux. Upgraded to Bionic
with -proposed enabled; the symlinks are not broken with zfsutils-linux
0.7.5-1ubuntu16, so I am marking this as verified.
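
For reference, the check used for verification (the doc path is
assumed from the package name):

  ls -l /usr/share/doc/libnvpair1linux/
  zcat /usr/share/doc/libnvpair1linux/changelog.Debian.gz | head -n 5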

** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  In Progress
Status in zfs-linux source package in Bionic:
  Fix Committed

Bug description:
  == SRU Justification [BIONIC] ==

  Upgrading from Xenial to Bionic causes broken symlinked changelog,
  COPYRIGHT and OPENSOLARIS.LICENSE.gz files because the zfs-doc package
  is missing.

  == Fix ==

  Add back the missing zfs-doc package; with this package the upgrade
  works correctly.

  == Regression Potential ==

  Minimal; this just adds back files that are missing from the default
  Bionic zfsutils-linux installation.

  ---

  % apt-cache policy libnvpair1linux
  libnvpair1linux:
    Installed: 0.7.5-1ubuntu15
    Candidate: 0.7.5-1ubuntu15
    Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx1 root root38 Apr 17 04:18 changelog.Debian.gz -> 
../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx1 root root28 Apr 17 04:18 COPYRIGHT -> 
../libnvpair1linux/COPYRIGHT
  drwxr-xr-x2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx1 root root41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> 
../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-03 Thread Colin Ian King
** Description changed:

+ == SRU Justification ==
+ 
+ Upgrading from Xenial to Bionic causes broken symlinked changelog,
+ COPYRIGHT and OPENSOLARIS.LICENSE.gz files because the zfs-doc file is
+ missing.
+ 
+ == Fix ==
+ 
+ Add back the missing zfs-doc package, with this package the upgrade
+ works correctly.
+ 
+ == Regression Potential ==
+ 
+ Minimal, this adds just a bunch of missing files that are missing from
+ the default Bionic zfsutils-linux installation.
+ 
+ 
+ ---
+ 
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
-   Installed: 0.7.5-1ubuntu15
-   Candidate: 0.7.5-1ubuntu15
-   Version table:
-  *** 0.7.5-1ubuntu15 500
- 500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
- 100 /var/lib/dpkg/status
+   Installed: 0.7.5-1ubuntu15
+   Candidate: 0.7.5-1ubuntu15
+   Version table:
+  *** 0.7.5-1ubuntu15 500
+ 500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
+ 100 /var/lib/dpkg/status
  
  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04
  
  The symlinks are busted:
  
  % ls -al
  total 108
  drwxr-xr-x3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx1 root root38 Apr 17 04:18 changelog.Debian.gz -> 
../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx1 root root28 Apr 17 04:18 COPYRIGHT -> 
../libnvpair1linux/COPYRIGHT
  drwxr-xr-x2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx1 root root41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> 
../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--1 root root   684 Oct  6  2017 README.Debian
  
- % zcat changelog.Debian.gz 
+ % zcat changelog.Debian.gz
  gzip: changelog.Debian.gz: Too many levels of symbolic links
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
-  gcc-8-base 8-20180414-1ubuntu2
-  libc6 2.27-3ubuntu1
-  libgcc1 1:8-20180414-1ubuntu2
+  gcc-8-base 8-20180414-1ubuntu2
+  libc6 2.27-3ubuntu1
+  libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
-  TERM=xterm
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=
-  LANG=en_US.UTF-8
-  SHELL=/bin/bash
+  TERM=xterm
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=en_US.UTF-8
+  SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

** Description changed:

- == SRU Justification ==
+ == SRU Justification [BIONIC] ==
  
  Upgrading from Xenial to Bionic causes broken symlinked changelog,
  COPYRIGHT and OPENSOLARIS.LICENSE.gz files because the zfs-doc file is
  missing.
  
  == Fix ==
  
  Add back the missing zfs-doc package, with this package the upgrade
  works correctly.
  
  == Regression Potential ==
  
  Minimal, this adds just a bunch of missing files that are missing from
  the default Bionic zfsutils-linux installation.
- 
  
  ---
  
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
    Installed: 0.7.5-1ubuntu15
    Candidate: 0.7.5-1ubuntu15
    Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status
  
  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04
  
  The symlinks are busted:
  
  % ls -al
  total 108
  drwxr-xr-x3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx1 root root38 Apr 17 04:18 changelog.Debian.gz -> 
../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx1 root root28 Apr 17 04:18 COPYRIGHT -> 
../libnvpair1linux/COPYRIGHT
  drwxr-xr-x2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx1 root root41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> 
../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--1 root root   684 Oct  6  2017 README.Debian
  
  % zcat changelog.Debian.gz
  gzip: changelog.Debian.gz: Too many levels of symbolic links
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably 

[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-03 Thread Colin Ian King
This seems to occur only with Xenial -> Artful or Xenial -> Bionic
upgrades, and not with Artful -> Bionic.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
Installed: 0.7.5-1ubuntu15
Candidate: 0.7.5-1ubuntu15
Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx1 root root38 Apr 17 04:18 changelog.Debian.gz -> 
../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx1 root root28 Apr 17 04:18 COPYRIGHT -> 
../libnvpair1linux/COPYRIGHT
  drwxr-xr-x2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx1 root root41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> 
../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz 
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-03 Thread Colin Ian King
I meant to say "just close this bug against ZFS"...

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G;
  however, today I became aware that the file, when examined from the
  ZFS host (hypervisor), is reporting an inexplicable, massive file
  size of around 12T. 12T is larger than the pool itself. Snapshots or
  other filesystem features should not be involved. I'm suspicious that
  the file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAME                                PROPERTY         VALUE                   SOURCE
  rpool/DATA/fusion/store/vms/plexee  type             filesystem              -
  rpool/DATA/fusion/store/vms/plexee  creation         Mon Mar 26  9:50 2018   -
  rpool/DATA/fusion/store/vms/plexee  used             5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  available        484G                    -
  rpool/DATA/fusion/store/vms/plexee  referenced       5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  compressratio    1.37x                   -
  rpool/DATA/fusion/store/vms/plexee  mounted          yes                     -
  rpool/DATA/fusion/store/vms/plexee  quota            none                    default
  rpool/DATA/fusion/store/vms/plexee  reservation      none                    default
  rpool/DATA/fusion/store/vms/plexee  recordsize       128K                    default
  rpool/DATA/fusion/store/vms/plexee  mountpoint       /data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs         off                     default
  rpool/DATA/fusion/store/vms/plexee  checksum         on                      default
  rpool/DATA/fusion/store/vms/plexee  compression      lz4                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime            off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices          off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec             on                      default
  rpool/DATA/fusion/store/vms/plexee  setuid           on                      default
  rpool/DATA/fusion/store/vms/plexee  readonly         off                     default
  rpool/DATA/fusion/store/vms/plexee  zoned            off                     default
  rpool/DATA/fusion/store/vms/plexee  snapdir          hidden                  default
  rpool/DATA/fusion/store/vms/plexee  aclinherit       restricted              default
  rpool/DATA/fusion/store/vms/plexee  canmount         on                      default
  rpool/DATA/fusion/store/vms/plexee  xattr            on                      default
  rpool/DATA/fusion/store/vms/plexee  copies           1                       default
  rpool/DATA/fusion/store/vms/plexee  version          5                       -
  rpool/DATA/fusion/store/vms/plexee  utf8only         off                     -
  rpool/DATA/fusion/store/vms/plexee  normalization    none                    -
  rpool/DATA/fusion/store/vms/plexee  casesensitivity  sensitive               -
  rpool/DATA/fusion/store/vms/plexee  vscan            off                     default
  rpool/DATA/fusion/store/vms/plexee  nbmand           off                     default
  rpool/DATA/fusion/store/vms/plexee  sharesmb         off                     default
  rpool/DATA/fusion/store/vms/plexee  refquota         none                    default
  rpool/DATA/fusion/store/vms/plexee  refreservation   none                    default
  rpool/DATA/fusion/store/vms/plexee  primarycache     all                     default
  rpool/DATA/fusion/store/vms/plexee  secondarycache   all

[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-03 Thread Colin Ian King
Thanks Matt for the update. Shall we just close this bug against ZFS then?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G,
  however today I became aware that the file, when examined from the ZFS
  host (hypervisor) is reporting an inexplicable, massive file size of
  around 12T. 12T is larger than the pool itself. Snapshots or other
  filesystem features should not be involved. I'm suspicious that the
  file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAME                                PROPERTY         VALUE                   SOURCE
  rpool/DATA/fusion/store/vms/plexee  type             filesystem              -
  rpool/DATA/fusion/store/vms/plexee  creation         Mon Mar 26  9:50 2018   -
  rpool/DATA/fusion/store/vms/plexee  used             5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  available        484G                    -
  rpool/DATA/fusion/store/vms/plexee  referenced       5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  compressratio    1.37x                   -
  rpool/DATA/fusion/store/vms/plexee  mounted          yes                     -
  rpool/DATA/fusion/store/vms/plexee  quota            none                    default
  rpool/DATA/fusion/store/vms/plexee  reservation      none                    default
  rpool/DATA/fusion/store/vms/plexee  recordsize       128K                    default
  rpool/DATA/fusion/store/vms/plexee  mountpoint       /data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs         off                     default
  rpool/DATA/fusion/store/vms/plexee  checksum         on                      default
  rpool/DATA/fusion/store/vms/plexee  compression      lz4                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime            off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices          off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec             on                      default
  rpool/DATA/fusion/store/vms/plexee  setuid           on                      default
  rpool/DATA/fusion/store/vms/plexee  readonly         off                     default
  rpool/DATA/fusion/store/vms/plexee  zoned            off                     default
  rpool/DATA/fusion/store/vms/plexee  snapdir          hidden                  default
  rpool/DATA/fusion/store/vms/plexee  aclinherit       restricted              default
  rpool/DATA/fusion/store/vms/plexee  canmount         on                      default
  rpool/DATA/fusion/store/vms/plexee  xattr            on                      default
  rpool/DATA/fusion/store/vms/plexee  copies           1                       default
  rpool/DATA/fusion/store/vms/plexee  version          5                       -
  rpool/DATA/fusion/store/vms/plexee  utf8only         off                     -
  rpool/DATA/fusion/store/vms/plexee  normalization    none                    -
  rpool/DATA/fusion/store/vms/plexee  casesensitivity  sensitive               -
  rpool/DATA/fusion/store/vms/plexee  vscan            off                     default
  rpool/DATA/fusion/store/vms/plexee  nbmand           off                     default
  rpool/DATA/fusion/store/vms/plexee  sharesmb         off                     default
  rpool/DATA/fusion/store/vms/plexee  refquota         none                    default
  rpool/DATA/fusion/store/vms/plexee  refreservation   none                    default
  rpool/DATA/fusion/store/vms/plexee  primarycache     all                     default
  rpool/DATA/fusion/store/vms/plexee  secondarycache   all

[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-03 Thread Colin Ian King
Couple of notes:

1. This is not an issue with a clean install of zfsutils-linux.
2. Removing zfsutils-linux, running an autoremove, and then re-installing
fixes this (see the sketch below).
3. This occurs on an upgrade from a previous release, if I understand the
issue correctly.
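
For reference, the remove/re-install cycle from note 2 amounts to
something like this (a sketch of the steps described above; only the
utilities are removed and re-installed, pool data is not touched):

   sudo apt remove zfsutils-linux
   sudo apt autoremove
   sudo apt install zfsutils-linux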

** Changed in: zfs-linux (Ubuntu)
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
Installed: 0.7.5-1ubuntu15
Candidate: 0.7.5-1ubuntu15
Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x    3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx    1 root root    38 Apr 17 04:18 changelog.Debian.gz -> ../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--    1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx    1 root root    28 Apr 17 04:18 COPYRIGHT -> ../libnvpair1linux/COPYRIGHT
  drwxr-xr-x    2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx    1 root root    41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> ../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--    1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz 
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1764810] Re: Xenial: rfkill: fix missing return on rfkill_init

2018-05-03 Thread Colin Ian King
This is a fix to a regression from a previous SRU. I don't have the H/W
at hand to test this, so I can't verify it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764810

Title:
  Xenial: rfkill: fix missing return on rfkill_init

Status in linux package in Ubuntu:
  Invalid
Status in linux source package in Xenial:
  Fix Committed

Bug description:
  == SRU Justification ==

  A previous backport to bug LP: #1745130 overlooked adding in
  an error return that was introduced by commit 6124c53edeea. Fix
  this by adding in the missing return.

  Detected by CoverityScan, CID#1467925 ("Missing return statement)

  Fixes: b9a5fffbaee6 ("rfkill: Add rfkill-any LED trigger")

  == Fix ==

  Add missing return error code

  == Test ==

  N/A

  == Regression Potential ==

  Minimal, this fixes the broken backport, so the change is small and
  restores the original error handling behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1764810/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-03 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
Installed: 0.7.5-1ubuntu15
Candidate: 0.7.5-1ubuntu15
Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x    3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx    1 root root    38 Apr 17 04:18 changelog.Debian.gz -> ../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--    1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx    1 root root    28 Apr 17 04:18 COPYRIGHT -> ../libnvpair1linux/COPYRIGHT
  drwxr-xr-x    2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx    1 root root    41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> ../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--    1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz 
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1768777] Re: libnvpair1linux doc contents have busted symlinks

2018-05-03 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: zfs-linux (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1768777

Title:
  libnvpair1linux doc contents have busted symlinks

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  % apt-cache policy libnvpair1linux
  libnvpair1linux:
Installed: 0.7.5-1ubuntu15
Candidate: 0.7.5-1ubuntu15
Version table:
   *** 0.7.5-1ubuntu15 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  % lsb_release -rd
  Description:Ubuntu 18.04 LTS
  Release:18.04

  The symlinks are busted:

  % ls -al
  total 108
  drwxr-xr-x    3 root root  4096 Apr 23 08:33 ./
  drwxr-xr-x 1120 root root 36864 May  2 07:36 ../
  lrwxrwxrwx    1 root root    38 Apr 17 04:18 changelog.Debian.gz -> ../libnvpair1linux/changelog.Debian.gz
  -rw-r--r--    1 root root 53854 Oct  6  2017 copyright
  lrwxrwxrwx    1 root root    28 Apr 17 04:18 COPYRIGHT -> ../libnvpair1linux/COPYRIGHT
  drwxr-xr-x    2 root root  4096 Apr 23 08:33 examples/
  lrwxrwxrwx    1 root root    41 Apr 17 04:18 OPENSOLARIS.LICENSE.gz -> ../libnvpair1linux/OPENSOLARIS.LICENSE.gz
  -rw-r--r--    1 root root   684 Oct  6  2017 README.Debian

  % zcat changelog.Debian.gz 
  gzip: changelog.Debian.gz: Too many levels of symbolic links

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: libnvpair1linux 0.7.5-1ubuntu15
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7
  Architecture: amd64
  Date: Thu May  3 03:47:57 2018
  Dependencies:
   gcc-8-base 8-20180414-1ubuntu2
   libc6 2.27-3ubuntu1
   libgcc1 1:8-20180414-1ubuntu2
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1768777/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1725859] Re: zfs frequently hangs (up to 30 secs) during sequential read

2018-05-03 Thread Colin Ian King
And if anyone is wondering why this is not the default, the answer is
here: https://github.com/zfsonlinux/zfs/issues/443

** Bug watch added: Github Issue Tracker for ZFS #443
   https://github.com/zfsonlinux/zfs/issues/443

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725859

Title:
  zfs frequently hangs (up to 30 secs) during sequential read

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Updated to artful (17.10) yesterday. Trying to read (play video) from 
mirrored ZFS disks from an external USB 3 enclosure. Zpool is defined as:
   
  root@hetty:/home/crlb# zpool status
pool: storage
   state: ONLINE
scan: resilvered 20K in 0h0m with 0 errors on Fri Oct 20 18:38:49 2017
  config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdc ONLINE   0 0 0

  errors: No known data errors
  root@hetty:/home/crlb#

  Found that I could re-create the problem with:

   rsync -av --progress  

  Also found that:

dd if=/dev/sdX of=/dev/null status=progress bs=1024 count=1000

  Where "X" is either "b" or "c" does not hang.

  Installed:

  root@hetty:/home/crlb# apt list --installed | grep -i zfs
  libzfs2linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfs-zed/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfsutils-linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed]
  root@hetty:/home/crlb#

  Help please.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725859/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-03 Thread Colin Ian King
You can get an idea of how "full" a file is using zdb, e.g.:

sudo zdb  -O pool-ssd/virt ubuntu17.10-amd64-desktop.qcow2

Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
1147373   128K   128K  14.0G 512  20.0G   84.49  ZFS plain file

in the above example, pool-ssd/virt was the pool + zfs file system and
ubuntu17.10-amd64-desktop.qcow2 was the name of the VM image.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G,
  however today I became aware that the file, when examined from the ZFS
  host (hypervisor) is reporting an inexplicable, massive file size of
  around 12T. 12T is larger than the pool itself. Snapshots or other
  filesystem features should not be involved. I'm suspicious that the
  file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAME                                PROPERTY         VALUE                   SOURCE
  rpool/DATA/fusion/store/vms/plexee  type             filesystem              -
  rpool/DATA/fusion/store/vms/plexee  creation         Mon Mar 26  9:50 2018   -
  rpool/DATA/fusion/store/vms/plexee  used             5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  available        484G                    -
  rpool/DATA/fusion/store/vms/plexee  referenced       5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  compressratio    1.37x                   -
  rpool/DATA/fusion/store/vms/plexee  mounted          yes                     -
  rpool/DATA/fusion/store/vms/plexee  quota            none                    default
  rpool/DATA/fusion/store/vms/plexee  reservation      none                    default
  rpool/DATA/fusion/store/vms/plexee  recordsize       128K                    default
  rpool/DATA/fusion/store/vms/plexee  mountpoint       /data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs         off                     default
  rpool/DATA/fusion/store/vms/plexee  checksum         on                      default
  rpool/DATA/fusion/store/vms/plexee  compression      lz4                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime            off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices          off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec             on                      default
  rpool/DATA/fusion/store/vms/plexee  setuid           on                      default
  rpool/DATA/fusion/store/vms/plexee  readonly         off                     default
  rpool/DATA/fusion/store/vms/plexee  zoned            off                     default
  rpool/DATA/fusion/store/vms/plexee  snapdir          hidden                  default
  rpool/DATA/fusion/store/vms/plexee  aclinherit       restricted              default
  rpool/DATA/fusion/store/vms/plexee  canmount         on                      default
  rpool/DATA/fusion/store/vms/plexee  xattr            on                      default
  rpool/DATA/fusion/store/vms/plexee  copies           1                       default
  rpool/DATA/fusion/store/vms/plexee  version          5                       -
  rpool/DATA/fusion/store/vms/plexee  utf8only         off                     -
  rpool/DATA/fusion/store/vms/plexee  normalization    none                    -
  rpool/DATA/fusion/store/vms/plexee  casesensitivity  sensitive               -
  rpool/DATA/fusion/store/vms/plexee  vscan            off                     default
  rpool/DATA/fusion/store/vms/plexee  nbmand           off                     default
  rpool/DATA/fusion/store/vms/plexee  sharesmb         off                     default

[Kernel-packages] [Bug 1725859] Re: zfs frequently hangs (up to 30 secs) during sequential read

2018-05-03 Thread Colin Ian King
Ah, that is a good solution.  By default, ZFS on Linux will store xattrs
in a hidden folder, so there are multiple seeks per xattr and it is
poorly cached; that impacts performance, especially on slower block
devices.  Setting xattr=sa will store the xattrs in the inodes and will
really help (see the sketch below).

Let's mark this as a suitable workaround and hence a fix.
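
As an illustration of that workaround (a sketch - 'storage' is the pool
from this report, substitute your own dataset; note it only applies to
xattrs written after the change):

   # store xattrs as system attributes in the inode rather than in the
   # hidden xattr directory
   sudo zfs set xattr=sa storage
   # verify the property took effect
   zfs get xattr storage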

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725859

Title:
  zfs frequently hangs (up to 30 secs) during sequential read

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  Updated to artful (17.10) yesterday. Trying to read (play video) from 
mirrored ZFS disks from an external USB 3 enclosure. Zpool is defined as:
   
  root@hetty:/home/crlb# zpool status
pool: storage
   state: ONLINE
scan: resilvered 20K in 0h0m with 0 errors on Fri Oct 20 18:38:49 2017
  config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdc ONLINE   0 0 0

  errors: No known data errors
  root@hetty:/home/crlb#

  Found that I could re-create the problem with:

   rsync -av --progress  

  Also found that:

dd if=/dev/sdX of=/dev/null status=progress bs=1024 count=1000

  Where "X" is either "b" or "c" does not hang.

  Installed:

  root@hetty:/home/crlb# apt list --installed | grep -i zfs
  libzfs2linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfs-zed/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfsutils-linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed]
  root@hetty:/home/crlb#

  Help please.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725859/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1725859] Re: zfs frequently hangs (up to 30 secs) during sequential read

2018-05-02 Thread Colin Ian King
I've tried to simulate this with very slow media and can't yet reproduce
this issue.  Can you play the DVD image and also run the following
command in another terminal:

iostat 1 300 > iostat.log

This will capture 5 minutes of activity; please then attach the log to
the bug report.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725859

Title:
  zfs frequently hangs (up to 30 secs) during sequential read

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  Updated to artful (17.10) yesterday. Trying to read (play video) from 
mirrored ZFS disks from an external USB 3 enclosure. Zpool is defined as:
   
  root@hetty:/home/crlb# zpool status
pool: storage
   state: ONLINE
scan: resilvered 20K in 0h0m with 0 errors on Fri Oct 20 18:38:49 2017
  config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdc ONLINE   0 0 0

  errors: No known data errors
  root@hetty:/home/crlb#

  Found that I could re-create the problem with:

   rsync -av --progress  

  Also found that:

dd if=/dev/sdX of=/dev/null status=progress bs=1024 count=1000

  Where "X" is either "b" or "c" does not hang.

  Installed:

  root@hetty:/home/crlb# apt list --installed | grep -i zfs
  libzfs2linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfs-zed/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfsutils-linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed]
  root@hetty:/home/crlb#

  Help please.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725859/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-05-02 Thread Colin Ian King
@Matt, did the above help?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G,
  however today I became aware that the file, when examined from the ZFS
  host (hypervisor) is reporting an inexplicable, massive file size of
  around 12T. 12T is larger than the pool itself. Snapshots or other
  filesystem features should not be involved. I'm suspicious that the
  file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAME                                PROPERTY         VALUE                   SOURCE
  rpool/DATA/fusion/store/vms/plexee  type             filesystem              -
  rpool/DATA/fusion/store/vms/plexee  creation         Mon Mar 26  9:50 2018   -
  rpool/DATA/fusion/store/vms/plexee  used             5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  available        484G                    -
  rpool/DATA/fusion/store/vms/plexee  referenced       5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  compressratio    1.37x                   -
  rpool/DATA/fusion/store/vms/plexee  mounted          yes                     -
  rpool/DATA/fusion/store/vms/plexee  quota            none                    default
  rpool/DATA/fusion/store/vms/plexee  reservation      none                    default
  rpool/DATA/fusion/store/vms/plexee  recordsize       128K                    default
  rpool/DATA/fusion/store/vms/plexee  mountpoint       /data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs         off                     default
  rpool/DATA/fusion/store/vms/plexee  checksum         on                      default
  rpool/DATA/fusion/store/vms/plexee  compression      lz4                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime            off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices          off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec             on                      default
  rpool/DATA/fusion/store/vms/plexee  setuid           on                      default
  rpool/DATA/fusion/store/vms/plexee  readonly         off                     default
  rpool/DATA/fusion/store/vms/plexee  zoned            off                     default
  rpool/DATA/fusion/store/vms/plexee  snapdir          hidden                  default
  rpool/DATA/fusion/store/vms/plexee  aclinherit       restricted              default
  rpool/DATA/fusion/store/vms/plexee  canmount         on                      default
  rpool/DATA/fusion/store/vms/plexee  xattr            on                      default
  rpool/DATA/fusion/store/vms/plexee  copies           1                       default
  rpool/DATA/fusion/store/vms/plexee  version          5                       -
  rpool/DATA/fusion/store/vms/plexee  utf8only         off                     -
  rpool/DATA/fusion/store/vms/plexee  normalization    none                    -
  rpool/DATA/fusion/store/vms/plexee  casesensitivity  sensitive               -
  rpool/DATA/fusion/store/vms/plexee  vscan            off                     default
  rpool/DATA/fusion/store/vms/plexee  nbmand           off                     default
  rpool/DATA/fusion/store/vms/plexee  sharesmb         off                     default
  rpool/DATA/fusion/store/vms/plexee  refquota         none                    default
  rpool/DATA/fusion/store/vms/plexee  refreservation   none                    default
  rpool/DATA/fusion/store/vms/plexee  primarycache     all                     default
  rpool/DATA/fusion/store/vms/plexee  secondarycache   all

[Kernel-packages] [Bug 1688890] Re: initramfs-zfs should support misc /dev dirs

2018-04-30 Thread Colin Ian King
** Also affects: zfs-linux (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: zfs-linux (Ubuntu Bionic)
   Importance: Undecided
   Status: Confirmed

** Changed in: zfs-linux (Ubuntu Bionic)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1688890

Title:
  initramfs-zfs should support misc /dev dirs

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Xenial:
  New
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  Right now 'zfs-initramfs', i.e. /usr/share/initramfs-tools/scripts/zfs
  does not support any directory other than /dev for "zpool import ...".
  Therefore even if a pool gets created from a directory other than /dev,
  say /dev/disk/by-id or /dev/chassis/SYS, on the next reboot /dev will
  be used, and thus zpool status will show /dev/sd* etc. on successful
  import. Beside that, the user no longer sees the original names used in
  "zpool create ..."; the unstable names like "/dev/sd*" are shown, which
  is explicitly NOT recommended.

  The following patch introduces the "pseudo" kernel param named "zdirs"
  - a comma separated list of dev dirs to scan on import - which
  /usr/share/initramfs-tools/scripts/zfs then honors (example below).
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1688890/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766964] Re: zpool status -v aborts with SIGABRT with and without arguments

2018-04-30 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766964

Title:
  zpool status -v aborts with SIGABRT with and without arguments

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I am currently running Ubuntu 16.04 with ZFS 0.6.5.

  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.4 LTS
  Release:16.04
  Codename:   xenial

  christian@kepler ~ $ apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu20
Candidate: 0.6.5.6-0ubuntu20
Version table:
   *** 0.6.5.6-0ubuntu20 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial/universe amd64 
Packages

  
  Here is the package listing:

  (standard input):ii  libzfs2linux    0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem library for Linux
  (standard input):ii  zfs-dkms        0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem kernel modules for Linux
  (standard input):ii  zfs-doc         0.6.5.6-0ubuntu20  all    Native OpenZFS filesystem documentation and examples.
  (standard input):ii  zfs-zed         0.6.5.6-0ubuntu20  amd64  OpenZFS Event Daemon (zed)
  (standard input):ii  zfsutils-linux  0.6.5.6-0ubuntu20  amd64  Native OpenZFS management utilities for Linux

  
  I try to run status on my zpool using the command `zpool status zkepler` and 
get this result:

pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  802G scanned out of 2.28T at 217M/s, 2h0m to go
  0 repaired, 34.32% done
  Aborted

  I would expect an extended report of status but it just aborts with
  SIGABRT when run through gdb.

  (gdb) run status -v
  Starting program: /sbin/zpool status -v
  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  825G scanned out of 2.28T at 211M/s, 2h2m to go
  0 repaired, 35.32% done

  Program received signal SIGABRT, Aborted.
  0x768d6428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
  54  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

  I have upgraded this machine from 14.04 LTS within the last few months
  but I purged all the ZFS packages and the ZFS PPA and reinstalled all
  the packages. My kernel version is 4.4.0-121-generic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1766964/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-04-30 Thread Colin Ian King
Hrm.  I'm inclined to have the container environment inform zfs that
the timeout should be zero rather than adding more logic to zfs that
tries to work out which environment it is in and pick the timeout
accordingly. The container environment just needs to set the one
environment variable that ZFS already checks, which should be trivial to
add. I'm concerned that container detection added to ZFS may need to
change over time and become an ongoing support effort. It just seems
more straightforward for the container environment to inform ZFS not to
wait for 10 seconds (the mechanism is already provided for in ZFS with
the environment variable) than for ZFS to intuit whether it is in a
special lxc container.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu19
Candidate: 0.6.5.6-0ubuntu19
Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1725339] Re: ZFS fails to import pools on reboot due to udev settle failed

2018-04-30 Thread Colin Ian King
It does seem that the failing udev settle dependency is the cause of the
zfs import failing. Can you add to the bug report the output from:

systemd --test --system --unit=multi-user.target

and also the output from:

sudo journalctl


** Changed in: zfs-linux (Ubuntu)
   Status: New => Triaged

** Changed in: zfs-linux (Ubuntu)
   Status: Triaged => Incomplete

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725339

Title:
  ZFS fails to import pools on reboot due to udev settle failed

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  ZFS fails to import pools on reboot because udev settle failed. I have
  a buggy usb controller (or port) which seems to cause udev settle to
  fail, but nothing is plugged into usb, so it is not needed. Should it
  prevent zfs from mounting from the hard drive on boot?

  Oct 20 18:08:52 mite systemd[1]: zfs-import-cache.service: Job zfs-
  import-cache.service/start failed with result 'dependency'.

  Oct 20 18:07:52 mite systemd-udevd[379]: seq 2916 
'/devices/pci:00/:00:14.0/usb1' is taking a long time   
   
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Main process 
exited, code=exited, status=1/FAILURE   
  
  Oct 20 18:08:52 mite systemd[1]: Failed to start udev Wait for Complete 
Device Initialization.  
 
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Unit entered 
failed state.   
  
  Oct 20 18:08:52 mite systemd[1]: systemd-udev-settle.service: Failed with 
result 'exit-code'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725339/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1724165] Re: receive_freeobjects() skips freeing some objects

2018-04-30 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Status: In Progress => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1724165

Title:
  receive_freeobjects() skips freeing some objects

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  When receiving a FREEOBJECTS record, receive_freeobjects()
  incorrectly skips a freed object in some cases. Specifically, this
  happens when the first object in the range to be freed doesn't exist,
  but the second object does. This leaves an object allocated on disk
  on the receiving side which is unallocated on the sending side, which
  may cause receiving subsequent incremental streams to fail.

  The bug was caused by an incorrect increment of the object index
  variable when current object being freed doesn't exist.  The
  increment is incorrect because incrementing the object index is
  handled by a call to dmu_object_next() in the increment portion of
  the for loop statement.

  Affects ZFS send.
  Upstream fix
  
https://github.com/zfsonlinux/zfs/pull/6695/commits/d79b4b3e4ad722bf457efe9401d7f267a8dfcc6c

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1724165/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1766964] Re: zpool status -v aborts with SIGABRT with and without arguments

2018-04-30 Thread Colin Ian King
If you can catch this with gdb again, please run the gdb command "where"
so we can capture a backtrace; that would be really useful (example
session below).
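
For reference, a minimal gdb session to capture that backtrace would
look roughly like this (a sketch; debug symbols, if available, will make
the trace more readable):

   $ sudo gdb /sbin/zpool
   (gdb) run status -v
   ... wait for the SIGABRT ...
   (gdb) where
   (gdb) quit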


** Changed in: zfs-linux (Ubuntu)
   Status: New => Triaged

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766964

Title:
  zpool status -v aborts with SIGABRT with and without arguments

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I am currently running Ubuntu 16.04 with ZFS 0.6.5.

  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.4 LTS
  Release:16.04
  Codename:   xenial

  christian@kepler ~ $ apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu20
Candidate: 0.6.5.6-0ubuntu20
Version table:
   *** 0.6.5.6-0ubuntu20 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://mirror.math.ucdavis.edu/ubuntu xenial/universe amd64 
Packages

  
  Here is the package listing:

  (standard input):ii  libzfs2linux    0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem library for Linux
  (standard input):ii  zfs-dkms        0.6.5.6-0ubuntu20  amd64  Native OpenZFS filesystem kernel modules for Linux
  (standard input):ii  zfs-doc         0.6.5.6-0ubuntu20  all    Native OpenZFS filesystem documentation and examples.
  (standard input):ii  zfs-zed         0.6.5.6-0ubuntu20  amd64  OpenZFS Event Daemon (zed)
  (standard input):ii  zfsutils-linux  0.6.5.6-0ubuntu20  amd64  Native OpenZFS management utilities for Linux

  
  I try to run status on my zpool using the command `zpool status zkepler` and 
get this result:

pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  802G scanned out of 2.28T at 217M/s, 2h0m to go
  0 repaired, 34.32% done
  Aborted

  I would expect an extended report of status but it just aborts with
  SIGABRT when run through gdb.

  (gdb) run status -v
  Starting program: /sbin/zpool status -v
  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
pool: zkepler
   state: ONLINE
scan: scrub in progress since Wed Apr 25 13:10:24 2018
  825G scanned out of 2.28T at 211M/s, 2h2m to go
  0 repaired, 35.32% done

  Program received signal SIGABRT, Aborted.
  0x768d6428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
  54  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

  I have upgraded this machine from 14.04 LTS within the last few months
  but I purged all the ZFS packages and the ZFS PPA and reinstalled all
  the packages. My kernel version is 4.4.0-121-generic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1766964/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-04-30 Thread Colin Ian King
The code actually polls /dev/zfs until it appears. The issue here is
that it does not appear after 10 seconds, and then it gives up.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu19
Candidate: 0.6.5.6-0ubuntu19
Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-04-30 Thread Colin Ian King
Accessing /dev/zfs depends on loading the zfs module, and it may take
some time for udev to do its work and create /dev/zfs.  However, ZFS
does allow this to be tweaked with the ZFS_MODULE_TIMEOUT environment
variable:

From lib/libzfs/libzfs_util.c, libzfs_load_module:

/*
 * Device creation by udev is asynchronous and waiting may be
 * required.  Busy wait for 10ms and then fall back to polling every
 * 10ms for the allowed timeout (default 10s, max 10m).  This is
 * done to optimize for the common case where the device is
 * immediately available and to avoid penalizing the possible
 * case where udev is slow or unable to create the device.
 */

timeout_str = getenv("ZFS_MODULE_TIMEOUT");
...

so export ZFS_MODULE_TIMEOUT=0 may be a useful workaround for the
moment.  I wonder if that could be set automagically to zero in the lxc
environment rather than hacking around zfs to detect if it is in a lxc
container. The former does seem the best way forward (sketch below).
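
As a sketch of that workaround (container name hypothetical; LXD's
environment.* config keys can also set it persistently):

   # inside the container, for the current shell:
   export ZFS_MODULE_TIMEOUT=0
   zpool status

   # or from the host, for an LXD container named c1:
   lxc config set c1 environment.ZFS_MODULE_TIMEOUT 0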


** Also affects: lxd (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in lxd package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu19
Candidate: 0.6.5.6-0ubuntu19
Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside a lxd container with zfs storage, zfs list or zpool
  status should either return promptly or report what's going on.

  4. When inside a lxd container with zfs storage, zfs list or zpool
  status appears to hang, no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds but do not provide a command line switch to disable or
  make it more verbose.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1767350] Re: zfs-dkms 0.6.5.6-0ubuntu20: zfs kernel module failed to build

2018-04-27 Thread Colin Ian King
*** This bug is a duplicate of bug 1742698 ***
https://bugs.launchpad.net/bugs/1742698

** This bug has been marked a duplicate of bug 1742698
   zfs-dkms 0.6.5.6-0ubuntu18: zfs kernel module failed to build

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1767350

Title:
  zfs-dkms 0.6.5.6-0ubuntu20: zfs kernel module failed to build

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Ubuntu version 16.04 - the bug appears to be back for a newer
  zfs/ubuntu update (not rolling to 18.04 - just installing updates to
  16.04)

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: zfs-dkms 0.6.5.6-0ubuntu20
  ProcVersionSignature: Ubuntu 4.4.0-119.143-generic 4.4.114
  Uname: Linux 4.4.0-119-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.20.1-0ubuntu2.16
  Architecture: amd64
  DKMSBuildLog:
   DKMS make.log for zfs-0.6.5.6 for kernel 4.4.0-122-generic (x86_64)
   Fri Apr 27 07:08:03 CDT 2018
   make: *** No targets specified and no makefile found.  Stop.
  DKMSKernelVersion: 4.4.0-122-generic
  Date: Fri Apr 27 07:08:07 2018
  InstallationDate: Installed on 2014-10-19 (1286 days ago)
  InstallationMedia: Xubuntu 14.04.1 LTS "Trusty Tahr" - Release amd64 
(20140723)
  PackageVersion: 0.6.5.6-0ubuntu20
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.4
   apt  1.2.26
  SourcePackage: zfs-linux
  Title: zfs-dkms 0.6.5.6-0ubuntu20: zfs kernel module failed to build
  UpgradeStatus: Upgraded to xenial on 2016-04-22 (735 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1767350/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1722261] Re: deadlock in mount umount and sync

2018-04-24 Thread Colin Ian King
OK, let's keep this bug open for a few more months, and if you see the
problem again, please update the bug report.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1722261

Title:
  deadlock in mount umount and sync

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I use zfs version 0.6.5.6 on Ubuntu 16.04.2 LTS. I have many zombie
  processes on auto-mount of snapshots and on sync! 903 processes are in
  deadlock; I can't mount a new file system or snapshot. Partial output
  of ps alx | grep 'call_r D' is below.

  What is the cause? What can I do?

  0 0   2371  1  20   0   6016   752 call_r D?  0:00 
/bin/sync
  0 0  15290  1  20   0   6016   676 call_r D?  0:00 
/bin/sync
  0 0  18919  1  20   0   6016   708 call_r D?  0:00 
/bin/sync
  0 0  27076  1  20   0   6016   808 call_r D?  0:00 
/bin/sync
  4 0  31976  1  20   0  22084  1344 call_r D?  0:00 umount 
-t zfs -n /samba/shares/Aat/.zfs/snapshot/2017-10-04_09.00.05--5d

  error in kern.log:
  Oct  9 13:20:28 zfs-cis kernel: [5368563.592834] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.597868] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.601730] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.187001] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.13] WARNING: Unable to automount 
/samba/shares/Cardiologia2/.zfs/snapshot/2017-10-03_12.00.03--5d/pool_z2_samba/shares/Cardiologia2@2017-10-03_12.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.15] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.189005] WARNING: Unable to automount 
/samba/shares/Aat/.zfs/snapshot/2017-10-03_20.00.04--5d/pool_z2_samba/shares/Aat@2017-10-03_20.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.190105] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.192847] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.193617] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.198096] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256


  
  in syslog :

  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dLaboratorio_5fTrapianti_2emount
 interface=org.freedesktop.DBus.Properties member=GetAll cookie=202 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dProgettoTrapianti_2emount
 interface=org.freedesktop.DBus.Properties member=GetAll cookie=260 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dTrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=291 
reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 
reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a 

[Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem

2018-04-24 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => High

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G,
  however today I became aware that the file, when examined from the ZFS
  host (hypervisor), is reporting an inexplicable, massive file size of
  around 12T. 12T is larger than the pool itself. Snapshots or other
  filesystem features should not be involved. I'm suspicious that the
  file(system?) has been corrupted.

  
  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!
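
  A quick way to check whether that 12T figure is merely a (possibly
  corrupt) apparent size rather than real allocation is to compare
  st_size with st_blocks from stat(2); a minimal sketch, not part of
  the original report:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }
        /* st_blocks counts 512-byte units, independent of fs block size */
        printf("apparent: %lld bytes, allocated: %lld bytes\n",
               (long long)st.st_size, (long long)st.st_blocks * 512);
        return 0;
    }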

  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root 
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false

  
  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  
/data/store/vms/plexee

  
  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAMEPROPERTY  VALUE   
SOURCE
  rpool/DATA/fusion/store/vms/plexee  type  filesystem  
-
  rpool/DATA/fusion/store/vms/plexee  creation  Mon Mar 26  9:50 
2018   -
  rpool/DATA/fusion/store/vms/plexee  used  5.88G   
-
  rpool/DATA/fusion/store/vms/plexee  available 484G
-
  rpool/DATA/fusion/store/vms/plexee  referenced5.88G   
-
  rpool/DATA/fusion/store/vms/plexee  compressratio 1.37x   
-
  rpool/DATA/fusion/store/vms/plexee  mounted   yes 
-
  rpool/DATA/fusion/store/vms/plexee  quota none
default
  rpool/DATA/fusion/store/vms/plexee  reservation   none
default
  rpool/DATA/fusion/store/vms/plexee  recordsize128K
default
  rpool/DATA/fusion/store/vms/plexee  mountpoint
/data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs  off 
default
  rpool/DATA/fusion/store/vms/plexee  checksum  on  
default
  rpool/DATA/fusion/store/vms/plexee  compression   lz4 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime off 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices   off 
inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec  on  
default
  rpool/DATA/fusion/store/vms/plexee  setuidon  
default
  rpool/DATA/fusion/store/vms/plexee  readonly  off 
default
  rpool/DATA/fusion/store/vms/plexee  zoned off 
default
  rpool/DATA/fusion/store/vms/plexee  snapdir   hidden  
default
  rpool/DATA/fusion/store/vms/plexee  aclinheritrestricted  
default
  rpool/DATA/fusion/store/vms/plexee  canmount  on  
default
  rpool/DATA/fusion/store/vms/plexee  xattr on  
default
  rpool/DATA/fusion/store/vms/plexee  copies1   
default
  rpool/DATA/fusion/store/vms/plexee  version   5   
-
  rpool/DATA/fusion/store/vms/plexee  utf8only  off 
-
  rpool/DATA/fusion/store/vms/plexee  normalization none
-
  rpool/DATA/fusion/store/vms/plexee  casesensitivity   sensitive   
-
  rpool/DATA/fusion/store/vms/plexee  vscan off 
default
  rpool/DATA/fusion/store/vms/plexee  nbmandoff 
default
  rpool/DATA/fusion/store/vms/plexee  sharesmb  off 
default
  rpool/DATA/fusion/store/vms/plexee  refquota  

[Kernel-packages] [Bug 1749715] Re: general protection fault in zfs module

2018-04-24 Thread Colin Ian King
After debugging the object code, I can see that the error occurs because
of corruption in the internal AVL tree; the bug occurs during an
insertion into the AVL tree in avl_insert(), namely when nullifying
node->avl_child[0]:

node->avl_child[0] = NULL;
node->avl_child[1] = NULL;
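
For illustration, here is a minimal user-space sketch of that linking
step (not the actual zavl source; names abbreviated). If "node" arrives
as a corrupted pointer, the first store below is exactly where the
general protection fault fires:

#include <stddef.h>

struct avl_node {
    struct avl_node *avl_child[2];   /* left and right children */
    struct avl_node *avl_parent;
};

static void avl_link_node(struct avl_node *node, struct avl_node *parent,
                          int dir)
{
    node->avl_child[0] = NULL;       /* the store that faulted here */
    node->avl_child[1] = NULL;
    node->avl_parent = parent;
    if (parent != NULL)
        parent->avl_child[dir] = node;
}

int main(void)
{
    struct avl_node root = { { NULL, NULL }, NULL };
    struct avl_node leaf;

    avl_link_node(&leaf, &root, 0);
    return root.avl_child[0] == &leaf ? 0 : 1;
}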

From what I gather, it looks like there is some internal memory
corruption probably causing this issue. Without a full kernel core I
can't track this back much further, so my current hunch is that this may
not be a software error after all.  I've had an extensive hunt around
and cannot find similar breakage patterns, so I'm fairly confident this
may be a one-off memory issue.  I'm going to close this as Won't Fix,
but if it happens again, please feel free to re-open the bug.


** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1749715

Title:
   general protection fault in zfs module

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  Got this call trace during a rsync backup of a machine using ZFS:

  general protection fault:  [#1] SMP 
  Modules linked in: ip6table_filter ip6_tables xt_tcpudp xt_conntrack 
iptable_filter ip_tables x_tables zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) 
spl(O) zavl(PO) input_leds sch_fq_codel nf_conntrack_ipv6 nf_defrag_ipv6 
nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack virtio_scsi
  CPU: 0 PID: 4238 Comm: rsync Tainted: P   O4.4.0-112-generic 
#135-Ubuntu
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Ubuntu-1.8.2-1ubuntu1 04/01/2014
  task: 880078a4f2c0 ti: 880047c28000 task.ti: 880047c28000
  RIP: 0010:[]  [] avl_insert+0x33/0xe0 
[zavl]
  RSP: 0018:880047c2bc20  EFLAGS: 00010246
  RAX: 0001 RBX: 880043b46200 RCX: 0001
  RDX:  RSI: 001f880043b46208 RDI: 88005aa0c9a8
  RBP: 880047c2bc20 R08:  R09: 88007d001700
  R10: 880043b46200 R11: 0246 R12: 88005aa0c9a8
  R13: 880043b46200 R14:  R15: 88005aa0c9a8
  FS:  7f04124ec700() GS:88007fc0() knlGS:
  CS:  0010 DS:  ES:  CR0: 80050033
  CR2: 7ffd25c1cb8c CR3: 47cb CR4: 0670
  Stack:
   880047c2bc68 c0313721  0028
   880043b46200 88005aa0c8c8 6b34 
   88005aa0c9a8 880047c2bcc8 c04609ee 
  Call Trace:
   [] avl_add+0x71/0xa0 [zavl]
   [] zfs_range_lock+0x3ee/0x5e0 [zfs]
   [] ? rrw_enter_read_impl+0xbc/0x160 [zfs]
   [] zfs_read+0xd0/0x3c0 [zfs]
   [] ? profile_path_perm.part.7+0x7d/0xa0
   [] zpl_read_common_iovec+0x80/0xd0 [zfs]
   [] zpl_iter_read+0xa0/0xd0 [zfs]
   [] new_sync_read+0x94/0xd0
   [] __vfs_read+0x26/0x40
   [] vfs_read+0x86/0x130
   [] SyS_read+0x55/0xc0
   [] ? entry_SYSCALL_64_after_swapgs+0xd1/0x18c
   [] entry_SYSCALL_64_fastpath+0x2b/0xe7
  Code: 83 e2 01 48 03 77 10 49 83 e0 fe 8d 04 95 00 00 00 00 55 4c 89 c1 48 83 
47 18 01 83 e0 04 48 83 c9 01 48 89 e5 48 09 c8 4d 85 c0 <48> c7 06 00 00 00 00 
48 c7 46 08 00 00 00 00 48 89 46 10 0f 84 
  RIP  [] avl_insert+0x33/0xe0 [zavl]
   RSP 
  ---[ end trace c4ba4478b6002697 ]---

  
  This is the first time it happens but I'll report any future occurrence in 
here.

  Additional info:

  $ lsb_release -rd
  Description:  Ubuntu 16.04.3 LTS
  Release:  16.04

  $ apt-cache policy linux-image-4.4.0-112-generic zfsutils-linux
  linux-image-4.4.0-112-generic:
Installed: 4.4.0-112.135
Candidate: 4.4.0-112.135
Version table:
   *** 4.4.0-112.135 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
Packages
  100 /var/lib/dpkg/status
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu18
Candidate: 0.6.5.6-0ubuntu18
Version table:
   *** 0.6.5.6-0ubuntu18 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-112-generic 4.4.0-112.135
  ProcVersionSignature: Ubuntu 4.4.0-112.135-generic 4.4.98
  Uname: Linux 4.4.0-112-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Feb 14 16:19 seq
   crw-rw 1 root audio 116, 33 Feb 14 16:19 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: [Errno 

[Kernel-packages] [Bug 1749715] Re: general protection fault in zfs module

2018-04-24 Thread Colin Ian King
** No longer affects: zfs

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1749715

Title:
   general protection fault in zfs module

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Got this call trace during a rsync backup of a machine using ZFS:

  general protection fault:  [#1] SMP 
  Modules linked in: ip6table_filter ip6_tables xt_tcpudp xt_conntrack 
iptable_filter ip_tables x_tables zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) 
spl(O) zavl(PO) input_leds sch_fq_codel nf_conntrack_ipv6 nf_defrag_ipv6 
nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack virtio_scsi
  CPU: 0 PID: 4238 Comm: rsync Tainted: P   O4.4.0-112-generic 
#135-Ubuntu
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Ubuntu-1.8.2-1ubuntu1 04/01/2014
  task: 880078a4f2c0 ti: 880047c28000 task.ti: 880047c28000
  RIP: 0010:[]  [] avl_insert+0x33/0xe0 
[zavl]
  RSP: 0018:880047c2bc20  EFLAGS: 00010246
  RAX: 0001 RBX: 880043b46200 RCX: 0001
  RDX:  RSI: 001f880043b46208 RDI: 88005aa0c9a8
  RBP: 880047c2bc20 R08:  R09: 88007d001700
  R10: 880043b46200 R11: 0246 R12: 88005aa0c9a8
  R13: 880043b46200 R14:  R15: 88005aa0c9a8
  FS:  7f04124ec700() GS:88007fc0() knlGS:
  CS:  0010 DS:  ES:  CR0: 80050033
  CR2: 7ffd25c1cb8c CR3: 47cb CR4: 0670
  Stack:
   880047c2bc68 c0313721  0028
   880043b46200 88005aa0c8c8 6b34 
   88005aa0c9a8 880047c2bcc8 c04609ee 
  Call Trace:
   [] avl_add+0x71/0xa0 [zavl]
   [] zfs_range_lock+0x3ee/0x5e0 [zfs]
   [] ? rrw_enter_read_impl+0xbc/0x160 [zfs]
   [] zfs_read+0xd0/0x3c0 [zfs]
   [] ? profile_path_perm.part.7+0x7d/0xa0
   [] zpl_read_common_iovec+0x80/0xd0 [zfs]
   [] zpl_iter_read+0xa0/0xd0 [zfs]
   [] new_sync_read+0x94/0xd0
   [] __vfs_read+0x26/0x40
   [] vfs_read+0x86/0x130
   [] SyS_read+0x55/0xc0
   [] ? entry_SYSCALL_64_after_swapgs+0xd1/0x18c
   [] entry_SYSCALL_64_fastpath+0x2b/0xe7
  Code: 83 e2 01 48 03 77 10 49 83 e0 fe 8d 04 95 00 00 00 00 55 4c 89 c1 48 83 
47 18 01 83 e0 04 48 83 c9 01 48 89 e5 48 09 c8 4d 85 c0 <48> c7 06 00 00 00 00 
48 c7 46 08 00 00 00 00 48 89 46 10 0f 84 
  RIP  [] avl_insert+0x33/0xe0 [zavl]
   RSP 
  ---[ end trace c4ba4478b6002697 ]---

  
  This is the first time it happens but I'll report any future occurrence in 
here.

  Additional info:

  $ lsb_release -rd
  Description:  Ubuntu 16.04.3 LTS
  Release:  16.04

  $ apt-cache policy linux-image-4.4.0-112-generic zfsutils-linux
  linux-image-4.4.0-112-generic:
Installed: 4.4.0-112.135
Candidate: 4.4.0-112.135
Version table:
   *** 4.4.0-112.135 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
Packages
  100 /var/lib/dpkg/status
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu18
Candidate: 0.6.5.6-0ubuntu18
Version table:
   *** 0.6.5.6-0ubuntu18 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-112-generic 4.4.0-112.135
  ProcVersionSignature: Ubuntu 4.4.0-112.135-generic 4.4.98
  Uname: Linux 4.4.0-112-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Feb 14 16:19 seq
   crw-rw 1 root audio 116, 33 Feb 14 16:19 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: [Errno 2] No such file or directory: 'fuser'
  CRDA: N/A
  CurrentDmesg: Error: command ['dmesg'] failed with exit code 1: dmesg: read 
kernel buffer failed: Operation not permitted
  Date: Thu Feb 15 08:45:07 2018
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lspci: Error: [Errno 2] No such file or directory: 'lspci'
  Lsusb: Error: [Errno 2] No such file or directory: 'lsusb'
  MachineType: QEMU Standard PC (i440FX + PIIX, 1996)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-112-generic 
root=UUID=db4864d4-cc2e-40c7-bc2b-a14bc0f09c9f ro console=ttyS0 net.ifnames=0 
kaslr nmi_watchdog=0 possible_cpus=1 vsyscall=none pti=on
  

[Kernel-packages] [Bug 1721589] Re: same mount-point after rename dataset

2018-04-24 Thread Colin Ian King
Without more specific information I'm unable to reproduce this issue.
I'm going to mark it as Won't Fix.

** Changed in: zfs-linux (Ubuntu)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1721589

Title:
  same mount-point after rename dataset

Status in zfs-linux package in Ubuntu:
  Won't Fix

Bug description:
  We renamed a lot of datasets, and some of them are still mounted on the
  old mount-point.

  A reboot or a restart of the zfs-mount service solves it.

  ---

  Ubuntu: 16.04.3 LTS
  Kernel: 4.4.0-93-generic
  zfsutils-linux: 0.6.5.6-0ubuntu18

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1721589/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1722261] Re: deadlock in mount umount and sync

2018-04-24 Thread Colin Ian King
@Aleberto, any feedback from comment #5 would be appreciated.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1722261

Title:
  deadlock in mount umount and sync

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I use ZFS version 0.6.5.6 on Ubuntu 16.04.2 LTS. I have many zombie 
processes on auto-mount of snapshots and on sync!
  903 processes are in deadlock; I can't mount a new file system or snapshot. 
Partial output of ps alx | grep 'call_r D' is below.

  What is the cause? What can I do?

  0 0   2371  1  20   0   6016   752 call_r D?  0:00 
/bin/sync
  0 0  15290  1  20   0   6016   676 call_r D?  0:00 
/bin/sync
  0 0  18919  1  20   0   6016   708 call_r D?  0:00 
/bin/sync
  0 0  27076  1  20   0   6016   808 call_r D?  0:00 
/bin/sync
  4 0  31976  1  20   0  22084  1344 call_r D?  0:00 umount 
-t zfs -n /samba/shares/Aat/.zfs/snapshot/2017-10-04_09.00.05--5d

  Errors in kern.log:
  Oct  9 13:20:28 zfs-cis kernel: [5368563.592834] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.597868] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.601730] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.187001] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.13] WARNING: Unable to automount 
/samba/shares/Cardiologia2/.zfs/snapshot/2017-10-03_12.00.03--5d/pool_z2_samba/shares/Cardiologia2@2017-10-03_12.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.15] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.189005] WARNING: Unable to automount 
/samba/shares/Aat/.zfs/snapshot/2017-10-03_20.00.04--5d/pool_z2_samba/shares/Aat@2017-10-03_20.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.190105] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.192847] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.193617] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.198096] WARNING: Unable to automount 
/samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d:
 256


  
  In syslog:

  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dLaboratorio_5fTrapianti_2emount
 interface=org.freedesktop.DBus.Properties member=GetAll cookie=202 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dProgettoTrapianti_2emount
 interface=org.freedesktop.DBus.Properties member=GetAll cookie=260 
reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dTrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=291 
reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 
object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount 
interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 
reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a 
destination=org.freedesktop.systemd1 

[Kernel-packages] [Bug 1749715] Re: general protection fault in zfs module

2018-04-24 Thread Colin Ian King
@Simon, the log in comment #8 contains different crashes so I'm going to
focus on the original crash report in comment #1.  Has this problem re-
occurred with more recent kernels?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1749715

Title:
   general protection fault in zfs module

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Got this call trace during a rsync backup of a machine using ZFS:

  general protection fault:  [#1] SMP 
  Modules linked in: ip6table_filter ip6_tables xt_tcpudp xt_conntrack 
iptable_filter ip_tables x_tables zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) 
spl(O) zavl(PO) input_leds sch_fq_codel nf_conntrack_ipv6 nf_defrag_ipv6 
nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack virtio_scsi
  CPU: 0 PID: 4238 Comm: rsync Tainted: P   O4.4.0-112-generic 
#135-Ubuntu
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Ubuntu-1.8.2-1ubuntu1 04/01/2014
  task: 880078a4f2c0 ti: 880047c28000 task.ti: 880047c28000
  RIP: 0010:[]  [] avl_insert+0x33/0xe0 
[zavl]
  RSP: 0018:880047c2bc20  EFLAGS: 00010246
  RAX: 0001 RBX: 880043b46200 RCX: 0001
  RDX:  RSI: 001f880043b46208 RDI: 88005aa0c9a8
  RBP: 880047c2bc20 R08:  R09: 88007d001700
  R10: 880043b46200 R11: 0246 R12: 88005aa0c9a8
  R13: 880043b46200 R14:  R15: 88005aa0c9a8
  FS:  7f04124ec700() GS:88007fc0() knlGS:
  CS:  0010 DS:  ES:  CR0: 80050033
  CR2: 7ffd25c1cb8c CR3: 47cb CR4: 0670
  Stack:
   880047c2bc68 c0313721  0028
   880043b46200 88005aa0c8c8 6b34 
   88005aa0c9a8 880047c2bcc8 c04609ee 
  Call Trace:
   [] avl_add+0x71/0xa0 [zavl]
   [] zfs_range_lock+0x3ee/0x5e0 [zfs]
   [] ? rrw_enter_read_impl+0xbc/0x160 [zfs]
   [] zfs_read+0xd0/0x3c0 [zfs]
   [] ? profile_path_perm.part.7+0x7d/0xa0
   [] zpl_read_common_iovec+0x80/0xd0 [zfs]
   [] zpl_iter_read+0xa0/0xd0 [zfs]
   [] new_sync_read+0x94/0xd0
   [] __vfs_read+0x26/0x40
   [] vfs_read+0x86/0x130
   [] SyS_read+0x55/0xc0
   [] ? entry_SYSCALL_64_after_swapgs+0xd1/0x18c
   [] entry_SYSCALL_64_fastpath+0x2b/0xe7
  Code: 83 e2 01 48 03 77 10 49 83 e0 fe 8d 04 95 00 00 00 00 55 4c 89 c1 48 83 
47 18 01 83 e0 04 48 83 c9 01 48 89 e5 48 09 c8 4d 85 c0 <48> c7 06 00 00 00 00 
48 c7 46 08 00 00 00 00 48 89 46 10 0f 84 
  RIP  [] avl_insert+0x33/0xe0 [zavl]
   RSP 
  ---[ end trace c4ba4478b6002697 ]---

  
  This is the first time it happens but I'll report any future occurrence in 
here.

  Additional info:

  $ lsb_release -rd
  Description:  Ubuntu 16.04.3 LTS
  Release:  16.04

  $ apt-cache policy linux-image-4.4.0-112-generic zfsutils-linux
  linux-image-4.4.0-112-generic:
Installed: 4.4.0-112.135
Candidate: 4.4.0-112.135
Version table:
   *** 4.4.0-112.135 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
Packages
  100 /var/lib/dpkg/status
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu18
Candidate: 0.6.5.6-0ubuntu18
Version table:
   *** 0.6.5.6-0ubuntu18 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-112-generic 4.4.0-112.135
  ProcVersionSignature: Ubuntu 4.4.0-112.135-generic 4.4.98
  Uname: Linux 4.4.0-112-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Feb 14 16:19 seq
   crw-rw 1 root audio 116, 33 Feb 14 16:19 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: [Errno 2] No such file or directory: 'fuser'
  CRDA: N/A
  CurrentDmesg: Error: command ['dmesg'] failed with exit code 1: dmesg: read 
kernel buffer failed: Operation not permitted
  Date: Thu Feb 15 08:45:07 2018
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lspci: Error: [Errno 2] No such file or directory: 'lspci'
  Lsusb: Error: [Errno 2] No such file or directory: 'lsusb'
  MachineType: QEMU Standard PC (i440FX + PIIX, 1996)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-112-generic 

[Kernel-packages] [Bug 1725859] Re: zfs frequently hangs (up to 30 secs) during sequential read

2018-04-24 Thread Colin Ian King
Just one more check: what is the physical block size of the raw devices?
One can check this using:

sudo blockdev --getpbsz /dev/sdb
sudo blockdev --getpbsz /dev/sdc
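
For reference, blockdev obtains this via the BLKPBSZGET ioctl; a minimal
C equivalent (a sketch) looks like:

#include <fcntl.h>
#include <linux/fs.h>      /* BLKPBSZGET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    unsigned int pbsz = 0;
    int fd;

    if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKPBSZGET, &pbsz) < 0) {
        perror("BLKPBSZGET");
        close(fd);
        return 1;
    }
    printf("%s: %u bytes\n", argv[1], pbsz);
    close(fd);
    return 0;
}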

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725859

Title:
  zfs frequently hangs (up to 30 secs) during sequential read

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  Updated to artful (17.10) yesterday. Trying to read (play video) from 
mirrored ZFS disks from an external USB 3 enclosure. Zpool is defined as:
   
  root@hetty:/home/crlb# zpool status
pool: storage
   state: ONLINE
scan: resilvered 20K in 0h0m with 0 errors on Fri Oct 20 18:38:49 2017
  config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdc ONLINE   0 0 0

  errors: No known data errors
  root@hetty:/home/crlb#

  Found that I could re-create the problem with:

   rsync -av --progress <source> <destination>

  Also found that:

dd if=/dev/sdX of=/dev/null status=progress bs=1024 count=1000

  Where "X" is either "b" or "c" does not hang.

  Installed:

  root@hetty:/home/crlb# apt list --installed | grep -i zfs
  libzfs2linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfs-zed/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfsutils-linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed]
  root@hetty:/home/crlb#

  Help please.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725859/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-24 Thread Colin Ian King
Tested on Artful with 4.15.0-20, works fine now with this fix too.

** Description changed:

  == SRU Justification [XENIAL][ARTFUL] ==
  
  Symlinks in /dev/zvol are not because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason is
  that the BLKZNAME with non-HWE userspace zfsutils is 1 char different is
  size than the HWE kernel drivers.
  
  == Fix ==
  
  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.07.0 ZFS_MAXNAMELEN size
  as a fallback compatibility call.
  
  == Regression Potential ==
  
  Very small, this changes zvol_id and keeps the original functionality as
  well as adding V7 ZFS functionality as a fallback.  At worse, zvol_id
  will not work, but that's the same as the current broken state.
  
  == Testing ==
  
  create a volume, e.g. zfs create -V 8M ${POOL}/testvol
  
- run /lib/udev/vol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
+ run /lib/udev/zvol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the fix
  zvol_id returns the zvol id and symlinks in /dev/zvol work.
  
  ---
  
  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id crashes
  with ioctl_get_msg failed:Inappropriate ioctl for device.
  
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04
  
  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic
  
  zfsutils-linux 0.6.5.6-0ubuntu19
  
  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

** Tags added: verification-done-artful

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.
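
  A sketch of that retry logic (the buffer sizes and the ioctl encoding
  below are illustrative assumptions, not the exact values from the
  patch):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define NAMELEN_V06 256                 /* assumed v0.6.x size */
    #define NAMELEN_V07 (NAMELEN_V06 - 1)   /* differs by one char */
    #define BLKZNAME_V06 _IOR(0x12, 125, char[NAMELEN_V06])
    #define BLKZNAME_V07 _IOR(0x12, 125, char[NAMELEN_V07])

    int main(int argc, char **argv)
    {
        char name[NAMELEN_V06];
        int fd;

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        /* The buffer size is encoded in the ioctl number, so a kernel
         * built with the other size rejects the request with ENOTTY;
         * retry with the alternate encoding as a fallback. */
        if (ioctl(fd, BLKZNAME_V06, name) < 0 &&
            (errno != ENOTTY || ioctl(fd, BLKZNAME_V07, name) < 0)) {
            perror("ioctl_get_msg failed");
            close(fd);
            return 1;
        }
        printf("%s\n", name);
        close(fd);
        return 0;
    }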

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  create a volume, e.g. zfs create -V 8M ${POOL}/testvol

  run /lib/udev/zvol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-24 Thread Colin Ian King
Note: this does not need fixing for Bionic as the ZFS 0.7.x userspace
tools work fine with the 4.15 bionic kernel.

** Changed in: zfs-linux (Ubuntu Xenial)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu Artful)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
 Assignee: Colin Ian King (colin-king) => (unassigned)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  create a volume, e.g. zfs create -V 8M ${POOL}/testvol

  run /lib/udev/zvol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-24 Thread Colin Ian King
Tested on Xenial with HWE kernel 4.15.0-15 (#16~16.04.1), works fine
with this fix.

** Description changed:

  == SRU Justification [XENIAL][ARTFUL] ==
  
  Symlinks in /dev/zvol are not because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason is
  that the BLKZNAME with non-HWE userspace zfsutils is 1 char different is
  size than the HWE kernel drivers.
  
  == Fix ==
  
  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.07.0 ZFS_MAXNAMELEN size
  as a fallback compatibility call.
  
  == Regression Potential ==
  
  Very small, this changes zvol_id and keeps the original functionality as
  well as adding V7 ZFS functionality as a fallback.  At worse, zvol_id
  will not work, but that's the same as the current broken state.
  
  == Testing ==
  
- run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
+ create a volume, e.g. zfs create -V 8M ${POOL}/testvol
+ 
+ run /lib/udev/vol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the fix
  zvol_id returns the zvol id and symlinks in /dev/zvol work.
  
  ---
  
  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id crashes
  with ioctl_get_msg failed:Inappropriate ioctl for device.
  
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04
  
  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic
  
  zfsutils-linux 0.6.5.6-0ubuntu19
  
  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

** Tags added: verification-done-xenial

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  create a volume, e.g. zfs create -V 8M ${POOL}/testvol

  run /lib/udev/vol_id on /dev/zd0 on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-23 Thread Colin Ian King
Just to note, this is a compat fix that lets older versions of the user
space tools work with newer ZFS 0.7.x kernel drivers, so the fix is not
required in Bionic.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-20 Thread Colin Ian King
** Description changed:

- == SRU Justifcation [XENIAL][ARTFUL] ==
+ == SRU Justification [XENIAL][ARTFUL] ==
  
  Symlinks in /dev/zvol are not because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason is
  that the BLKZNAME with non-HWE userspace zfsutils is 1 char different is
  size than the HWE kernel drivers.
  
  == Fix ==
  
  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.07.0 ZFS_MAXNAMELEN size
  as a fallback compatibility call.
  
  == Regression Potential ==
  
  Very small, this changes zvol_id and keeps the original functionality as
  well as adding V7 ZFS functionality as a fallback.  At worse, zvol_id
  will not work, but that's the same as the current broken state.
  
  == Testing ==
  
  run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the fix
  zvol_id returns the zvol id and symlinks in /dev/zvol work.
  
  ---
  
  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id crashes
  with ioctl_get_msg failed:Inappropriate ioctl for device.
  
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04
  
  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic
  
  zfsutils-linux 0.6.5.6-0ubuntu19
  
  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1763067] Re: zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

2018-04-20 Thread Colin Ian King
** Description changed:

+ == SRU Justifcation [XENIAL][ARTFUL] ==
+ 
+ Symlinks in /dev/zvol are not because /lib/udev/zvol_id exits with
+ ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason is
+ that the BLKZNAME with non-HWE userspace zfsutils is 1 char different is
+ size than the HWE kernel drivers.
+ 
+ == Fix ==
+ 
+ Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
+ and if that fails with ENOTTY retry with the V0.07.0 ZFS_MAXNAMELEN size
+ as a fallback compatibility call.
+ 
+ == Regression Potential ==
+ 
+ Very small, this changes zvol_id and keeps the original functionality as
+ well as adding V7 ZFS functionality as a fallback.  At worse, zvol_id
+ will not work, but that's the same as the current broken state.
+ 
+ == Testing ==
+ 
+ run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
+ kernel.  Without the fix one gets an ENOTTY errno failure. With the fix
+ zvol_id returns the zvol id and symlinks in /dev/zvol work.
+ 
+ ---
+ 
  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id crashes
  with ioctl_get_msg failed:Inappropriate ioctl for device.
  
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04
  
  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic
  
  zfsutils-linux 0.6.5.6-0ubuntu19
  
  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1763067

Title:
  zvol_id throws ioctl_get_msg failed:Inappropriate ioctl for device

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  == SRU Justification [XENIAL][ARTFUL] ==

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id exits with
  ioctl_get_msg failed:Inappropriate ioctl for device.  The core reason
  is that the BLKZNAME buffer in the non-HWE userspace zfsutils is 1 char
  different in size from the one in the HWE kernel drivers.

  == Fix ==

  Change the userland zvol_id tool to use the V0.6.x ZFS_MAXNAMELEN size
  and if that fails with ENOTTY retry with the V0.7.0 ZFS_MAXNAMELEN
  size as a fallback compatibility call.

  == Regression Potential ==

  Very small, this changes zvol_id and keeps the original functionality
  as well as adding V7 ZFS functionality as a fallback.  At worst,
  zvol_id will not work, but that's the same as the current broken
  state.

  == Testing ==

  run /lib/udev/vol_id on /dev/zvol on Xenial with a 4.15 Xenial HWE
  kernel.  Without the fix one gets an ENOTTY errno failure. With the
  fix zvol_id returns the zvol id and symlinks in /dev/zvol work.

  ---

  Symlinks in /dev/zvol are not created because /lib/udev/zvol_id
  crashes with ioctl_get_msg failed:Inappropriate ioctl for device.

  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  Running  linux-generic-hwe-16.04-edge
  4.15.0-13-generic

  zfsutils-linux 0.6.5.6-0ubuntu19

  The version of zvol_id in zfsutils-linux_0.7.5-1ubuntu14_amd64 works
  without error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1763067/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1764320] Re: Thermald sysfs read failed /sys/class/thermal/thermal_zoneX/temp

2018-04-20 Thread Colin Ian King
I'm tracking the fix in Ubuntu with bug report 1765572

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764320

Title:
  Thermald  sysfs read failed /sys/class/thermal/thermal_zoneX/temp

Status in linux package in Ubuntu:
  Confirmed
Status in thermald package in Ubuntu:
  Fix Released

Bug description:
  My /var/log/syslog contains a lot of "sysfs read failed
  /sys/class/thermal/thermal_zone4/temp".

  What is the problem and how to fix it (Ubuntu 18.04 / kernel
  4.15.0-13-generic / thermald 1.7.0-3)?
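
  To see which zone is failing, one can replicate the read thermald
  attempts with a minimal C sketch (not thermald's own code):

    #include <glob.h>
    #include <stdio.h>

    int main(void)
    {
        glob_t g;
        size_t i;

        /* try the same sysfs read thermald performs, on every zone */
        if (glob("/sys/class/thermal/thermal_zone*/temp", 0, NULL, &g) != 0)
            return 1;
        for (i = 0; i < g.gl_pathc; i++) {
            FILE *f = fopen(g.gl_pathv[i], "r");
            long t;

            if (f == NULL || fscanf(f, "%ld", &t) != 1)
                printf("%s: read failed\n", g.gl_pathv[i]);
            else
                printf("%s: %ld millidegrees C\n", g.gl_pathv[i], t);
            if (f != NULL)
                fclose(f);
        }
        globfree(&g);
        return 0;
    }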

  lsb_release -rd
  Description:  Ubuntu Bionic Beaver (development branch)
  Release:  18.04

  apt-cache policy thermald
  thermald:
Installed: 1.7.0-3
Candidate: 1.7.0-3
Version table:
   *** 1.7.0-3 500
  500 http://ch.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  cat /proc/version
  Linux version 4.15.0-13-generic (buildd@lgw01-amd64-023) (gcc version 7.3.0 
(Ubuntu 7.3.0-11ubuntu1)) #14-Ubuntu SMP Sat Mar 17 13:44:27 UTC 2018
  --- 
  ApportVersion: 2.20.9-0ubuntu5
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  pim1581 F pulseaudio
  DistroRelease: Ubuntu 18.04
  InstallationDate: Installed on 2018-04-13 (4 days ago)
  InstallationMedia: Xubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180413)
  Lsusb:
   Bus 002 Device 002: ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit 
Ethernet
   Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
   Bus 001 Device 003: ID 04f2:b56d Chicony Electronics Co., Ltd 
   Bus 001 Device 002: ID 8087:0a2a Intel Corp. 
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: HP HP Pavilion x360 Convertible
  Package: thermald 1.7.0-3
  PackageArchitecture: amd64
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-15-generic.efi.signed 
root=UUID=40ad8820-11eb-4e0b-8556-906bb965f2d1 ro rootflags=subvol=@ quiet 
splash vt.handoff=1
  ProcVersionSignature: Ubuntu 4.15.0-15.16-generic 4.15.15
  RelatedPackageVersions:
   linux-restricted-modules-4.15.0-15-generic N/A
   linux-backports-modules-4.15.0-15-generic  N/A
   linux-firmware 1.173
  Tags:  bionic
  Uname: Linux 4.15.0-15-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dialout dip libvirt lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 11/10/2016
  dmi.bios.vendor: Insyde
  dmi.bios.version: F.21
  dmi.board.asset.tag: Type2 - Board Asset Tag
  dmi.board.name: 81A9
  dmi.board.vendor: HP
  dmi.board.version: 57.52
  dmi.chassis.type: 10
  dmi.chassis.vendor: HP
  dmi.chassis.version: Chassis Version
  dmi.modalias: 
dmi:bvnInsyde:bvrF.21:bd11/10/2016:svnHP:pnHPPavilionx360Convertible:pvrType1ProductConfigId:rvnHP:rn81A9:rvr57.52:cvnHP:ct10:cvrChassisVersion:
  dmi.product.family: 103C_5335KV G=N L=CON B=HP S=PAV
  dmi.product.name: HP Pavilion x360 Convertible
  dmi.product.version: Type1ProductConfigId
  dmi.sys.vendor: HP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1764320/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1764320] Re: Thermald sysfs read failed /sys/class/thermal/thermal_zoneX/temp

2018-04-19 Thread Colin Ian King
Thanks, Ben, for spotting the bug and for the fix; I'm uploading it to
Debian right now and will sync it into Ubuntu as soon as it is ready.
Much appreciated.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764320

Title:
  Thermald  sysfs read failed /sys/class/thermal/thermal_zoneX/temp

Status in linux package in Ubuntu:
  Confirmed
Status in thermald package in Ubuntu:
  Fix Released

Bug description:
  My /var/log/syslog contains a lot of "sysfs read failed
  /sys/class/thermal/thermal_zone4/temp".

  What is the problem and how to fix it (Ubuntu 18.04 / kernel
  4.15.0-13-generic / thermald 1.7.0-3)?

  lsb_release -rd
  Description:  Ubuntu Bionic Beaver (development branch)
  Release:  18.04

  apt-cache policy thermald
  thermald:
Installed: 1.7.0-3
Candidate: 1.7.0-3
Version table:
   *** 1.7.0-3 500
  500 http://ch.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  cat /proc/version
  Linux version 4.15.0-13-generic (buildd@lgw01-amd64-023) (gcc version 7.3.0 
(Ubuntu 7.3.0-11ubuntu1)) #14-Ubuntu SMP Sat Mar 17 13:44:27 UTC 2018
  --- 
  ApportVersion: 2.20.9-0ubuntu5
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  pim1581 F pulseaudio
  DistroRelease: Ubuntu 18.04
  InstallationDate: Installed on 2018-04-13 (4 days ago)
  InstallationMedia: Xubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180413)
  Lsusb:
   Bus 002 Device 002: ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit 
Ethernet
   Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
   Bus 001 Device 003: ID 04f2:b56d Chicony Electronics Co., Ltd 
   Bus 001 Device 002: ID 8087:0a2a Intel Corp. 
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: HP HP Pavilion x360 Convertible
  Package: thermald 1.7.0-3
  PackageArchitecture: amd64
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-15-generic.efi.signed 
root=UUID=40ad8820-11eb-4e0b-8556-906bb965f2d1 ro rootflags=subvol=@ quiet 
splash vt.handoff=1
  ProcVersionSignature: Ubuntu 4.15.0-15.16-generic 4.15.15
  RelatedPackageVersions:
   linux-restricted-modules-4.15.0-15-generic N/A
   linux-backports-modules-4.15.0-15-generic  N/A
   linux-firmware 1.173
  Tags:  bionic
  Uname: Linux 4.15.0-15-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dialout dip libvirt lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 11/10/2016
  dmi.bios.vendor: Insyde
  dmi.bios.version: F.21
  dmi.board.asset.tag: Type2 - Board Asset Tag
  dmi.board.name: 81A9
  dmi.board.vendor: HP
  dmi.board.version: 57.52
  dmi.chassis.type: 10
  dmi.chassis.vendor: HP
  dmi.chassis.version: Chassis Version
  dmi.modalias: 
dmi:bvnInsyde:bvrF.21:bd11/10/2016:svnHP:pnHPPavilionx360Convertible:pvrType1ProductConfigId:rvnHP:rn81A9:rvr57.52:cvnHP:ct10:cvrChassisVersion:
  dmi.product.family: 103C_5335KV G=N L=CON B=HP S=PAV
  dmi.product.name: HP Pavilion x360 Convertible
  dmi.product.version: Type1ProductConfigId
  dmi.sys.vendor: HP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1764320/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1723948] Re: error -lock in zfs send

2018-04-18 Thread Colin Ian King
Not sure why there is little activity on the upstream bug report.

If you are using a ZFS intent log, it may be worth setting the
zil_slog_limit parameter to a fairly large size, such as 128 or 256 MB.
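
For example, a minimal sketch that writes the module parameter (the /sys
path and the parameter name are assumed from ZFS 0.6.x; verify they exist
on your system):

#include <stdio.h>

int main(void)
{
    /* 128 MiB; run as root */
    FILE *f = fopen("/sys/module/zfs/parameters/zil_slog_limit", "w");

    if (f == NULL) {
        perror("zil_slog_limit");
        return 1;
    }
    fprintf(f, "%llu\n", 128ULL * 1024 * 1024);
    return fclose(f) == 0 ? 0 : 1;
}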

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1723948

Title:
  error -lock in zfs send

Status in Native ZFS for Linux:
  New
Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  zfs send stopped working; the process did not produce output (verified
  via the mbuffer log).

  in kernel.log: 
  Oct 15 07:25:53 zfs-cis kernel: [479439.151281] INFO: task zfs:8708 blocked for more than 120 seconds.
  Oct 15 07:25:53 zfs-cis kernel: [479439.156980]   Tainted: P   OE   4.4.0-96-generic #119-Ubuntu
  Oct 15 07:25:53 zfs-cis kernel: [479439.162688] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  Oct 15 07:25:53 zfs-cis kernel: [479439.173974] zfs D 88197bd77318 0  8708   8141 0x
  Oct 15 07:25:53 zfs-cis kernel: [479439.173981]  88197bd77318 810c3dc2 8820374cf000 881b645ff000
  Oct 15 07:25:53 zfs-cis kernel: [479439.173985]  88197bd78000 00792c6d 882030aa4ac8 
  Oct 15 07:25:53 zfs-cis kernel: [479439.173989]  88101dac1840 88197bd77330 8183f165 882030aa4a00
  Oct 15 07:25:53 zfs-cis kernel: [479439.173993] Call Trace:
  Oct 15 07:25:53 zfs-cis kernel: [479439.174006]  [] ? __wake_up_common+0x52/0x90
  Oct 15 07:25:53 zfs-cis kernel: [479439.174023]  [] schedule+0x35/0x80
  Oct 15 07:25:53 zfs-cis kernel: [479439.174045]  [] taskq_wait_id+0x60/0xb0 [spl]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174051]  [] ? wake_atomic_t_function+0x60/0x60
  Oct 15 07:25:53 zfs-cis kernel: [479439.174115]  [] ? dump_write+0x230/0x230 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174178]  [] spa_taskq_dispatch_sync+0x92/0xd0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174223]  [] dump_bytes+0x51/0x70 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174267]  [] dump_write+0x11e/0x230 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174311]  [] backup_cb+0x633/0x850 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174356]  [] traverse_visitbp+0x47a/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174365]  [] ? spl_kmem_alloc+0xaf/0x190 [spl]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174409]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174462]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174503]  [] traverse_dnode+0x7f/0xe0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174542]  [] traverse_visitbp+0x6cc/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174579]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174616]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174653]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174690]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174726]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174762]  [] traverse_visitbp+0x5c0/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174799]  [] traverse_dnode+0x7f/0xe0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174835]  [] traverse_visitbp+0x865/0x960 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174871]  [] traverse_impl+0x1ae/0x410 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174908]  [] ? dmu_recv_end_check+0x210/0x210 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174944]  [] traverse_dataset+0x52/0x60 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.174981]  [] ? dmu_recv_end_check+0x210/0x210 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175017]  [] dmu_send_impl+0x409/0x560 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175060]  [] dmu_send_obj+0x172/0x1e0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175129]  [] zfs_ioc_send+0xe9/0x2c0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175143]  [] ? strdup+0x3b/0x60 [spl]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175207]  [] zfsdev_ioctl+0x44b/0x4e0 [zfs]
  Oct 15 07:25:53 zfs-cis kernel: [479439.175218]  [] do_vfs_ioctl+0x29f/0x490
  Oct 15 07:25:53 zfs-cis kernel: [479439.175225]  [] ? _do_fork+0xec/0x360
  Oct 15 07:25:53 zfs-cis kernel: [479439.175232]  [] SyS_ioctl+0x79/0x90
  Oct 15 07:25:53 zfs-cis kernel: [479439.175242]  [] entry_SYSCALL_64_fastpath+0x16/0x71

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1723948/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1764320] Re: Thermald sysfs read failed /sys/class/thermal/thermal_zoneX/temp

2018-04-18 Thread Colin Ian King
I'm adding a small change to thermald so it does not continually spam
the log.

** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: thermald (Ubuntu)
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764320

Title:
  Thermald  sysfs read failed /sys/class/thermal/thermal_zoneX/temp

Status in linux package in Ubuntu:
  Incomplete
Status in thermald package in Ubuntu:
  In Progress

Bug description:
  My /var/log/syslog contains a lot of "sysfs read failed
  /sys/class/thermal/thermal_zone4/temp".

  What is the problem and how can it be fixed (Ubuntu 18.04 / kernel
  4.15.0-13-generic / thermald 1.7.0-3)?
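
  As a quick sanity check (a generic sysfs walk, not a thermald
  feature), the zone types and readings can be listed to see which zone
  is the one that fails:

  for z in /sys/class/thermal/thermal_zone*; do
      printf '%s %s: ' "$z" "$(cat "$z/type")"
      cat "$z/temp" 2>/dev/null || echo 'read failed'
  done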

  lsb_release -rd
  Description:  Ubuntu Bionic Beaver (development branch)
  Release:  18.04

  apt-cache policy thermald
  thermald:
Installed: 1.7.0-3
Candidate: 1.7.0-3
Version table:
   *** 1.7.0-3 500
  500 http://ch.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status

  cat /proc/version
  Linux version 4.15.0-13-generic (buildd@lgw01-amd64-023) (gcc version 7.3.0 
(Ubuntu 7.3.0-11ubuntu1)) #14-Ubuntu SMP Sat Mar 17 13:44:27 UTC 2018

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1764320/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1764810] Re: Xenial: rfkill: fix missing return on rfkill_init

2018-04-17 Thread Colin Ian King
** Description changed:

+ == SRU Justification ==
+ 
  A previous backport to bug LP: #1745130 overlooked adding in
  an error return that was introduced by commit 6124c53edeea. Fix
  this by adding in the missing return.
  
  Detected by CoverityScan, CID#1467925 ("Missing return statement")
  
  Fixes: b9a5fffbaee6 ("rfkill: Add rfkill-any LED trigger")
+ 
+ == Fix ==
+ 
+ Add missing return error code
+ 
+ == Test ==
+ 
+ N/A
+ 
+ == Regression Potential ==
+ 
+ Minimal, this fixes the broken backport, so the change is small and
+ restores the original error handling behaviour.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764810

Title:
  Xenial: rfkill: fix missing return on rfkill_init

Status in linux package in Ubuntu:
  In Progress

Bug description:
  == SRU Justification ==

  A previous backport to bug LP: #1745130 overlooked adding in
  an error return that was introduced by commit 6124c53edeea. Fix
  this by adding in the missing return.

  Detected by CoverityScan, CID#1467925 ("Missing return statement")

  Fixes: b9a5fffbaee6 ("rfkill: Add rfkill-any LED trigger")

  == Fix ==

  Add missing return error code

  == Test ==

  N/A

  == Regression Potential ==

  Minimal, this fixes the broken backport, so the change is small and
  restores the original error handling behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1764810/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1722261] Re: deadlock in mount umount and sync

2018-04-17 Thread Colin Ian King
Couple of things:

1. Next time this occurs, can you run the following command and paste
the output into the bug report:

sudo zpool events -v

That may provide some additional information on what state the IOs are
in.

2. If you are using a ZFS intent log, it may be worth trying to set the
zil_slog_limit tunable to a fairly large size, such as 128 or 256 MB.
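
As a sketch, the current value can be checked before changing it
(assuming the module exposes the parameter via sysfs):

    # current value of the tunable, in bytes
    cat /sys/module/zfs/parameters/zil_slog_limit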

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1722261

Title:
  deadlock in mount umount and sync

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I use ZFS version 0.6.5.6 on Ubuntu 16.04.2 LTS. I have many zombie
  processes on auto-mount of snapshots and on sync! 903 processes are in
  deadlock. I can't mount a new file system or snapshot. Partial output
  of ps alx | grep 'call_r D' is below.
Partial output of ps alx |grep 'call_r D'  below .

  What is the cause? What can I do?

  0 0   2371  1  20   0   6016   752 call_r D?  0:00 /bin/sync
  0 0  15290  1  20   0   6016   676 call_r D?  0:00 /bin/sync
  0 0  18919  1  20   0   6016   708 call_r D?  0:00 /bin/sync
  0 0  27076  1  20   0   6016   808 call_r D?  0:00 /bin/sync
  4 0  31976  1  20   0  22084  1344 call_r D?  0:00 umount -t zfs -n /samba/shares/Aat/.zfs/snapshot/2017-10-04_09.00.05--5d

  error in kern.log:
  Oct  9 13:20:28 zfs-cis kernel: [5368563.592834] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.597868] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:20:28 zfs-cis kernel: [5368563.601730] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_08.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_08.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.187001] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.13] WARNING: Unable to automount /samba/shares/Cardiologia2/.zfs/snapshot/2017-10-03_12.00.03--5d/pool_z2_samba/shares/Cardiologia2@2017-10-03_12.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.15] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.189005] WARNING: Unable to automount /samba/shares/Aat/.zfs/snapshot/2017-10-03_20.00.04--5d/pool_z2_samba/shares/Aat@2017-10-03_20.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.190105] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.192847] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_10.00.03--5d/pool_z2_samba/shares/Trapianti@2017-10-03_10.00.03--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.193617] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256
  Oct  9 13:22:57 zfs-cis kernel: [5368713.198096] WARNING: Unable to automount /samba/shares/Trapianti/.zfs/snapshot/2017-10-03_14.00.04--5d/pool_z2_samba/shares/Trapianti@2017-10-03_14.00.04--5d: 256


  
  in syslog:

  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dChtrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=155 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dLaboratorio_5fTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=202 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dProgettoTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=260 reply_cookie=0 error=n/a
  Oct  9 12:22:12 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 object=/org/freedesktop/systemd1/unit/samba_2dshares_2dTrapianti_2emount interface=org.freedesktop.DBus.Properties member=GetAll cookie=291 reply_cookie=0 error=n/a
  Oct  9 12:22:13 zfs-cis systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1

[Kernel-packages] [Bug 1761104] Re: fix regression in mm/hotplug, allows NVIDIA driver to work

2018-04-17 Thread Colin Ian King
** Description changed:

  == SRU Justification, ARTFUL ==
  
  Bug fix #1747069 causes an issue for NVIDIA drivers on ppc64el
  platforms.  According to Will Davis at NVIDIA:
  
  "- The original patch 3d79a728f9b2e6ddcce4e02c91c4de1076548a4c changed
  the call to arch_add_memory in mm/memory_hotplug.c to call with the
  boolean argument set to true instead of false, and inverted the
  semantics of that argument in the arch layers.
  
  - The revert patch 4fe85d5a7c50f003fe4863a1a87f5d8cc121c75c reverted the
  semantic change in the arch layers, but didn't revert the change to the
  arch_add_memory call in mm/memory_hotplug.c"
  
  And also:
  
  "It looks like the problem here is that the online_type is _MOVABLE but
  can_online_high_movable(nid=255) is returning false:
  
- if ((zone_idx(zone) > ZONE_NORMAL ||
- online_type == MMOP_ONLINE_MOVABLE) &&
- !can_online_high_movable(pfn_to_nid(pfn)))
+ if ((zone_idx(zone) > ZONE_NORMAL ||
+ online_type == MMOP_ONLINE_MOVABLE) &&
+ !can_online_high_movable(pfn_to_nid(pfn)))
  
  This check was removed by upstream commit
  57c0a17238e22395428248c53f8e390c051c88b8, and I've verified that if I apply
  that commit (partially) to the 4.13.0-37.42 tree along with the previous
  arch_add_memory patch to make the probe work, I can fully online the GPU 
device
  memory as expected.
  
  Commit 57c0a172.. implies that the can_online_high_movable() checks weren't
  useful anyway, so in addition to the arch_add_memory fix, does it make sense 
to
  revert the pieces of 4fe85d5a7c50f003fe4863a1a87f5d8cc121c75c that added back
  the can_online_high_movable() check?"
  
  == Fix ==
  
  Fix partial backport from bug #1747069, remove can_online_high_movable
  and fix the incorrectly set boolean argument to arch_add_memory().
  
+ == Testing ==
+ 
+ run ADT memory hotplug test, should not regress this. Without the fix,
+ the nvidia driver on powerpc will not load because it cannot map memory
+ for the device. With the fix it loads.
+ 
  == Regression Potential ==
  
  This fixes a regression in the original fix and hence the regression
  potential is the same as the previously SRU'd bug fix for #1747069,
  namely:
  
  "Reverting this commit does remove some functionality, however this does
  not regress the kernel compared to previous releases and having a
  working reliable memory hotplug is the preferred option. This fix does
  touch some memory hotplug, so there is a risk that this may break this
  functionality that is not covered by the kernel regression testing."

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1761104

Title:
  fix regression in mm/hotplug, allows NVIDIA driver to work

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Artful:
  In Progress

Bug description:
  == SRU Justification, ARTFUL ==

  Bug fix #1747069 causes an issue for NVIDIA drivers on ppc64el
  platforms.  According to Will Davis at NVIDIA:

  "- The original patch 3d79a728f9b2e6ddcce4e02c91c4de1076548a4c changed
  the call to arch_add_memory in mm/memory_hotplug.c to call with the
  boolean argument set to true instead of false, and inverted the
  semantics of that argument in the arch layers.

  - The revert patch 4fe85d5a7c50f003fe4863a1a87f5d8cc121c75c reverted
  the semantic change in the arch layers, but didn't revert the change
  to the arch_add_memory call in mm/memory_hotplug.c"

  And also:

  "It looks like the problem here is that the online_type is _MOVABLE but
  can_online_high_movable(nid=255) is returning false:

  if ((zone_idx(zone) > ZONE_NORMAL ||
  online_type == MMOP_ONLINE_MOVABLE) &&
  !can_online_high_movable(pfn_to_nid(pfn)))

  This check was removed by upstream commit
  57c0a17238e22395428248c53f8e390c051c88b8, and I've verified that if I apply
  that commit (partially) to the 4.13.0-37.42 tree along with the previous
  arch_add_memory patch to make the probe work, I can fully online the GPU 
device
  memory as expected.

  Commit 57c0a172.. implies that the can_online_high_movable() checks weren't
  useful anyway, so in addition to the arch_add_memory fix, does it make sense 
to
  revert the pieces of 4fe85d5a7c50f003fe4863a1a87f5d8cc121c75c that added back
  the can_online_high_movable() check?"

  == Fix ==

  Fix partial backport from bug #1747069, remove can_online_high_movable
  and fix the incorrectly set boolean argument to arch_add_memory().

  == Testing ==

  run ADT memory hotplug test, should not regress this. Without the fix,
  the nvidia driver on powerpc will not load because it cannot map
  memory for the device. With the fix it loads.

  == Regression Potential ==

  This fixes a regression in the original fix and hence the regression
  potential is the same as the previously SRU'd bug fix for #1747069,
  

[Kernel-packages] [Bug 1754584] Re: zfs system process hung on container stop/delete

2018-04-17 Thread Colin Ian King
Thanks for verifying Joshua. Much appreciated!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1754584

Title:
  zfs system process hung on container stop/delete

Status in Native ZFS for Linux:
  New
Status in linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Fix Committed
Status in zfs-linux source package in Xenial:
  Fix Committed
Status in linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Artful:
  Fix Committed
Status in linux source package in Bionic:
  Fix Released
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request [Xenial][Artful] ==

  == Justification ==

  It is possible to hang ZFS asynchronous reads if a read to a page that
  is mmap'd onto the file being read is at the same offset in the
  mapping as in the file. This is caused by two lock operations on the
  page.

  == Fix ==

  Upstream ZFS fix to ensure the page is not double-locked during async
  I/O of one or more pages.

  == Testing ==

  Create a zfs pool + zfs file system, run the reproducer program in
  comment #28 on the zfs filesystem.  Without the fix this can lock up,
  with the fix this runs to completion.

  == Regression Potential ==

  Minimal, the locking fix addresses a fundamental bug in the locking
  and this should not affect ZFS read/write I/O with this fix.

  --

  Summary:
  On a Bionic system running 4.15.0-10-generic, after attempting to build 
libaio in a Bionic daily container I cannot stop or delete the container. dmesg 
shows a variety of hung tasks

  Steps to Reproduce:
  Use the following script and watch for the hang. At that point attempt
  to stop or delete the container: http://paste.ubuntu.com/p/SxfgbxM8v7/

  Originally filed against LXD: https://github.com/lxc/lxd/issues/4314

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: linux-image-4.15.0-10-generic 4.15.0-10.11
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC1:  powersj2414 F pulseaudio
   /dev/snd/controlC0:  powersj2414 F pulseaudio
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Mar  9 09:19:11 2018
  HibernationDevice: RESUME=UUID=40a4eb28-4454-44f0-a377-ea611ce685bb
  InstallationDate: Installed on 2018-02-19 (17 days ago)
  InstallationMedia: Ubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180214)
  Lsusb:
   Bus 001 Device 002: ID 8087:8001 Intel Corp.
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
   Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
   Bus 002 Device 002: ID 04f2:b45d Chicony Electronics Co., Ltd
   Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  MachineType: LENOVO 20BSCTO1WW
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-10-generic root=/dev/mapper/ubuntu--vg-root ro
  RelatedPackageVersions:
   linux-restricted-modules-4.15.0-10-generic N/A
   linux-backports-modules-4.15.0-10-generic  N/A
   linux-firmware 1.172
  RfKill:
   0: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 09/13/2017
  dmi.bios.vendor: LENOVO
  dmi.bios.version: N14ET42W (1.20 )
  dmi.board.asset.tag: Not Available
  dmi.board.name: 20BSCTO1WW
  dmi.board.vendor: LENOVO
  dmi.board.version: SDK0E50512 STD
  dmi.chassis.asset.tag: No Asset Information
  dmi.chassis.type: 10
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: None
  dmi.modalias: dmi:bvnLENOVO:bvrN14ET42W(1.20):bd09/13/2017:svnLENOVO:pn20BSCTO1WW:pvrThinkPadX1Carbon3rd:rvnLENOVO:rn20BSCTO1WW:rvrSDK0E50512STD:cvnLENOVO:ct10:cvrNone:
  dmi.product.family: ThinkPad X1 Carbon 3rd
  dmi.product.name: 20BSCTO1WW
  dmi.product.version: ThinkPad X1 Carbon 3rd
  dmi.sys.vendor: LENOVO
  ---
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC1:  powersj1878 F pulseaudio
   /dev/snd/controlC0:  powersj1878 F pulseaudio
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 18.04
  HibernationDevice: RESUME=UUID=40a4eb28-4454-44f0-a377-ea611ce685bb
  InstallationDate: Installed on 2018-02-19 (17 days ago)
  InstallationMedia: Ubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180214)
  Lsusb:
   Bus 001 Device 002: ID 8087:8001 Intel Corp.
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 

[Kernel-packages] [Bug 1764690] Re: SRU: bionic: apply 50 ZFS upstream bugfixes

2018-04-17 Thread Colin Ian King
** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: linux (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764690

Title:
  SRU: bionic: apply 50 ZFS upstream bugfixes

Status in linux package in Ubuntu:
  In Progress
Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  SRU Justification, bionic

  Apply the first round of SRU bugfixes for ZFS from 0.7.5 onwards from
  upstream ZFS repository. This round fixes the following ZFS bugs:

  - OpenZFS 8373 - TXG_WAIT in ZIL commit path
    Closes zfsonlinux #6403
  - zfs promote|rename .../%recv should be an error
    Closes zfsonlinux #4843, #6339
  - Fix parsable 'zfs get' for compressratios
    Closes zfsonlinux #6436, #6449
  - Fix zpool events scripted mode tab separator
    Closes zfsonlinux #6444, #6445
  - zv_suspend_lock in zvol_open()/zvol_release()
    Closes zfsonlinux #6342
  - Allow longer SPA names in stats, allows bigger pool names
    Closes zfsonlinux #6481
  - vdev_mirror: load balancing fixes
    Closes zfsonlinux #6461
  - Fix zfs_ioc_pool_sync should not use fnvlist
    Closes zfsonlinux #6529
  - OpenZFS 8375 - Kernel memory leak in nvpair code
    Closes zfsonlinux #6578
  - OpenZFS 7261 - nvlist code should enforce name length limit
    Closes zfsonlinux #6579
  - OpenZFS 5778 - nvpair_type_is_array() does not recognize
    DATA_TYPE_INT8_ARRAY
    Closes zfsonlinux #6580
  - dmu_objset: release bonus buffer in failure path
    Closes zfsonlinux #6575
  - Fix false config_cache_write events
    Closes zfsonlinux #6617
  - Fix printk() calls missing log level
    Closes zfsonlinux #6672
  - Fix abdstats kstat on 32-bit systems
    Closes zfsonlinux #6721
  - Relax ASSERT for #6526
    Closes zfsonlinux #6526
  - Fix coverity defects: 147480, 147584 (Logically dead code)
    Closes zfsonlinux #6745
  - Fix coverity defects: CID 161388 (Resource Leak)
    Closes zfsonlinux #6755
  - Use ashift=12 by default on SSDSC2BW48 disks
    Closes zfsonlinux #6774
  - OpenZFS 8558, 8602 - lwp_create() returns EAGAIN
    Closes zfsonlinux #6779
  - ZFS send fails to dump objects larger than 128PiB
    Closes zfsonlinux #6760
  - Sort output of tunables in arc_summary.py
    Closes zfsonlinux #6828
  - Fix data on evict_skips in arc_summary.py
    Closes zfsonlinux #6882, #6883
  - Fix segfault in zpool iostat when adding VDEVs
    Closes zfsonlinux #6748, #6872
  - ZTS: Fix create-o_ashift test case
    Closes zfsonlinux #6924, #6877
  - Handle invalid options in arc_summary
    Closes zfsonlinux #6983
  - Call commit callbacks from the tail of the list
    Closes zfsonlinux #6986
  - Fix 'zpool add' handling of nested interior VDEVs
    Closes zfsonlinux #6678, #6996
  - Fix -fsanitize=address memory leak
    kmem_alloc(0, ...) in userspace returns a leakable pointer.
    Closes zfsonlinux #6941
  - Revert raidz_map and _col structure types
    Closes zfsonlinux #6981, #7023
  - Use zap_count instead of cached z_size for unlink
    Closes zfsonlinux #7019
  - OpenZFS 8897 - zpool online -e fails assertion when run on non-leaf
    vdevs
    Closes zfsonlinux #7030
  - OpenZFS 8898 - creating fs with checksum=skein on the boot pools
    fails ungracefully
    Closes zfsonlinux #7031
  - Emit an error message before MMP suspends pool
    Closes zfsonlinux #7048
  - OpenZFS 8641 - "zpool clear" and "zinject" don't work on "spare"
    or "replacing" vdevs
    Closes zfsonlinux #7060
  - OpenZFS 8835 - Speculative prefetch in ZFS not working for
    misaligned reads
    Closes zfsonlinux #7062
  - OpenZFS 8972 - zfs holds: In scripted mode, do not pad columns with
    spaces
    Closes zfsonlinux #7063
  - Revert "Remove wrong ASSERT in annotate_ecksum"
    Closes zfsonlinux #7079
  - OpenZFS 8731 - ASSERT3U(nui64s, <=, UINT16_MAX) fails for large
    blocks
    Closes zfsonlinux #7079
  - Prevent zdb(8) from occasionally hanging on I/O
    Closes zfsonlinux #6999
  - Fix 'zfs receive -o' when used with '-e|-d'
    Closes zfsonlinux #7088
  - Change movaps to movups in AES-NI code
    Closes zfsonlinux #7065, #7108
  - tx_waited -> tx_dirty_delayed in trace_dmu.h
    Closes zfsonlinux #7096
  - OpenZFS 8966 - Source file zfs_acl.c, function
    Cl

[Kernel-packages] [Bug 1764690] Re: SRU: bionic: apply 50 ZFS upstream bugfixes

2018-04-17 Thread Colin Ian King
** Description changed:

  SRU Justification, bionic
  
  Apply the first round of SRU bugfixes for ZFS from 0.7.5 onwards from
  upstream ZFS repository. This round fixes the following ZFS bugs:
  
- - OpenZFS 8373 - TXG_WAIT in ZIL commit path
-   Closes zfsonlinux #6403
- - zfs promote|rename .../%recv should be an error
-   Closes zfsonlinux #4843, #6339
- - Fix parsable 'zfs get' for compressratios
-   Closes zfsonlinux #6436, #6449
- - Fix zpool events scripted mode tab separator
-   Closes zfsonlinux #6444, #6445
- - zv_suspend_lock in zvol_open()/zvol_release()
-   Closes zfsonlinux #6342
- - Allow longer SPA names in stats, allows bigger pool names
-   Closes zfsonlinux #6481
- - vdev_mirror: load balancing fixes
-   Closes zfsonlinux #6461
- - Fix zfs_ioc_pool_sync should not use fnvlist
-   Closes zfsonlinux #6529
- - OpenZFS 8375 - Kernel memory leak in nvpair code
-   Closes zfsonlinux #6578
- - OpenZFS 7261 - nvlist code should enforce name length limit
-   Closes zfsonlinux #6579
- - OpenZFS 5778 - nvpair_type_is_array() does not recognize
-   DATA_TYPE_INT8_ARRAY
-   Closes zfsonlinux #6580
- - dmu_objset: release bonus buffer in failure path
-   Closes zfsonlinux #6575
- - Fix false config_cache_write events
-   Closes zfsonlinux #6617
- - Fix printk() calls missing log level
-   Closes zfsonlinux #6672
- - Fix abdstats kstat on 32-bit systems
-   Closes zfsonlinux #6721
- - Relax ASSERT for #6526
-   Closes zfsonlinux #6526
- - Fix coverity defects: 147480, 147584 (Logically dead code)
-   Closes zfsonlinux #6745
- - Fix coverity defects: CID 161388 (Resource Leak)
-   Closes zfsonlinux #6755
- - Use ashift=12 by default on SSDSC2BW48 disks
-   Closes zfsonlinux #6774
- - OpenZFS 8558, 8602 - lwp_create() returns EAGAIN
-   Closes zfsonlinux #6779
- - ZFS send fails to dump objects larger than 128PiB
-   Closes zfsonlinux #6760
- - Sort output of tunables in arc_summary.py
-   Closes zfsonlinux #6828
- - Fix data on evict_skips in arc_summary.py
-   Closes zfsonlinux #6882, #6883
- - Fix segfault in zpool iostat when adding VDEVs
-   Closes zfsonlinux #6748, #6872
- - ZTS: Fix create-o_ashift test case
-   Closes zfsonlinux #6924, #6877
- - Handle invalid options in arc_summary
-   Closes zfsonlinux #6983
- - Call commit callbacks from the tail of the list
-   Closes zfsonlinux #6986
- - Fix 'zpool add' handling of nested interior VDEVs
-   Closes zfsonlinux #6678, #6996
- - Fix -fsanitize=address memory leak
-   kmem_alloc(0, ...) in userspace returns a leakable pointer.
-   Closes zfsonlinux #6941
- - Revert raidz_map and _col structure types
-   Closes zfsonlinux #6981, #7023
- - Use zap_count instead of cached z_size for unlink
-   Closes zfsonlinux #7019
- - OpenZFS 8897 - zpool online -e fails assertion when run on non-leaf
-   vdevs
-   Closes zfsonlinux #7030
- - OpenZFS 8898 - creating fs with checksum=skein on the boot pools
-   fails ungracefully
-   Closes zfsonlinux #7031
- - Emit an error message before MMP suspends pool
-   Closes zfsonlinux #7048
- - OpenZFS 8641 - "zpool clear" and "zinject" don't work on "spare"
-   or "replacing" vdevs
-   Closes zfsonlinux #7060
- - OpenZFS 8835 - Speculative prefetch in ZFS not working for
-   misaligned reads
-   Closes zfsonlinux #7062
- - OpenZFS 8972 - zfs holds: In scripted mode, do not pad columns with
-   spaces
-   Closes zfsonlinux #7063
- - Revert "Remove wrong ASSERT in annotate_ecksum"
-   Closes zfsonlinux #7079
- - OpenZFS 8731 - ASSERT3U(nui64s, <=, UINT16_MAX) fails for large
-   blocks
-   Closes zfsonlinux #7079
- - Prevent zdb(8) from occasionally hanging on I/O
-   Closes zfsonlinux #6999
- - Fix 'zfs receive -o' when used with '-e|-d'
-   Closes zfsonlinux #7088
- - Change movaps to movups in AES-NI code
-   Closes zfsonlinux #7065, #7108
- - tx_waited -> tx_dirty_delayed in trace_dmu.h
-   Closes zfsonlinux #7096
- - OpenZFS 8966 - Source file zfs_acl.c, function
-   Closes zfsonlinux #7141
- - Fix zdb -c traverse stop on damaged objset root
-   Closes zfsonlinux #7099
- - Fix zle_decompress out of bound access
-   Closes zfsonlinux #7099
- - Fix racy assignment of zcb.zcb_haderrors
-   Closes zfsonlinux #7099
- - Fix zdb -R decompression
-   Closes zfsonlinux #7099, #4984
- - Fix zdb -E segfault
-   Closes zfsonlinux #7099
- - Fix zdb -ed on objset for exported pool
-   Closes zfsonlinux #7099, #6464
+ - OpenZFS 8373 - TXG_WAIT in ZIL commit path
+   Closes zfsonlinux #6403
+ - zfs promote|rename .../%recv should be an error
+   Closes zfsonlinux #4843, #6339
+ - Fix 

[Kernel-packages] [Bug 1764690] [NEW] SRU: bionic: apply 50 ZFS upstream bugfixes

2018-04-17 Thread Colin Ian King
With the fixes, none of this should fail, hang or regress.

** Affects: zfs-linux (Ubuntu)
 Importance: Medium
 Assignee: Colin Ian King (colin-king)
 Status: In Progress

** Changed in: zfs-linux (Ubuntu)
   Importance: Undecided => Medium

** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Colin Ian King (colin-king)

** Changed in: zfs-linux (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1764690

Title:
  SRU: bionic: apply 50 ZFS upstream bugfixes

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  SRU Justification, bionic

  Apply the first round of SRU bugfixes for ZFS from 0.7.5 onwards from
  upstream ZFS repository. This round fixes the following ZFS bugs:

  - OpenZFS 8373 - TXG_WAIT in ZIL commit path
Closes zfsonlinux #6403
  - zfs promote|rename .../%recv should be an error
Closes zfsonlinux #4843, #6339
  - Fix parsable 'zfs get' for compressratios
Closes zfsonlinux #6436, #6449
  - Fix zpool events scripted mode tab separator
Closes zfsonlinux #6444, #6445
  - zv_suspend_lock in zvol_open()/zvol_release()
Closes zfsonlinux #6342
  - Allow longer SPA names in stats, allows bigger pool names
Closes zfsonlinux #6481
  - vdev_mirror: load balancing fixes
Closes zfsonlinux #6461
  - Fix zfs_ioc_pool_sync should not use fnvlist
Closes zfsonlinux #6529
  - OpenZFS 8375 - Kernel memory leak in nvpair code
Closes zfsonlinux #6578
  - OpenZFS 7261 - nvlist code should enforce name length limit
Closes zfsonlinux #6579
  - OpenZFS 5778 - nvpair_type_is_array() does not recognize
DATA_TYPE_INT8_ARRAY
Closes zfsonlinux #6580
  - dmu_objset: release bonus buffer in failure path
Closes zfsonlinux #6575
  - Fix false config_cache_write events
Closes zfsonlinux #6617
  - Fix printk() calls missing log level
Closes zfsonlinux #6672
  - Fix abdstats kstat on 32-bit systems
Closes zfsonlinux #6721
  - Relax ASSERT for #6526
Closes zfsonlinux #6526
  - Fix coverity defects: 147480, 147584 (Logically dead code)
Closes zfsonlinux #6745
  - Fix coverity defects: CID 161388 (Resource Leak)
Closes zfsonlinux #6755
  - Use ashift=12 by default on SSDSC2BW48 disks
Closes zfsonlinux #6774
  - OpenZFS 8558, 8602 - lwp_create() returns EAGAIN
Closes zfsonlinux #6779
  - ZFS send fails to dump objects larger than 128PiB
Closes zfsonlinux #6760
  - Sort output of tunables in arc_summary.py
Closes zfsonlinux #6828
  - Fix data on evict_skips in arc_summary.py
Closes zfsonlinux #6882, #6883
  - Fix segfault in zpool iostat when adding VDEVs
Closes zfsonlinux #6748, #6872
  - ZTS: Fix create-o_ashift test case
Closes zfsonlinux #6924, #6877
  - Handle invalid options in arc_summary
Closes zfsonlinux #6983
  - Call commit callbacks from the tail of the list
Closes zfsonlinux #6986
  - Fix 'zpool add' handling of nested interior VDEVs
Closes zfsonlinux #6678, #6996
  - Fix -fsanitize=address memory leak
kmem_alloc(0, ...) in userspace returns a leakable pointer.
Closes zfsonlinux #6941
  - Revert raidz_map and _col structure types
Closes zfsonlinux #6981, #7023
  - Use zap_count instead of cached z_size for unlink
Closes zfsonlinux #7019
  - OpenZFS 8897 - zpool online -e fails assertion when run on non-leaf
vdevs
Closes zfsonlinux #7030
  - OpenZFS 8898 - creating fs with checksum=skein on the boot pools
fails ungracefully
Closes zfsonlinux #7031
  - Emit an error message before MMP suspends pool
Closes zfsonlinux #7048
  - OpenZFS 8641 - "zpool clear" and "zinject" don't work on "spare"
or "replacing" vdevs
Closes zfsonlinux #7060
  - OpenZFS 8835 - Speculative prefetch in ZFS not working for
misaligned reads
Closes zfsonlinux #7062
  - OpenZFS 8972 - zfs holds: In scripted mode, do not pad columns with
spaces
Closes zfsonlinux #7063
  - Revert "Remove wrong ASSERT in annotate_ecksum"
Closes zfsonlinux #7079
  - OpenZFS 8731 - ASSERT3U(nui64s, <=, UINT16_MAX) fails for large
blocks
Closes zfsonlinux #7079
  - Prevent zdb(8) from occasionally hanging on I/O
Closes zfsonlinux #6999
  - Fix 'zfs receive -o' when used with '-e|-d'
Closes zfsonlinux #7088
  - Change movaps to movups in AES-NI code
Closes zfsonlinux #7065, #7108
  - tx_waited -> tx_dirty_delayed in trace_dmu.h
Closes zfsonlinux #709

[Kernel-packages] [Bug 1749715] Re: general protection fault in zfs module

2018-04-13 Thread Colin Ian King
The logs would be useful, maybe I can figure out something from them.
Thanks!

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1749715

Title:
   general protection fault in zfs module

Status in Native ZFS for Linux:
  New
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Got this call trace during an rsync backup of a machine using ZFS:

  general protection fault:  [#1] SMP 
  Modules linked in: ip6table_filter ip6_tables xt_tcpudp xt_conntrack iptable_filter ip_tables x_tables zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) spl(O) zavl(PO) input_leds sch_fq_codel nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack virtio_scsi
  CPU: 0 PID: 4238 Comm: rsync Tainted: P   O    4.4.0-112-generic #135-Ubuntu
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
  task: 880078a4f2c0 ti: 880047c28000 task.ti: 880047c28000
  RIP: 0010:[]  [] avl_insert+0x33/0xe0 [zavl]
  RSP: 0018:880047c2bc20  EFLAGS: 00010246
  RAX: 0001 RBX: 880043b46200 RCX: 0001
  RDX:  RSI: 001f880043b46208 RDI: 88005aa0c9a8
  RBP: 880047c2bc20 R08:  R09: 88007d001700
  R10: 880043b46200 R11: 0246 R12: 88005aa0c9a8
  R13: 880043b46200 R14:  R15: 88005aa0c9a8
  FS:  7f04124ec700() GS:88007fc0() knlGS:
  CS:  0010 DS:  ES:  CR0: 80050033
  CR2: 7ffd25c1cb8c CR3: 47cb CR4: 0670
  Stack:
   880047c2bc68 c0313721  0028
   880043b46200 88005aa0c8c8 6b34 
   88005aa0c9a8 880047c2bcc8 c04609ee 
  Call Trace:
   [] avl_add+0x71/0xa0 [zavl]
   [] zfs_range_lock+0x3ee/0x5e0 [zfs]
   [] ? rrw_enter_read_impl+0xbc/0x160 [zfs]
   [] zfs_read+0xd0/0x3c0 [zfs]
   [] ? profile_path_perm.part.7+0x7d/0xa0
   [] zpl_read_common_iovec+0x80/0xd0 [zfs]
   [] zpl_iter_read+0xa0/0xd0 [zfs]
   [] new_sync_read+0x94/0xd0
   [] __vfs_read+0x26/0x40
   [] vfs_read+0x86/0x130
   [] SyS_read+0x55/0xc0
   [] ? entry_SYSCALL_64_after_swapgs+0xd1/0x18c
   [] entry_SYSCALL_64_fastpath+0x2b/0xe7
  Code: 83 e2 01 48 03 77 10 49 83 e0 fe 8d 04 95 00 00 00 00 55 4c 89 c1 48 83 47 18 01 83 e0 04 48 83 c9 01 48 89 e5 48 09 c8 4d 85 c0 <48> c7 06 00 00 00 00 48 c7 46 08 00 00 00 00 48 89 46 10 0f 84
  RIP  [] avl_insert+0x33/0xe0 [zavl]
   RSP 
  ---[ end trace c4ba4478b6002697 ]---

  
  This is the first time it has happened, but I'll report any future
  occurrences here.

  Additional info:

  $ lsb_release -rd
  Description:  Ubuntu 16.04.3 LTS
  Release:  16.04

  $ apt-cache policy linux-image-4.4.0-112-generic zfsutils-linux
  linux-image-4.4.0-112-generic:
Installed: 4.4.0-112.135
Candidate: 4.4.0-112.135
Version table:
   *** 4.4.0-112.135 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
Packages
  100 /var/lib/dpkg/status
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu18
Candidate: 0.6.5.6-0ubuntu18
Version table:
   *** 0.6.5.6-0ubuntu18 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   0.6.5.6-0ubuntu8 500
  500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-112-generic 4.4.0-112.135
  ProcVersionSignature: Ubuntu 4.4.0-112.135-generic 4.4.98
  Uname: Linux 4.4.0-112-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Feb 14 16:19 seq
   crw-rw 1 root audio 116, 33 Feb 14 16:19 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: [Errno 2] No such file or directory: 'fuser'
  CRDA: N/A
  CurrentDmesg: Error: command ['dmesg'] failed with exit code 1: dmesg: read kernel buffer failed: Operation not permitted
  Date: Thu Feb 15 08:45:07 2018
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lspci: Error: [Errno 2] No such file or directory: 'lspci'
  Lsusb: Error: [Errno 2] No such file or directory: 'lsusb'
  MachineType: QEMU Standard PC (i440FX + PIIX, 1996)
  PciMultimedia:
   
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-112-generic root=UUID=db4864d4-cc2e-40c7-bc2b-a14bc0f09c9f ro console=ttyS0

[Kernel-packages] [Bug 1725859] Re: zfs frequently hangs (up to 30 secs) during sequential read

2018-04-13 Thread Colin Ian King
The iostat -mx output shows that the /dev/sdb and /dev/sdc devices have
a very high read I/O latency of ~80-110 milliseconds, which may be the
underlying issue.
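
For reference, an illustrative way to reproduce that measurement (iostat
is in the sysstat package; r_await is the average read latency in
milliseconds):

    sudo apt-get install sysstat
    iostat -mx 5 sdb sdc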

I suggest measuring how long it takes to read data from these raw
devices to sanity-check their maximum speed. One can do this as follows:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/dev/sdb of=/dev/null bs=512 count=1048576

...and wait for the 512MB of data to be read to see what the read speed
of the raw device is.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1725859

Title:
  zfs frequently hangs (up to 30 secs) during sequential read

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  Updated to artful (17.10) yesterday. Trying to read (play video) from 
mirrored ZFS disks from an external USB 3 enclosure. Zpool is defined as:
   
  root@hetty:/home/crlb# zpool status
pool: storage
   state: ONLINE
scan: resilvered 20K in 0h0m with 0 errors on Fri Oct 20 18:38:49 2017
  config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdc ONLINE   0 0 0

  errors: No known data errors
  root@hetty:/home/crlb#

  Found that I could re-create the problem with:

   rsync -av --progress  

  Also found that:

dd if=/dev/sdX of=/dev/null status=progress bs=1024 count=1000

  Where "X" is either "b" or "c" does not hang.

  Installed:

  root@hetty:/home/crlb# apt list --installed | grep -i zfs
  libzfs2linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfs-zed/artful,now 0.6.5.11-1ubuntu3 amd64 [installed,automatic]
  zfsutils-linux/artful,now 0.6.5.11-1ubuntu3 amd64 [installed]
  root@hetty:/home/crlb#

  Help please.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1725859/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1760173] Re: zfs, zpool commands hangs for 10 seconds without a /dev/zfs

2018-04-12 Thread Colin Ian King
** Changed in: zfs-linux (Ubuntu)
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1760173

Title:
  zfs, zpool commands hangs for 10 seconds without a /dev/zfs

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  1. # lsb_release -rd
  Description:  Ubuntu 16.04.4 LTS
  Release:  16.04

  2. # apt-cache policy zfsutils-linux
  zfsutils-linux:
Installed: 0.6.5.6-0ubuntu19
Candidate: 0.6.5.6-0ubuntu19
Version table:
   *** 0.6.5.6-0ubuntu19 500
  500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. When inside an LXD container with ZFS storage, zfs list or zpool
  status should either return or report what's going on.

  4. When inside an LXD container with ZFS storage, zfs list or zpool
  status appears to hang, with no output for 10 seconds.

  strace reveals that without a /dev/zfs the tools wait for it to appear
  for 10 seconds, but they do not provide a command-line switch to
  disable the wait or make it more verbose.
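
  A minimal way to observe the wait (an illustrative command, assuming
  /dev/zfs is absent inside the container):

    # -r prefixes each syscall with the time since the previous one,
    # which makes the retry loop on /dev/zfs visible
    strace -r -e trace=open,openat zpool status 2>&1 | grep -F /dev/zfs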

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: zfsutils-linux 0.6.5.6-0ubuntu19
  ProcVersionSignature: Ubuntu 4.13.0-36.40~16.04.1-generic 4.13.13
  Uname: Linux 4.13.0-36-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.15
  Architecture: amd64
  Date: Fri Mar 30 18:09:29 2018
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1760173/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

