The output only shows the results, not the order of the mount attempts.
There may well be an ordering bug here, but we need to rule out the other
explanation first: the directory may simply be non-empty, in which case
the /var/share mount failed even though it was properly attempted first.

Try manually unmounting everything under /var/share. Then rmdir the
empty lxc directory. Once you are certain that /var/share is empty, re-
run zfs mount -a.
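
A minimal sketch of those steps (run as root; this assumes only the lxc
datasets are mounted under /var/share, as the df output below suggests):

```shell
#!/bin/sh
# Unmount everything under /var/share, deepest datasets first.
zfs unmount rpool/VARSHARE/lxc/xenial/rootfs-amd64
zfs unmount rpool/VARSHARE/lxc/xenial/pkg
zfs unmount rpool/VARSHARE/lxc/xenial
zfs unmount rpool/VARSHARE/lxc

# Remove the leftover directory that blocks the parent mount.
rmdir /var/share/lxc

# Confirm the mountpoint is empty (no output expected), then retry.
ls -A /var/share
zfs mount -a
```

If `ls -A` still prints anything, some other file or mount is occupying
/var/share and must be cleared before `zfs mount -a` can succeed.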

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.

  ZoL: wrong import order prevents boot

Status in zfs-linux package in Ubuntu:

Bug description:
  I've the following zfs:

  # zfs list -r rpool/VARSHARE
  NAME                                     USED  AVAIL  REFER  MOUNTPOINT
  rpool/VARSHARE                           114K   165G    30K  /var/share
  rpool/VARSHARE/lxc                        84K   165G    19K  /var/share/lxc
  rpool/VARSHARE/lxc/xenial                 65K   165G    19K  
  rpool/VARSHARE/lxc/xenial/pkg             19K   165G    19K  
  rpool/VARSHARE/lxc/xenial/rootfs-amd64    27K   165G    27K  

  On boot, we see

           Starting Mount ZFS filesystems...
  [FAILED] Failed to start Mount ZFS filesystems.
  See 'systemctl status zfs-mount.service' for details.
  Welcome to emergency mode!
  Press Enter for maintenance
  (or press Control-D to continue): 

  # df -h /var/share
  rpool/VARSHARE/lxc                        165G     0  165G   0% /var/share/lxc
  rpool/VARSHARE/lxc/xenial                 165G     0  165G   0% 
  rpool/VARSHARE/lxc/xenial/pkg             165G     0  165G   0% 
  rpool/VARSHARE/lxc/xenial/rootfs-amd64    165G     0  165G   0% 

  Obviously rpool/VARSHARE - the parent of rpool/VARSHARE/lxc - was not
  mounted, even though the canmount property is set to on for all of them,
  rpool/VARSHARE's mountpoint is set to /var/share, and the children of
  rpool/VARSHARE/lxc inherit their mountpoints from it.

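  Those property settings can be double-checked with something like the
  following (read-only; `-o name,property,value,source` are the standard
  zfs get output columns):

```shell
# List canmount and mountpoint for the whole subtree; the SOURCE column
# shows whether each value is local, default, or inherited from a parent.
zfs get -r -o name,property,value,source canmount,mountpoint rpool/VARSHARE
```
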
  # systemctl status zfs-mount.service
  ● zfs-mount.service - Mount ZFS filesystems
     Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2017-05-28 04:51:46 CEST; 13min ago
    Process: 6935 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
   Main PID: 6935 (code=exited, status=1/FAILURE)

  May 28 04:51:45 ares systemd[1]: Starting Mount ZFS filesystems...
  May 28 04:51:45 ares zfs[6935]: cannot mount '/var/share': directory is not empty
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
  May 28 04:51:46 ares systemd[1]: Failed to start Mount ZFS filesystems.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Unit entered failed state.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

  So 'zfs mount ...' seems to be severely buggy.
