You had a setup with multiple root filesystems, each with canmount=on and mountpoint=/, so both tried to mount automatically at /. (When booting in the root-on-ZFS config, one was already mounted as your root filesystem.) ZFS, unlike other Linux filesystems, refuses to mount over a non-empty directory, so mounting over / fails. That made `zfs mount -a` fail, which is the underlying command for zfs-mount.service. With the mount failing, you got into a state where some datasets mounted but not all of them, which left empty directories for some mountpoints inside the wrong filesystems. Those empty directories then continued to break `zfs mount -a`.
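For anyone hitting this later, the conflict is visible directly in `zfs list` output. A minimal sketch (the rpool/ROOT/* dataset names here are hypothetical; substitute whatever `zfs list` shows on your system):

  zfs list -o name,canmount,mountpoint
  # NAME                 CANMOUNT  MOUNTPOINT
  # rpool/ROOT/ubuntu_a  on        /          <-- two datasets both claim /
  # rpool/ROOT/ubuntu_b  on        /

  # canmount=noauto keeps each root from auto-mounting; on a root-on-ZFS
  # system the boot process mounts the selected root explicitly instead:
  zfs set canmount=noauto rpool/ROOT/ubuntu_a
  zfs set canmount=noauto rpool/ROOT/ubuntu_b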
In your case, the relevant directory was likely /vms/vms, which was preventing you from mounting the vms dataset at /vms. To be absolutely clear, this was because /vms was non-empty: it contained /vms/vms. You first fixed the underlying issue with the root filesystems by setting canmount=noauto on both of them. That still left the second problem. Once you `rmdir`ed the directory(ies) that were in the way, mounting worked correctly.

Separately from the issues above, it's best practice NOT to store anything in the root dataset of a pool (the dataset with the same name as the pool, in this case "vms"), because that dataset can never be renamed. If you're not actually using the "vms" dataset itself, I suggest the following:

  zfs unmount vms/vms
  rmdir /vms/vms  # this rmdir is not required, but I like to clean up completely
  zfs unmount vms
  zfs set canmount=off vms
  zfs mount vms/vms
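If you want to double-check the end state after running that sequence, `zfs get` should confirm it (a sketch assuming the layout described above; the exact SOURCE column will vary):

  zfs get canmount,mounted,mountpoint vms vms/vms
  # expect: vms      canmount=off, mounted=no
  #         vms/vms  mounted=yes at /vms/vms

With canmount=off, the pool root dataset is never mounted itself, but its children still inherit the /vms path, which is why vms/vms lands at /vms/vms.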
