Validated with ZFS from focal-proposed, following the test case from the description:
ubuntu@z-rotomvm34:~$ dpkg -l | grep zfsutils
ii  zfsutils-linux                       0.8.3-1ubuntu12.12                    amd64        command-line tools to manage OpenZFS filesystems
ubuntu@z-rotomvm34:~$ zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
rpool               2.50G  25.6G      176K  /
rpool/ROOT          2.50G  25.6G      176K  none
rpool/ROOT/zfsroot  2.50G  25.6G     2.50G  /
ubuntu@z-rotomvm34:~$ sudo journalctl -b | grep -i ordering
ubuntu@z-rotomvm34:~$ lsblk -e 7
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda         252:0    0   30G  0 disk
├─vda1      252:1    0  512M  0 part  /boot/efi
└─vda2      252:2    0 29.5G  0 part
nvme0n1     259:0    0  9.8G  0 disk
└─nvme0n1p1 259:1    0  9.8G  0 part
  └─swap    253:0    0  9.8G  0 crypt [SWAP]

In addition to the test above, I've tested the configurations
suggested in the [Test Plan] section. Besides validating the ordering
bug fix, I ran basic smoke tests and verified that the ZFS pools
work as expected.

- Encrypted rootfs on LVM + separate ZFS partitions:
ubuntu@ubuntu-focal:~$ zfs list
NAME           USED  AVAIL     REFER  MOUNTPOINT
zfspool        492K  4.36G       96K  /mnt/zfspool
zfspool/tank    96K  4.36G       96K  /mnt/zfspool/tank
ubuntu@ubuntu-focal:~$ dpkg -l | grep zfsutils
ii  zfsutils-linux                       0.8.3-1ubuntu12.12                    amd64        command-line tools to manage OpenZFS filesystems
ubuntu@ubuntu-focal:~$ sudo journalctl -b | grep -i ordering
ubuntu@ubuntu-focal:~$ lsblk -e7
NAME                         MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sr0                           11:0    1 1024M  0 rom
vda                          252:0    0   30G  0 disk
├─vda1                       252:1    0  512M  0 part  /boot/efi
├─vda2                       252:2    0    1K  0 part
├─vda5                       252:5    0  731M  0 part  /boot
└─vda6                       252:6    0 28.8G  0 part
  └─vda6_crypt               253:0    0 28.8G  0 crypt
    ├─vgubuntu--focal-root   253:1    0 27.8G  0 lvm   /
    └─vgubuntu--focal-swap_1 253:2    0  980M  0 lvm   [SWAP]
vdb                          252:16   0    5G  0 disk
├─vdb1                       252:17   0    5G  0 part
└─vdb9                       252:25   0    8M  0 part
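
The separate pool in this scenario can be created along these lines (a sketch; /dev/vdb, the pool name and the mountpoint are taken from the listing above and will differ on other systems):

```shell
# Create a pool on the spare disk with the mountpoint shown above,
# plus a child dataset. Run against a disposable disk only.
sudo zpool create -m /mnt/zfspool zfspool /dev/vdb
sudo zfs create zfspool/tank
```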

- ZFS on LUKS:
ubuntu@z-rotomvm33:~$ dpkg -l | grep zfsutils
ii  zfsutils-linux                       0.8.3-1ubuntu12.12                    amd64        command-line tools to manage OpenZFS filesystems
ubuntu@z-rotomvm33:~$ zfs list
NAME           USED  AVAIL     REFER  MOUNTPOINT
zfspool        612K  9.20G       96K  /mnt/zfspool
zfspool/tank    96K  9.20G       96K  /mnt/zfspool/tank
ubuntu@z-rotomvm33:~$ sudo journalctl -b | grep -i ordering
ubuntu@z-rotomvm33:~$ lsblk -e7
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda         252:0    0   30G  0 disk
├─vda1      252:1    0  512M  0 part  /boot/efi
└─vda2      252:2    0 29.5G  0 part  /
nvme0n1     259:0    0  9.8G  0 disk
└─nvme0n1p1 259:1    0  9.8G  0 part
  └─zfspool 253:0    0  9.8G  0 crypt
ubuntu@z-rotomvm33:~$ cat /etc/crypttab
# <target name> <source device>         <key file>      <options>
zfspool /dev/nvme0n1p1 /etc/keyfile luks
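
For reference, a setup like this can be reproduced roughly as follows (a sketch, assuming the device and keyfile from the crypttab above; note that luksFormat destroys existing data on the partition):

```shell
# Sketch: LUKS container unlocked via keyfile, with a ZFS pool on top.
sudo dd if=/dev/urandom of=/etc/keyfile bs=4096 count=1
sudo chmod 0400 /etc/keyfile
sudo cryptsetup luksFormat /dev/nvme0n1p1 /etc/keyfile
sudo cryptsetup open --key-file /etc/keyfile /dev/nvme0n1p1 zfspool
sudo zpool create -m /mnt/zfspool zfspool /dev/mapper/zfspool
sudo zfs create zfspool/tank
```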

- ZFS on dm-raid:
ubuntu@z-rotomvm33:~$ dpkg -l | grep zfsutils
ii  zfsutils-linux                       0.8.3-1ubuntu12.12                    amd64        command-line tools to manage OpenZFS filesystems
ubuntu@z-rotomvm33:~$ zfs list
NAME           USED  AVAIL     REFER  MOUNTPOINT
zfspool        612K  9.20G       96K  /mnt/zfspool
zfspool/tank    96K  9.20G       96K  /mnt/zfspool/tank
ubuntu@z-rotomvm33:~$ sudo journalctl -b | grep -i ordering
ubuntu@z-rotomvm33:~$ lsblk -e7
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda         252:0    0   30G  0 disk
├─vda1      252:1    0  512M  0 part  /boot/efi
└─vda2      252:2    0 29.5G  0 part  /
nvme0n1     259:0    0  9.8G  0 disk
└─md127       9:127  0  9.8G  0 raid0
  ├─md127p1 259:1    0  9.8G  0 part
  └─md127p9 259:2    0    8M  0 part
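
This scenario can be set up along these lines (a sketch; a single-member RAID0 matches the lsblk output above, though mdadm needs --force to accept it — normally you'd pass two or more devices):

```shell
# Sketch: md RAID0 array used as the vdev of a ZFS pool.
sudo mdadm --create /dev/md127 --level=0 --force --raid-devices=1 /dev/nvme0n1
sudo zpool create -m /mnt/zfspool zfspool /dev/md127
sudo zfs create zfspool/tank
```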


** Tags added: verification-done verification-done-focal

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Focal:
  Fix Committed

Bug description:
  [Impact]
  Encrypted swap partitions may not load correctly with ZFS root, due to an ordering cycle on zfs-mount.service.

  [Test Plan]
  1. Install Ubuntu 20.04 using ZFS-on-root
  2. Add encrypted partition to /etc/crypttab:
     swap    /dev/nvme0n1p1  /dev/urandom    swap,cipher=aes-xts-plain64,size=256
  3. Add swap partition to /etc/fstab:
     /dev/mapper/swap        none    swap    sw      0 0
  4. Reboot and check whether swap has loaded correctly, and whether the boot logs show an ordering cycle:
  [    6.638228] systemd[1]: systemd-random-seed.service: Found ordering cycle on zfs-mount.service/start
  [    6.639418] systemd[1]: systemd-random-seed.service: Found dependency on zfs-import.target/start
  [    6.640474] systemd[1]: systemd-random-seed.service: Found dependency on zfs-import-cache.service/start
  [    6.641637] systemd[1]: systemd-random-seed.service: Found dependency on cryptsetup.target/start
  [    6.642734] systemd[1]: systemd-random-seed.service: Found dependency on systemd-cryptsetup@swap.service/start
  [    6.643951] systemd[1]: systemd-random-seed.service: Found dependency on systemd-random-seed.service/start
  [    6.645098] systemd[1]: systemd-random-seed.service: Job zfs-mount.service/start deleted to break ordering cycle starting with systemd-random-seed.service/start
  [ SKIP ] Ordering cycle found, skipping Mount ZFS filesystems
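
  The checks in step 4 can be scripted roughly as follows (a sketch; it assumes the crypttab/fstab entries above and a fixed zfsutils-linux — on an affected system the grep prints the cycle messages instead of nothing):

```shell
# Post-reboot verification sketch for the test plan above.
swapon --show                      # the encrypted swap device should be listed
zfs list                           # ZFS datasets should be mounted as usual
sudo journalctl -b | grep -i "ordering cycle" || echo "no ordering cycle found"
systemctl is-active zfs-mount.service   # should report "active"
```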

  [Where problems could occur]
  Since we're changing zfs-mount-generator, regressions could show up
during mounting of ZFS partitions. We should thoroughly test different
ZFS scenarios, such as ZFS-on-root, separate ZFS partitions and the
presence of swap, to make sure all partitions are mounted correctly and
no ordering cycles are present.

  Below is a list of suggested test scenarios that we should check for 
regressions:
  1. ZFS-on-root + encrypted swap (see "Test Plan" section above)
  2. Encrypted root + separate ZFS partitions
  3. ZFS on LUKS
  4. ZFS on dm-raid

  Although scenario 4 is usually advised against (ZFS itself should
  handle RAID), it's a good smoke test to validate that mount order is
  being handled correctly.

  [Other Info]
  This has been fixed upstream by the following commits:
  * ec41cafee1da Fix a dependency loop [0]
  * 62663fb7ec19 Fix another dependency loop [1]

  The patches above were introduced in version 2.1.0, with upstream
  backports to zfs-2.0. In Ubuntu, the fix is present in Groovy and
  later releases, so it's still needed in Focal.

  $ rmadison -a source zfs-linux
   zfs-linux | 0.8.3-1ubuntu12    | focal           | source
   zfs-linux | 0.8.3-1ubuntu12.9  | focal-security  | source
   zfs-linux | 0.8.3-1ubuntu12.10 | focal-updates   | source
   zfs-linux | 0.8.4-1ubuntu11    | groovy          | source
   zfs-linux | 0.8.4-1ubuntu11.2  | groovy-updates  | source
   zfs-linux | 2.0.2-1ubuntu5     | hirsute         | source
   zfs-linux | 2.0.3-8ubuntu5     | impish          | source

  [0] https://github.com/openzfs/zfs/commit/ec41cafee1da
  [1] https://github.com/openzfs/zfs/commit/62663fb7ec19

  ORIGINAL DESCRIPTION
  ====================

  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:       Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:      20.04
  Codename:     focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
    Installed: (none)
    Candidate: 2:2.2.2-3ubuntu2
    Version table:
       2:2.2.2-3ubuntu2 500
          500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ======================

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  2. Ubuntu 20.04 installed on ZFS root using debootstrap
  (debootstrap_1.0.118ubuntu1_all)

  3. The ZFS root pool is a 2 partition mirror (the first partition of
  each disk)

  4. /etc/crypttab is set up as follows:

  swap      /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2    /dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap      /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802933-part2    /dev/urandom   swap,cipher=aes-xts-plain64,size=256

  WHAT I EXPECTED
  ===============

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper

  WHAT HAPPENED INSTEAD
  =====================

  On reboot, swap setup fails with the following messages in
  /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [    5.360793] systemd[1]: cryptsetup.target: Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [    5.360795] systemd[1]: cryptsetup.target: Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [    5.360796] systemd[1]: cryptsetup.target: Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [    5.360797] systemd[1]: cryptsetup.target: Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [    5.360798] systemd[1]: cryptsetup.target: Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [    5.360799] systemd[1]: cryptsetup.target: Job systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [    5.361082] systemd[1]: Unnecessary job for /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-00000_S3W6NX0M802914-part2 was removed

  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw------- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  And top shows no swap:

  MiB Swap:      0.0 total,      0.0 free,      0.0 used.  63153.6 avail
  Mem
