Public bug reported:
I accidentally booted a computer with a USB drive inserted. After the
system booted, I was unable to log in to my desktop environment. I
dropped to a terminal and found that the ZFS pool had not been imported
and its filesystem was not mounted; I was left in the root directory.
zpool status reported that no pools were available, which was worrying.
After unplugging the USB drive and rebooting, the ZFS pool came back
and I could log in as usual.
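For anyone who hits the same state, ZFS can rescan attached devices for
importable pools; a minimal recovery sketch from the broken boot,
assuming the pool itself is intact:

  # Scan all block devices for importable pools and list what is found.
  sudo zpool import
  # Import the pool the scan found; mine is named "zpool".
  sudo zpool import zpool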
The output of lsblk on a normal boot is as follows:

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
...
sda           8:0    0 111.8G  0 disk
├─sda1        8:1    0    32G  0 part [SWAP]
└─sda2        8:2    0  79.8G  0 part
sdb           8:16   0 931.5G  0 disk
├─sdb1        8:17   0 931.5G  0 part
└─sdb9        8:25   0     8M  0 part
sdc           8:32   0 931.5G  0 disk
├─sdc1        8:33   0 931.5G  0 part
└─sdc9        8:41   0     8M  0 part
sdd           8:48   0 931.5G  0 disk
├─sdd1        8:49   0 931.5G  0 part
└─sdd9        8:57   0     8M  0 part
...
nvme0n1     259:0    0 232.9G  0 disk
├─nvme0n1p1 259:1    0     8M  0 part
├─nvme0n1p2 259:2    0   512M  0 part /boot/efi
└─nvme0n1p3 259:3    0 232.4G  0 part /
The output of zpool status on a normal boot is as follows:

  pool: zpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	cache
	  sda2      ONLINE       0     0     0

errors: No known data errors
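Note that the pool records its members as bare sdX device nodes, which
the kernel hands out in enumeration order and which can shift when an
extra drive is present at boot. udev also maintains persistent symlinks
for the same disks, which can be listed like this (actual names depend
on the hardware):

  # Stable identifiers that survive /dev/sdX reshuffles across boots.
  ls -l /dev/disk/by-id/
  ls -l /dev/disk/by-path/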
I suspect what happened is that the USB drive took over a device node
formerly assigned to one of the ZFS disks, and that threw the ZFS
import off. Detaching the drive let the device nodes be reassigned to
the ZFS disks, so the system booted normally again.
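If that diagnosis is right, a commonly suggested workaround is to
re-import the pool by persistent identifiers so that later node
shuffles no longer matter. A minimal sketch, assuming the pool is named
"zpool" as above and nothing is using its datasets during the export:

  # Export the pool, then re-import it so ZFS records the stable
  # /dev/disk/by-id paths instead of the bare sdX nodes.
  sudo zpool export zpool
  sudo zpool import -d /dev/disk/by-id zpool
  # The members should now be listed under their by-id names.
  zpool status zpool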
Can a fix be devised for this? Could ZFS do some extra work, such as
assigning a UUID to its volumes at creation time, and then at import
time, instead of relying only on the stored device nodes, probe each
attached drive for matching UUIDs and mount the volume based on those?
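For what it is worth, ZFS already writes a unique GUID into the on-disk
label of every pool member, and zpool import locates pools by scanning
those labels; the question is really whether the boot-time import
should fall back to such a scan when its cached device paths go stale.
The label of one member can be inspected like this (sdb1 being the data
partition from the listing above):

  # Dump the on-disk vdev label; the output includes pool_guid and a
  # per-device guid, both of which survive device node renaming.
  sudo zdb -l /dev/sdb1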
** Affects: zfs-linux (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1938462
Title:
If a computer is booted with a USB drive installed, ZFS will fail to
find any pools