Public bug reported:
I have been booting from ZFS since the beginning of the year. I have a multi-boot setup with:
- Ubuntu 19.10 with VirtualBox, booting from ZFS
- Ubuntu MATE 19.10 with QEMU/KVM, booting from ZFS
- Ubuntu 19.10 booting from ext4
I have two problems with ZFS:
- the last update of ZFS failed because the dataset with mountpoint "/" was not
empty. Of course it was not empty; it contained the second OS, i.e. Ubuntu MATE.
- during startup my data pools were not mounted. That is a regression; I have
had this issue for approximately a month.
I can solve both by changing the mountpoint of the other system's root dataset
from "/" to e.g. "/systems/roots/mate". Afterwards the update completed
without problems and the system rebooted with the data pools mounted as
expected.
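The workaround described above can be sketched as follows; the pool and
dataset names here (rpool/ROOT/mate) are hypothetical and should be replaced
with the actual names shown by "zfs list":

```shell
# Move the other system's root dataset away from "/" so that
# "zfs mount -a" no longer tries to mount a second root over it.
# Dataset name is an assumption -- check "zfs list" for the real one.
sudo zfs set mountpoint=/systems/roots/mate rpool/ROOT/mate

# Verify the property took effect, then mount everything:
zfs get mountpoint rpool/ROOT/mate
sudo zfs mount -a
```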
I think "zfs mount -a" should NOT try to mount datasets with mountpoint
"/", except the running system's own root dataset, or it should turn the
error into a warning and continue the mount process.
** Affects: zfs-linux (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852793
Title:
Various problems related to "zfs mount -a"
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852793/+subscriptions
--
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs