I will read it a few more times, because it is complex. In the past I
booted from ext4 and had stored all my ~15 virtual machines in vms/vms
(and on the desktop I have vms/kvm too). I was pleased with the
instantaneous response times in the VMs, because the Linux VMs ran
almost completely
You had a setup with multiple root filesystems, each of which had
canmount=on and mountpoint=/. So they both tried to mount automatically
at /. (When booting in the root-on-ZFS config, one was already mounted
as your root filesystem.) ZFS, unlike other Linux filesystems, refuses
to mount over non-empty directories.
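The refusal can be demonstrated without touching a pool at all: the condition ZFS checks is simply whether the mountpoint directory already contains entries. A minimal sketch in plain shell, where a temporary directory stands in for /hp-data or /vms/vms:

```shell
# Minimal reproduction (no ZFS needed) of the condition behind
# "cannot mount '/': directory is not empty": the mountpoint
# directory already has entries, so the mount is refused.
mp=$(mktemp -d)           # stands in for /hp-data or /vms/vms
touch "$mp/stale-file"    # leftover entry, as after an unclean export
if [ -n "$(ls -A "$mp")" ]; then
  status="not empty"      # this is when ZFS refuses to mount here
  echo "ZFS would refuse to mount over $mp: $status"
fi
rm "$mp/stale-file"
rmdir "$mp"               # rmdir succeeds only once the directory is empty again
```

This is also why an `rmdir` of the stale mountpoint lets the next mount succeed: an empty (or absent) directory no longer trips the check.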
OK, the next zfs-linux update worked on the system where I did the
rmdir. On the system I re-installed, the error reoccurred, so I was
wrong. But I still have no idea why rmdir did the job.
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
I used the following commands on both systems:
sudo zpool export hp-data
sudo zfs list # the other command produces a lot of snap mounts
sudo umount -l /vms/vms # the only other mounted dataset
sudo rmdir /hp-data
sudo rmdir /vms/vms
Update and upgrade the system
sudo zpool import hp-data
sudo zf
That has the same error, so you are using the same two pools. Please
follow the instructions I’ve given and fix this once so you are in a
fully working state. Once things are working, then you can retry
whatever upgrade steps you think break it.
Sorry, you were right about the meaning of import/export and rmdir; my
excuse is that it was very early in the morning.
However, I have the same update problem on an ext4 installation of
Ubuntu 19.10 on the same laptop. It gives the same error on two zfs
modules, but of course without the
The size of the pool is not particularly relevant. It sounds like you
think I'm asking you to back up and restore your pool, which I
definitely am not. A pool "import" is somewhat like "mounting" a pool
(though it's not literally mounting, because mounting is something that
happens with filesystems).
Sorry, but this is ridiculous. After the upgrade to 19.10, the system
worked fine for a week. I should export/import ~700 GB of data to get a
standard update working? Why should your advice help this time? If I'm
stupid enough to move 700 GB of data around, I will probably have the
same problem aga
As the error message indicates, /vms and /hp-data are not empty. ZFS, by
default, will not mount over non-empty directories.
There are many ways to fix this, but here's something that is probably
the safest:
Boot up in rescue mode. If it is imported, export the hp-data pool with
`zpool export hp-data`.
Note that the system also refuses to mount my hp-data datapool, a
datapool for all my data and only data, since the directory is not
empty.
Tried the change but it did not work. Tried it a second time after
update-grub; still not working.
I forced zfs re-installation by removing the zfs-dkms module, which was
not actually installed.
** Attachment added: "mountpoint-change"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+attachment/52952
You have two datasets with mountpoint=/ (and canmount=on), which is
going to cause problems like this.
vms/roots/mate-1804     mountpoint  /  local
vms/roots/mate-1804     canmount    on default
vms/roots/xubuntu-1804  mountpoint  /  local
vm
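Such collisions can be spotted mechanically by scanning the `zfs get mountpoint,canmount -t filesystem` output for a mountpoint claimed by more than one dataset. A sketch with sample input hard-coded from the properties in this thread (the awk filter is only an illustration, not part of any zfs tooling):

```shell
# Sample lines in the four-column format of
# `zfs get mountpoint,canmount -t filesystem`, taken from this report.
zfs_get_output='vms/roots/mate-1804 mountpoint / local
vms/roots/mate-1804 canmount on default
vms/roots/xubuntu-1804 mountpoint / local
hp-data mountpoint /hp-data local'

# Collect dataset names per mountpoint; report any mountpoint with >1 owner.
dupes=$(printf '%s\n' "$zfs_get_output" |
  awk '$2 == "mountpoint" { seen[$3] = seen[$3] " " $1 }
       END { for (m in seen) if (split(seen[m], a, " ") > 1)
               print "duplicate mountpoint " m ":" seen[m] }')
echo "$dupes"
# → duplicate mountpoint /: vms/roots/mate-1804 vms/roots/xubuntu-1804
```

Only one of the duplicates should keep canmount=on; the other roots are typically set to canmount=noauto so they never race for /.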
The output related to the mountpoints is in the attachment.
** Attachment added: "mountpoints"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+attachment/5295259/+files/mountpoints
Ignore the version numbers in the roots sub-datasets; I did not change
the dataset names, but the system version stored there is 19.10.
Can you provide the following details on your datasets' mountpoints?
zfs get mountpoint,canmount -t filesystem
I boot from ZFS, so that datapool is root-mounted. The same datapool
was used in the past exclusively for virtual machines; later I added
the datasets for the host OSes to the same datapool, because they were
at the beginning of my SSHD. So that datapool contains datasets with
VirtualBox VMs
The error "zfs[9317]: cannot mount '/': directory is not empty" seems
to suggest that this is a root-mounted ZFS. Is that so?
** Changed in: zfs-linux (Ubuntu)
Status: Incomplete => In Progress
Title:
19.10 ZFS Update failed on 2019-10-02
I think we have two bugs:
- one is that dmesg reports the wrong module versions
- the other is a failing zfs upgrade when we boot from ZFS. Why mount
the system again? It was already mounted during boot, and all other
non-zfs package updates completed without problems.
I have done a fresh install of zfsutils-linux on another system booted
from ext4, Ubuntu MATE 19.10. There too, the system is confused about
which module is installed. See the install log and the result of dmesg.
** Attachment added: "zfs-update-bug3"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bu
On the system that I boot from ext4, the system is confused about which
zfs module is loaded; see attachment.
** Attachment added: "zfs-update-bug2"
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+attachment/5293869/+files/zfs-update-bug2
I tried removing zfs-dkms, but in the VM with Ubuntu 19.10 it said the
module is not installed. In that system I boot from ext4.
On my laptop with Ubuntu MATE, I boot from ZFS, and there it produced
the following errors; see attachment.
** Attachment added: "zfs-update-bug"
https://bugs.launchpad.
When you have error messages about modules not being updated, this
makes me believe that perhaps you have zfs-dkms installed. This package
is not required if you are using the 19.10 5.2 or 5.3 kernel, as the
zfs modules are already provided with it. If you have the official
19.10 5.2 or 5.3
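A hedged way to check for the redundant package (assuming a Debian/Ubuntu system; the purge command is only printed here, never run):

```shell
# Is zfs-dkms installed alongside the kernel-provided zfs module?
# On the stock Ubuntu 19.10 5.2/5.3 kernels the dkms build is redundant.
if dpkg-query -W -f '${Status}' zfs-dkms 2>/dev/null | grep -q 'ok installed'
then
  echo "zfs-dkms is installed; consider: sudo apt purge zfs-dkms"
  dkms_state="installed"
else
  echo "zfs-dkms is not installed; the in-kernel module is in use"
  dkms_state="absent"
fi
```

Version mismatches between `dmesg` and the packaged tools are a typical symptom of a stale dkms-built module shadowing the kernel's own.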