Hi,
While upgrading two clusters tonight, it seems the Ceph cluster gets
confused by tonight's updates. I think it has something to do with the
firewall and connection tracking. Restarting ceph-mon on a node seems to fix it.
I *think* the issue is that when pve-firewall is upgraded,
Narrowed it down to:
umount /mnt/pve/cephfs
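If conntrack is dropping established monitor sessions after a firewall reload, one way to recover is to restart the monitor so clients reconnect. A minimal sketch, assuming the standard PVE ceph-mon@<hostname> unit name and the conntrack-tools package; port 6789 is the monitor's default v1 port:

```shell
# Restart the monitor on the affected node so clients re-establish
# their sessions (ceph-mon@<hostname> is the PVE default unit name):
systemctl restart ceph-mon@$(hostname -s).service

# Optionally inspect the conntrack table for stale entries toward the
# monitor port (requires the conntrack-tools package):
conntrack -L -p tcp 2>/dev/null | grep 6789
```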
So that would mean the 4.15.18-13-pve kernel's modules
ceph 368640 1
libceph 315392 1 ceph
use the Jewel interface(?)
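The feature set a kernel client actually advertises can be checked from the cluster side. A sketch, assuming a Luminous-or-later cluster where these subcommands exist; kernel clients frequently report an older release (e.g. "jewel") even on newer kernels, because the kernel implements its own feature bits:

```shell
# Show the feature set and release each connected client advertises:
ceph features

# Show the minimum client release the cluster currently requires
# (raising it would lock out older kernel clients, so check first):
ceph osd get-require-min-compat-client
```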
--
___
pve-user mailing list
>>Is there any way to copy a disk *exactly* from one pool to another, using
>>PVE live disk moving ?
It's currently not possible with Ceph/librbd (some pieces are missing in
QEMU's librbd driver).
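Until live moving works, an offline copy is possible with a plain rbd export/import pipe. A sketch with hypothetical pool and image names; the VM must be stopped so the image is consistent:

```shell
# Copy the RBD image bit-for-bit between pools (pool1/pool2 and the
# image name are examples; run with the VM powered off):
rbd export pool1/vm-100-disk-0 - | rbd import - pool2/vm-100-disk-0

# After pointing the VM config at the new storage and verifying it
# boots, the old image can be removed:
# rbd rm pool1/vm-100-disk-0
```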
>>Of course when I run "fstrim -va" on the guest, I can't reclaim space
>>because the kernel thinks
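Related to reclaiming space: fstrim only reaches the backing store if the virtual disk has discard enabled and sits on a bus that passes it through (e.g. virtio-scsi). A sketch with a hypothetical VMID, storage, and disk name:

```shell
# Enable discard on the guest disk (VMID 100, storage "rbdpool" and
# slot scsi0 are examples, not taken from the original messages):
qm set 100 --scsi0 rbdpool:vm-100-disk-0,discard=on

# After a guest reboot, trimming inside the guest should then report
# reclaimed space:
fstrim -va
```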