[PVE-User] Ceph and firewalling

2019-05-07 Thread Mark Schouten
Hi, while upgrading two clusters tonight, it seems the Ceph cluster gets confused by tonight's updates. I think it has something to do with the firewall and connection tracking. Restarting ceph-mon on a node seems to work around it. I *think* the issue is that when pve-firewall is upgraded,
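(The per-node restart mentioned above can be done with systemd; a minimal sketch, assuming the usual Proxmox setup where the mon ID matches the node's short hostname:

  # Restart the monitor on this node (unit name assumed: ceph-mon@<mon-id>)
  systemctl restart ceph-mon@$(hostname -s)

  # Then confirm the monitor has rejoined quorum
  ceph -s
)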

Re: [PVE-User] Proxmox 5.2, CEPH 12.2.12: still CephFS looks like jewel

2019-05-07 Thread Igor Podlesny
Narrowed it down to: umount /mnt/pve/cephfs. So does that mean the 4.15.18-13-pve kernel's modules (lsmod: ceph 368640 1; libceph 315392 1 ceph) use the Jewel interface(?)
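(One way to check which interface the kernel client actually speaks is to ask the cluster itself; a short sketch, assuming a Luminous (12.2.x) or newer cluster where "ceph features" is available:

  # Confirm the kernel client modules are loaded
  lsmod | grep ceph

  # List connected clients grouped by the release/feature set they report;
  # kernel clients commonly show up as "jewel" here even on newer clusters
  ceph features
)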

Re: [PVE-User] RBD move disk : what about "sparse" blocks ?

2019-05-07 Thread Alexandre DERUMIER
>>Is there any way to copy a disk *exactly* from one pool to another, using >>PVE live disk moving? It's currently not possible with ceph/librbd (some pieces are missing in the qemu librbd driver). >>Of course, when I run "fstrim -va" in the guest, I can't reclaim space >>because the kernel thinks
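(As an offline workaround, a copy done with qemu-img generally skips zeroed regions, so the destination image stays thin; a rough sketch with hypothetical pool and image names, to be run while the VM is stopped:

  # qemu-img detects zero regions and avoids writing them to the target RBD image
  qemu-img convert -p -f raw -O raw \
      rbd:pool-src/vm-100-disk-0 rbd:pool-dst/vm-100-disk-0

  # Compare provisioned vs. actually used space on source and destination
  rbd du pool-src/vm-100-disk-0
  rbd du pool-dst/vm-100-disk-0
)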