[ceph-users] When all Mons are down, does an existing RBD volume continue to work

2018-03-04 Thread Mayank Kumar
Ceph Users, My question is: if all mons are down (I know it's a terrible situation to be in), does an existing RBD volume which is mapped to a host and being used (read/written to) continue to work? I understand that it won't get notifications about the osdmap, etc., but assuming nothing fails, does the
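A minimal sketch of how one could separate the two things the question is about on a client, assuming the standard ceph CLI is installed and that /dev/rbd0 is a hypothetical already-mapped device: mon reachability and plain I/O to the mapped device are probed independently.

    import json
    import subprocess

    def mon_quorum_reachable(timeout=5):
        # Ask the mons for quorum status with a short connect timeout; this is
        # the part that fails when every mon is down.
        try:
            out = subprocess.run(
                ["ceph", "quorum_status", "--connect-timeout", str(timeout), "--format", "json"],
                capture_output=True, timeout=timeout + 5, check=True)
            return len(json.loads(out.stdout)["quorum"]) > 0
        except (subprocess.SubprocessError, KeyError, ValueError):
            return False

    def mapped_rbd_readable(dev="/dev/rbd0", length=4096):
        # Read a few KiB straight from the block device; this I/O goes to the
        # OSDs, not to the mons, so it can keep succeeding while quorum is gone.
        try:
            with open(dev, "rb") as f:
                return len(f.read(length)) == length
        except OSError:
            return False

    if __name__ == "__main__":
        print("mon quorum reachable:", mon_quorum_reachable())
        print("mapped device readable:", mapped_rbd_readable())

This only separates the two failure modes; it says nothing about what happens once the osdmap actually changes and the client has no mon to learn the new layout from.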

Re: [ceph-users] Corrupted files on CephFS since Luminous upgrade

2018-03-04 Thread Yan, Zheng
On Tue, Feb 27, 2018 at 2:29 PM, Jan Pekař - Imatic wrote: > I think I hit the same issue. > I have corrupted data on CephFS and I don't remember the same issue before > Luminous (I did the same tests before). > > It is on my test 1-node cluster with lower memory than
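A minimal write-then-verify probe for this kind of corruption test, assuming a CephFS mount at /mnt/cephfs (the path and file size are assumptions, not taken from the thread):

    import hashlib
    import os

    MOUNT = "/mnt/cephfs"                      # assumed CephFS mount point
    PATH = os.path.join(MOUNT, "probe.bin")
    SIZE = 64 * 1024 * 1024                    # 64 MiB test file

    def sha256_of(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    # Write random data and fsync it so it actually reaches the cluster.
    data = os.urandom(SIZE)
    expected = hashlib.sha256(data).hexdigest()
    with open(PATH, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

    # Re-read and compare. For a meaningful check, re-read after dropping the
    # page cache or from a different client, so the data comes back from CephFS
    # rather than from local memory.
    print("OK" if sha256_of(PATH) == expected else "MISMATCH: data differs from what was written")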

[ceph-users] Ceph usage per crush root

2018-03-04 Thread Richard Arends
Hi all, Today I wrote some code to get the usage for a Ceph cluster per crush root. I could not find a way to do it the way I wanted, so I wrote it myself. ./ceph_usage.py Crush root    OSDs   GB   GB used    GB available  Average utilization
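The script itself isn't included in the message; a rough sketch of the same idea, assuming the JSON layout of "ceph osd df tree" (a flat nodes list where roots reference their children by id and OSD entries carry kb, kb_used and kb_avail), could look like this:

    #!/usr/bin/env python3
    # Rough per-crush-root usage summary built on "ceph osd df tree" JSON output.
    import json
    import subprocess
    from collections import defaultdict

    def osd_df_tree():
        out = subprocess.check_output(["ceph", "osd", "df", "tree", "--format", "json"])
        return json.loads(out)

    def usage_per_root(tree):
        nodes = {n["id"]: n for n in tree["nodes"]}
        totals = defaultdict(lambda: {"osds": 0, "kb": 0, "kb_used": 0, "kb_avail": 0})

        def walk(node_id, root_name):
            node = nodes[node_id]
            if node["type"] == "osd":
                t = totals[root_name]
                t["osds"] += 1
                t["kb"] += node["kb"]
                t["kb_used"] += node["kb_used"]
                t["kb_avail"] += node["kb_avail"]
            for child in node.get("children", []):
                walk(child, root_name)

        # Start a walk from every root bucket in the crush tree.
        for node in tree["nodes"]:
            if node["type"] == "root":
                walk(node["id"], node["name"])
        return totals

    if __name__ == "__main__":
        print(f"{'Crush root':<16}{'OSDs':>6}{'GB':>10}{'GB used':>10}{'GB avail':>10}{'Avg util':>10}")
        for root, t in sorted(usage_per_root(osd_df_tree()).items()):
            gb = t["kb"] / 1024 / 1024
            used = t["kb_used"] / 1024 / 1024
            avail = t["kb_avail"] / 1024 / 1024
            util = 100.0 * t["kb_used"] / t["kb"] if t["kb"] else 0.0
            print(f"{root:<16}{t['osds']:>6}{gb:>10.0f}{used:>10.0f}{avail:>10.0f}{util:>9.1f}%")

The same numbers could also be assembled from "ceph osd crush tree" plus "ceph osd df", but osd df tree already carries both the hierarchy and the utilization in one call.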

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-04 Thread Adrian Saul
We are using Ceph+RBD+NFS under Pacemaker for VMware. We are doing iSCSI using SCST but have not used it against VMware, just Solaris and Hyper-V. It generally works and performs well enough – the biggest issues are the clustering for iSCSI ALUA support and NFS failover, most of which we have