Hello,

I'm still very new to Ceph. I've created a small test cluster:
 
ceph-node1
  osd0
  osd1
  osd2
ceph-node2
  osd3
  osd4
  osd5
ceph-node3
  osd6
  osd7
  osd8
 
My pool for CephFS has a replication count of 3. I powered off two nodes 
(6 OSDs went down), the cluster status became critical, and my CephFS 
clients ran into a timeout. My data (I had only one file in the pool) was 
still on one of the active OSDs. Is it expected behaviour that the cluster 
status becomes critical and the clients time out?
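For anyone reasoning about this: a replicated Ceph pool has both a `size` 
(replica count) and a `min_size` (the minimum number of replicas that must 
be up for a placement group to accept I/O; the default for a size-3 pool is 
2). Losing 2 of 3 replicas leaves only 1, which is below `min_size`, so the 
PGs stop serving I/O and clients block even though a full copy of the data 
still exists. A minimal sketch of that rule (hypothetical helper, not Ceph 
code):

```python
# Sketch (not Ceph code): why a size=3 pool blocks I/O when 2 of 3
# replicas go down. Ceph refuses I/O to a placement group once the
# number of up replicas drops below the pool's min_size, even if one
# intact copy of the data remains.

def pg_accepts_io(size: int, min_size: int, replicas_up: int) -> bool:
    """Hypothetical helper modelling the min_size rule."""
    return min_size <= replicas_up <= size

# size=3 pool with the default min_size=2:
print(pg_accepts_io(3, 2, 3))  # all nodes up   -> True
print(pg_accepts_io(3, 2, 2))  # one node down  -> True
print(pg_accepts_io(3, 2, 1))  # two nodes down -> False (clients block)

# Lowering min_size to 1 would let I/O continue on the last copy,
# at the cost of redundancy during recovery:
print(pg_accepts_io(3, 1, 1))  # -> True
```

Note also that if the monitors run on those same three nodes, powering off 
two of them loses monitor quorum, which by itself makes the cluster 
unavailable regardless of where the data sits.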
 
Many thanks for your feedback.
 
Regards - Willi
 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com