Re: [ceph-users] ceph journal failed??

2015-12-22 Thread yuyang
On 22/12/2015 09:42, yuyang wrote: Hello, everyone, [snip snap] Hi. If the SSD fails or goes down, can the OSD still work? Is the OSD down, or can it only be read? If you don't have a journal anymore, the OSD has already quit …
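For FileStore OSDs this means the daemon stops as soon as its journal device disappears. A minimal sketch of bringing an OSD back with a fresh journal on a replacement SSD, assuming OSD id 3 and a new partition /dev/sdj1 (both hypothetical); note that any writes not yet flushed from the old journal are lost, so rebuilding the OSD from its peers is often the safer choice:

  # stop the OSD if it is not already down (it usually is once the journal dies)
  systemctl stop ceph-osd@3
  # point the OSD's journal symlink at the replacement partition (hypothetical path)
  ln -sf /dev/sdj1 /var/lib/ceph/osd/ceph-3/journal
  # initialize a fresh, empty journal on the new device
  ceph-osd -i 3 --mkjournal
  systemctl start ceph-osd@3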

[ceph-users] ceph journal failed??

2015-12-22 Thread yuyang
Hello, everyone, I have a ceph cluster with several nodes; every node has 1 SSD and 9 SATA disks. Every SATA disk is used as an OSD, and to improve IO performance the SSD is used as the journal disk. That is, there are 9 journal files on every SSD. If the SSD fails or goes down, can the OSD still work? Is the OSD down, or can it only be read?
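For reference, a layout like this is usually expressed by partitioning the SSD into nine journal partitions and pointing each OSD at one of them in ceph.conf; a minimal sketch, with the partition labels and journal size as assumptions:

  [osd]
  osd journal size = 10240                          ; MB, shared default for all OSDs
  [osd.0]
  osd journal = /dev/disk/by-partlabel/journal-0    ; partition 1 of this node's SSD
  [osd.1]
  osd journal = /dev/disk/by-partlabel/journal-1    ; and so on for the other 8 OSDs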

[ceph-users] How to get the chroot path in MDS?

2016-01-21 Thread yuyang
Hello, everyone. In our cluster, we use CephFS with two MDSes, and there are several ceph-fuse clients. Every client mounts its own directory so that they cannot see each other. We use the following command to mount: ceph-fuse -m 10.0.9.75:6789 -r /clientA /mnt/cephFS And we want to monitor our clients and get …
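One way to see which root each client mounted is to query the MDS admin socket on the node running the daemon; a sketch, assuming the MDS daemon is named mds.a (hypothetical) and a release whose session listing includes client metadata:

  # list all client sessions known to this MDS
  ceph daemon mds.a session ls
  # each session's client_metadata should include the mount root, e.g. "root": "/clientA"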