On 05/30/2014 23:53, Craig Lewis wrote:
On 5/29/14 01:09 , Felix Lee wrote:
Dear experts,
Recently, a disk for one of our OSDs failed and took the OSD down.
After I recovered the disk and filesystem, I noticed two problems:
1. journal corruption, which causes osd failure from
we used to use, but it's unnecessary for Ceph.
Indeed, we still need to adapt ourselves to being real Ceph users. :)
Best regards,
Felix Lee ~
On 06/02/2014 13:04, Wido den Hollander wrote:
On 06/02/2014 12:41 PM, Felix Lee wrote:
Hi, Craig,
Many thanks for your reply.
The disk
.3353793a09e6.00193773__head_014DC44B__d
(stat: Input/output error)
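As an aside (not part of the original thread): when an object file on a recovered OSD filesystem returns "Input/output error" like the one above, one way to take stock of the damage is to walk the OSD data directory and record every path that fails with EIO. The function below is only an illustrative sketch; the name find_unreadable_objects and the approach are my own, not a Ceph tool.

```python
import errno
import os

def find_unreadable_objects(osd_data_dir):
    """Walk an OSD data directory and collect paths that fail
    with EIO, i.e. object files sitting on bad sectors."""
    bad = []
    for root, _dirs, files in os.walk(osd_data_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                os.stat(path)
                # stat() alone can miss bad data blocks, so also
                # read the file through in 1 MiB chunks.
                with open(path, "rb") as f:
                    while f.read(1 << 20):
                        pass
            except OSError as e:
                if e.errno == errno.EIO:
                    bad.append(path)
    return bad
```

Any paths it reports could then be cross-checked against the cluster's replicas before deciding what to repair.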
In any case, I would be very grateful if you experts could shed some
light on this.
Our current Ceph version is ceph-0.72.2-0.el6.x86_64.
The filesystem backend is XFS on fibre direct-attached storage.
Thanks in advance
Best regards,
Felix Lee