Can you share the output of some commands showing the state of your cluster?
Most notable is `ceph status`, but `ceph osd tree` would be helpful too. What
are the sizes of the pools in your cluster?  Are they all size=3, min_size=2?
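
For example, something like the following (run from a node with an admin
keyring; `ceph osd dump | grep pool` is just one way to see per-pool
size/min_size settings):

ceph status                  # overall health, mon quorum, PG states
ceph osd tree                # OSD up/down state and host layout
ceph osd dump | grep pool    # per-pool replication (size / min_size)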

On Fri, May 11, 2018 at 12:05 PM Daniel Davidson <[email protected]>
wrote:

> Hello,
>
> Today we had a node crash, and looking at it, it seems there is a
> problem with the RAID controller, so it is not coming back up, maybe
> ever.  It corrupted the local filesystem for the ceph storage there.
>
> The remainder of our storage (10.2.10) cluster is running, and it looks
> to be repairing, and our min_size is set to 2.  Normally, I would expect
> the system to keep running from an end-user perspective when this
> happens, but the system is down. All mounts that were up when this
> started look to be stale, and new mounts give the following error:
>
> # mount -t ceph ceph-0:/ /test/ -o
> name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev,rbytes
> mount error 5 = Input/output error
>
> Any suggestions?
>
> Dan
>