I had two MDS servers (one active, one standby) and both were down. I took a
dumb chance and marked the active one as failed (it was reported as up but
laggy), then started the primary again, and now both are back up. I have
never seen this before, and I'm not sure exactly what I just did.
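
For the archives, this is roughly what I did (assuming systemd-managed
daemons; "mds-a" is a placeholder for the real daemon name, and I'm
reconstructing from memory):

# tell the monitors to treat the laggy active MDS as failed
ceph mds fail mds-a
# restart the daemon on its host
systemctl restart ceph-mds@mds-a
# verify that one MDS is active and the other is back in standby
ceph mds stat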

On Mon, Apr 30, 2018 at 4:32 PM, Sean Sullivan <[email protected]> wrote:

> I was creating a new user and mount point. On another hardware node, I
> mounted CephFS with the admin key so I could work as root. I created
> /aufstest and then unmounted it. From there it seems that both of my MDS
> daemons crashed for some reason, and I can't start them anymore.
>
> https://pastebin.com/1ZgkL9fa -- my mds log
>
> I have never had this happen in my tests, and now I have live data on the
> line. If anyone can lend a hand or point me in the right direction while
> troubleshooting, that would be a godsend!
>
> I tried cephfs-journal-tool's "journal inspect" command, and it reports
> that the journal is fine, so I am not sure why the MDS is crashing:
>
> /home/lacadmin# cephfs-journal-tool journal inspect
> Overall journal integrity: OK
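>
> For reference, my understanding is that the related checks below are
> read-only and don't modify the journal (backup.bin is just an example
> filename I picked):
>
> # save a copy of the journal before attempting any repair
> cephfs-journal-tool journal export backup.bin
> # summarize the events held in the journal
> cephfs-journal-tool event get summary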