I tried it myself: using the commands below, the cluster is back to HEALTH_OK
now. But I do not know why these commands worked; my understanding is that you
fail the MDS rank first, then remove the failed rank.

root@ceph01-vm:~# ceph mds fail 0
failed mds.0
root@ceph01-vm:~# ceph -s
    cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
     health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
     monmap e2: 3 mons at {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
     mdsmap e68: 0/1/1 up, 1 failed
     osdmap e588: 8 osds: 8 up, 8 in
      pgmap v285967: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
            15173 MB used, 2768 GB / 2790 GB avail
                2392 active+clean
root@ceph01-vm:~# ceph mds rm 0 mds.ceph06-vm
mds gid 0 dne
root@ceph01-vm:~# ceph -s
    cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
     health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
     monmap e2: 3 mons at {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
     mdsmap e69: 0/1/1 up, 1 failed
     osdmap e588: 8 osds: 8 up, 8 in
      pgmap v285970: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
            15173 MB used, 2768 GB / 2790 GB avail
                2392 active+clean
root@ceph01-vm:~# ceph mds newfs 1 0 --yes-i-really-mean-it
filesystem 'cephfs' already exists
root@ceph01-vm:~# ceph -s
    cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
     health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
     monmap e2: 3 mons at {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
     mdsmap e70: 0/1/1 up, 1 failed
     osdmap e588: 8 osds: 8 up, 8 in
      pgmap v285973: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
            15173 MB used, 2768 GB / 2790 GB avail
                2392 active+clean
root@ceph01-vm:~# ceph mds rmfailed 0
root@ceph01-vm:~# ceph -s
    cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
     health HEALTH_OK
     monmap e2: 3 mons at {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
     mdsmap e71: 0/1/1 up
     osdmap e588: 8 osds: 8 up, 8 in
      pgmap v286028: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
            15174 MB used, 2768 GB / 2790 GB avail
                2392 active+clean
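In short, the sequence that cleared the degraded state for me was the
following (on Ceph 0.87/giant; these commands must run against a live cluster,
so this is only an illustrative sketch of the transcript above, and newer
releases may guard `ceph mds rmfailed` behind extra confirmation):

```
# 1. Mark rank 0 as failed so the monitors stop waiting for the laggy daemon.
ceph mds fail 0

# 2. Remove the failed rank from the mdsmap -- this is the step that
#    actually cleared HEALTH_ERR in my case.
ceph mds rmfailed 0

# 3. Verify cluster health afterwards.
ceph -s
ceph health detail
```

The `ceph mds rm 0 mds.ceph06-vm` and `ceph mds newfs` attempts in the
transcript above had no effect here ("mds gid 0 dne" / "filesystem 'cephfs'
already exists"); only failing the rank and then removing the failed rank
changed the mdsmap.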

2015-01-05 15:03 GMT+07:00 debian Only <[email protected]>:

> I use 0.87; inside ceph.conf there is no mds.0-related config.
>
> I did:
> root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
> mds gid 0 dne
>
> 2015-01-05 11:15 GMT+07:00 Lindsay Mathieson <[email protected]>
> :
>
>> Did you remove the mds.0 entry from ceph.conf?
>>
>> On 5 January 2015 at 14:13, debian Only <[email protected]> wrote:
>>
>>> I have tried 'ceph mds newfs 1 0 --yes-i-really-mean-it' but it did not
>>> fix the problem.
>>>
>>> 2014-12-30 17:42 GMT+07:00 Lindsay Mathieson <
>>> [email protected]>:
>>>
>>>> On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:
>>>> > ceph 0.87, Debian 7.5, anyone can help?
>>>> >
>>>> > 2014-12-29 20:03 GMT+07:00 debian Only <[email protected]>:
>>>> > I want to move the MDS from one host to another.
>>>> >
>>>> > How do I do it?
>>>> >
>>>> > What I did is below, but ceph health is not OK and the MDS was not
>>>> > removed:
>>>> >
>>>> > root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
>>>> > mds gid 0 dne
>>>> >
>>>> > root@ceph06-vm:~# ceph health detail
>>>> > HEALTH_WARN mds ceph06-vm is laggy
>>>> > mds.ceph06-vm at 192.168.123.248:6800/4350 is laggy/unresponsive
>>>>
>>>> I removed an mds using this guide:
>>>>
>>>> http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/
>>>>
>>>> and ran into your problem, which is also mentioned there.
>>>>
>>>> I resolved it using the guide's suggestion:
>>>>
>>>> $ ceph mds newfs metadata data --yes-i-really-mean-it
>>>>
>>>> --
>>>> Lindsay
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected]
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>>
>>>
>>
>>
>> --
>> Lindsay
>>
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
