On Mon, Sep 3, 2018 at 1:57 AM Marlin Cremers
<[email protected]> wrote:
>
> Hey there,
>
> So I now have a problem since none of my MDSes can start anymore.
>
> They are stuck in the resolve state since Ceph thinks there are still MDSes
> alive, which I can see when I run:
>
We need the MDS log to check why the MDSes are stuck in the resolve state.
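Something along these lines is usually enough to capture a useful log (the daemon name and paths below are placeholders; adjust them for your deployment):

# raise MDS debug verbosity in ceph.conf on the MDS host, then restart the daemon
[mds]
    debug mds = 20
    debug ms = 1

# or, while a daemon is still running, via its admin socket:
ceph daemon mds.<name> config set debug_mds 20

# by default the log is written to:
/var/log/ceph/ceph-mds.<name>.log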
> ceph mds deactivate k8s:0
> Error EEXIST: mds.4:0 not active (???)
> ceph mds deactivate k8s:1
> Error EEXIST: mds.4:1 not active (???)
>
> How can I remove the MDSes from Ceph's memory as I currently have no running
> MDSes.
>
> When I run ceph mds stat -f json I get the following output:
> {
> "fsmap":{
> "epoch":3901,
> "compat":{
> "compat":{
>
> },
> "ro_compat":{
>
> },
> "incompat":{
> "feature_1":"base v0.20",
> "feature_2":"client writeable ranges",
> "feature_3":"default file layouts on dirs",
> "feature_4":"dir inode in separate object",
> "feature_5":"mds uses versioned encoding",
> "feature_6":"dirfrag is stored in omap",
> "feature_8":"no anchor table",
> "feature_9":"file layout v2"
> }
> },
> "feature_flags":{
> "enable_multiple":true,
> "ever_enabled_multiple":false
> },
> "standbys":[
>
> ],
> "filesystems":[
> {
> "mdsmap":{
> "epoch":3896,
> "flags":12,
> "ever_allowed_features":0,
> "explicitly_allowed_features":0,
> "created":"2018-04-21 16:55:37.625468",
> "modified":"2018-09-02 19:36:00.788965",
> "tableserver":0,
> "root":0,
> "session_timeout":60,
> "session_autoclose":300,
> "max_file_size":1099511627776,
> "last_failure":0,
> "last_failure_osd_epoch":17409,
> "compat":{
> "compat":{
>
> },
> "ro_compat":{
>
> },
> "incompat":{
> "feature_1":"base v0.20",
> "feature_2":"client writeable ranges",
> "feature_3":"default file layouts on dirs",
> "feature_4":"dir inode in separate object",
> "feature_5":"mds uses versioned encoding",
> "feature_6":"dirfrag is stored in omap",
> "feature_8":"no anchor table",
> "feature_9":"file layout v2"
> }
> },
> "max_mds":1,
> "in":[
> 0,
> 1
> ],
> "up":{
>
> },
> "failed":[
>
> ],
> "damaged":[
>
> ],
> "stopped":[
> 2,
> 3
> ],
> "info":{
>
> },
> "data_pools":[
> 16
> ],
> "metadata_pool":17,
> "enabled":true,
> "fs_name":"k8s",
> "balancer":"",
> "standby_count_wanted":1
> },
> "id":4
> },
> {
> "mdsmap":{
> "epoch":3901,
> "flags":12,
> "ever_allowed_features":0,
> "explicitly_allowed_features":0,
> "created":"2018-04-29 15:53:35.342750",
> "modified":"2018-09-02 19:37:33.823379",
> "tableserver":0,
> "root":0,
> "session_timeout":60,
> "session_autoclose":300,
> "max_file_size":1099511627776,
> "last_failure":0,
> "last_failure_osd_epoch":17341,
> "compat":{
> "compat":{
>
> },
> "ro_compat":{
>
> },
> "incompat":{
> "feature_1":"base v0.20",
> "feature_2":"client writeable ranges",
> "feature_3":"default file layouts on dirs",
> "feature_4":"dir inode in separate object",
> "feature_5":"mds uses versioned encoding",
> "feature_6":"dirfrag is stored in omap",
> "feature_8":"no anchor table",
> "feature_9":"file layout v2"
> }
> },
> "max_mds":1,
> "in":[
> 0
> ],
> "up":{
> "mds_0":201494080
> },
> "failed":[
>
> ],
> "damaged":[
>
> ],
> "stopped":[
>
> ],
> "info":{
> "gid_201494080":{
> "gid":201494080,
> "name":"node01",
> "rank":0,
> "incarnation":3898,
> "state":"up:active",
> "state_seq":5,
> "addr":"10.14.4.241:6800/1458476866",
> "standby_for_rank":-1,
> "standby_for_fscid":-1,
> "standby_for_name":"",
> "standby_replay":false,
> "export_targets":[
>
> ],
> "features":4611087853745930235
> }
> },
> "data_pools":[
> 19
> ],
> "metadata_pool":18,
> "enabled":true,
> "fs_name":"windoos_click",
> "balancer":"",
> "standby_count_wanted":1
> },
> "id":5
> }
> ]
> },
> "mdsmap_first_committed":3259,
> "mdsmap_last_committed":3901
> }
>
> Which seems to suggest that Ceph thinks the k8s filesystem still has two in
> MDS ranks (0 and 1), neither of which is up, and two ranks that are stopped (2 and 3).
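The relevant per-filesystem fields can be pulled out of that output with something like the following (assuming jq is installed; the field names follow the JSON above):

ceph mds stat -f json | jq '.fsmap.filesystems[].mdsmap | {fs_name: .fs_name, in_ranks: ."in", up: .up, stopped: .stopped}'

For the k8s filesystem that shows ranks 0 and 1 in with an empty up map, and ranks 2 and 3 stopped.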
>
> I hope someone who knows the internals of Ceph can help me, as this looks like
> something I'm not able to fix on my own.
>
> Kind regards,
> Marlinc
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com