After scaling down the number of MDS daemons, we now have a daemon stuck in the
"up:stopping" state. The documentation says stopping a daemon can take several
minutes, but this one has been stuck in that state for almost a full day.
According to the "ceph fs status" output attached below, it still holds 2
inodes, which we assume is why it cannot stop completely.
Does anyone know what we can do to finally stop it?
cephfs - 71 clients
======
RANK  STATE     MDS          ACTIVITY     DNS    INOS
 0    active    ceph-mon-01  Reqs: 0 /s   15.7M  15.4M
 1    active    ceph-mon-02  Reqs: 48 /s  19.7M  17.1M
 2    stopping  ceph-mon-03                   0      2
      POOL          TYPE     USED   AVAIL
cephfs_metadata   metadata   652G    185T
  cephfs_data       data    1637T    539T
   STANDBY MDS
ceph-mon-03-mds-2
MDS version: ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)
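
In case it helps with diagnosis, this is roughly what we can pull from the
stopping daemon via its admin socket on ceph-mon-03. These should be the
standard MDS admin-socket commands; the dump path below is just an example,
and the commands assume direct access to the admin socket on that node:

# Run on the node hosting the stopping daemon (ceph-mon-03 in our case).

# Current state of the daemon
ceph daemon mds.ceph-mon-03 status

# Operations the daemon is still waiting on, if any
ceph daemon mds.ceph-mon-03 dump_ops_in_flight

# Client sessions still attached to this daemon
ceph daemon mds.ceph-mon-03 session ls

# Dump the remaining cache contents to a file to see which inodes it still
# holds (/tmp/mds-cache.txt is just an example path)
ceph daemon mds.ceph-mon-03 dump cache /tmp/mds-cache.txt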