Hi all!
The MDS of our Ceph cluster reports a HEALTH_ERR state.
It is Nautilus 14.2.2 on Debian Buster, installed from the repository
provided by croit.io, with OSDs on BlueStore.
The symptom:
# ceph -s
  cluster:
    health: HEALTH_ERR
            1 MDSs report damaged metadata

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3 (age 2d)
    mgr: mon3(active, since 2d), standbys: mon2, mon1
    mds: cephfs_1:1 {0=mds3=up:active} 2 up:standby
    osd: 30 osds: 30 up (since 17h), 29 in (since 19h)

  data:
    pools:   3 pools, 1153 pgs
    objects: 435.21k objects, 806 GiB
    usage:   4.7 TiB used, 162 TiB / 167 TiB avail
    pgs:     1153 active+clean
# ceph health detail
HEALTH_ERR 1 MDSs report damaged metadata
MDS_DAMAGE 1 MDSs report damaged metadata
    mdsmds3(mds.0): Metadata damage detected
# ceph tell mds.0 damage ls
2019-08-16 07:20:09.415 7f1254ff9700  0 client.840758 ms_handle_reset on v2:192.168.16.23:6800/176704036
2019-08-16 07:20:09.431 7f1255ffb700  0 client.840764 ms_handle_reset on v2:192.168.16.23:6800/176704036
[
    {
        "damage_type": "backtrace",
        "id": 3760765989,
        "ino": 1099518115802,
        "path": "~mds0/stray7/100005161f7/dovecot.index.backup"
    }
]
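If I read the Nautilus docs right, "backtrace" damage means the backtrace
stored with the file's data objects no longer matches the metadata, and it
should be repairable with an online scrub via the new tell interface. This
is what I have in mind, though I am unsure whether a path inside a stray
directory can be scrubbed directly (perhaps it has to be ~mdsdir instead),
so please treat it as a sketch: the first command asks rank 0 for a
recursive repair scrub of the damaged path, and the second would then drop
the entry (id 3760765989 from damage ls above) from the damage table.

# ceph tell mds.0 scrub start '~mds0/stray7/100005161f7/dovecot.index.backup' recursive,repair
# ceph tell mds.0 damage rm 3760765989

I haven't dared to run that yet.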
What I did try, without much luck, is this:
# ceph daemon mds.0 "~mds0/stray7/100005161f7/dovecot.index.backup" recursive repair
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
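In hindsight I suppose two things are wrong with that call: ceph daemon
talks to the local admin socket, so it has to be run on the host of the
active MDS and address the daemon by its name (mds3 here, not rank 0), and
I left out the scrub_path command itself. Presumably it should have looked
like this, run on the mds3 host (untested on my side):

# ceph daemon mds.mds3 scrub_path '~mds0/stray7/100005161f7/dovecot.index.backup' recursive repair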
Is there a way out of this error?
Thanks and best regards,
Lars