Hi,

I recently upgraded a test cluster from 10.2.9 (Jewel) to 12.2.0 (Luminous). Once that was done, I converted all OSDs from filestore to bluestore.
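The conversion followed the usual per-OSD destroy-and-recreate cycle, roughly like this (a sketch only; $ID and /dev/sdX are placeholders and the exact steps may differ per deployment):

ceph osd out $ID
# wait until the cluster has rebalanced off this OSD
systemctl stop ceph-osd@$ID
ceph osd purge $ID --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX
ceph-volume lvm create --bluestore --data /dev/sdX

Today, ceph reported scrub errors in the cephfs metadata pool: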
ceph health detail
HEALTH_ERR 6 scrub errors; Possible data damage: 2 pgs inconsistent
OSD_SCRUB_ERRORS 6 scrub errors
PG_DAMAGED Possible data damage: 2 pgs inconsistent
    pg 3.a0 is active+clean+inconsistent, acting [23,219,82]
    pg 3.1e6 is active+clean+inconsistent, acting [165,229,147]
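The inconsistent PGs can also be listed per pool, which is handy when there are many (the pool name here is an assumption for our cephfs metadata pool):

rados list-inconsistent-pg cephfs_metadata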
rados list-inconsistent-obj 3.a0 --format=json-pretty
{
    "epoch": 123758,
    "inconsistents": []
}
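The empty list for pg 3.a0 presumably means the scrub results for that PG are stale or have been trimmed; as far as I understand, a fresh deep-scrub repopulates them:

ceph pg deep-scrub 3.a0
# re-check once the scrub has finished
rados list-inconsistent-obj 3.a0 --format=json-pretty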
rados list-inconsistent-obj 3.1e6 --format=json-pretty
{
    "epoch": 124133,
    "inconsistents": [
        {
            "object": {
                "name": "100003fd495.00000000",
                "nspace": "",
                "locator": "",
                "snap": "head",
                "version": 112719
            },
            "errors": [],
            "union_shard_errors": [
                "omap_digest_mismatch_oi"
            ],
            "selected_object_info": "3:67df2121:::100003fd495.00000000:head(79783'119744 osd.165.0:23304 dirty|omap|data_digest|omap_digest s 0 uv 112719 dd ffffffff od 8430fcf1 alloc_hint [0 0 0])",
            "shards": [
                {
                    "osd": 147,
                    "primary": false,
                    "errors": [
                        "omap_digest_mismatch_oi"
                    ],
                    "size": 0,
                    "omap_digest": "0x98702a83",
                    "data_digest": "0xffffffff"
                },
                {
                    "osd": 165,
                    "primary": true,
                    "errors": [
                        "omap_digest_mismatch_oi"
                    ],
                    "size": 0,
                    "omap_digest": "0x98702a83",
                    "data_digest": "0xffffffff"
                },
                {
                    "osd": 229,
                    "primary": false,
                    "errors": [
                        "omap_digest_mismatch_oi"
                    ],
                    "size": 0,
                    "omap_digest": "0x98702a83",
                    "data_digest": "0xffffffff"
                }
            ]
        }
    ]
}
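To make the mismatch easier to see, the relevant digests can be pulled out of that output (assuming jq is available):

rados list-inconsistent-obj 3.1e6 --format=json | jq '
  .inconsistents[] | {
    object: .object.name,
    object_info: .selected_object_info,
    shards: [.shards[] | {osd, omap_digest}]
  }'

All three shards agree on omap_digest 0x98702a83; only the digest recorded in the object info (od 8430fcf1) differs.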
So the replicas are consistent with each other, and to me there seems to be no real data damage, just a mismatching digest in the object info. Could anyone give me a hint on how to handle this? ceph pg repair did not help.
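For completeness, this is how I invoked the repair:

ceph pg repair 3.1e6
# watch the repair progress in the cluster log
ceph -w | grep 3.1e6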
Thanks,

Daniel

--
Daniel Schreiber
Facharbeitsgruppe Systemsoftware
Universitaetsrechenzentrum
Technische Universität Chemnitz

Straße der Nationen 62 (Raum B303)
09111 Chemnitz
Germany

Tel: +49 371 531 35444
Fax: +49 371 531 835444
