Thanks, interesting reading.
Distilling the discussion there, below are my takeaways. Am I
interpreting them correctly?
1) The spillover phenomenon, and thus the small number of discrete
block.db sizes that are effective without being wasteful, is recognized
2) "I don't think we should plan teh block.db
Btw, the original discussion leading to the 4% recommendation is here:
https://github.com/ceph/ceph/pull/23210
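For reference, the rough arithmetic behind the "few discrete sizes"
point, assuming the RocksDB defaults bluestore uses
(max_bytes_for_level_base = 256 MB, level multiplier 10): the level
sizes come out to roughly 256 MB, 2.56 GB, 25.6 GB, 256 GB, and a level
only avoids spillover if it fits on the DB device in full. So the only
block.db sizes that aren't partly wasted are roughly 3 GB, 30 GB and
300 GB (the sum of all levels up to a given point, plus some headroom
for compaction); anything in between spills over anyway.

To check whether an OSD has already spilled onto the slow device,
something like this should work via the admin socket (osd.0 is just a
placeholder):

# non-zero slow_used_bytes means the DB has spilled onto the main device
ceph daemon osd.0 perf dump bluefs | grep slow_used_bytes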
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Aug 15
Hi Paul,
thank you for your help. But I get the following error:
# ceph tell mds.mds3 scrub start "~mds0/stray7/15161f7/dovecot.index.backup" repair
2019-08-16 13:29:40.208 7f7e927fc700 0 client.881878 ms_handle_reset on v2:192.168.16.23:6800/176704036
2019-08-16 13:29:40.240 7f7e937fe700
Hi,
damage_type backtrace is rather harmless and can indeed be repaired,
but the command is called scrub_path. Also, you need to pass the name
of the MDS, not its rank, as the id; it should be
# (on the server where the MDS is actually running)
ceph daemon mds.mds3 scrub_path .
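To target the damaged file directly, something like this should work
(untested sketch; the path is copied from the scrub start attempt
above, and I'm assuming scrub_path accepts the same repair flag and
stray-path syntax):

# on the server where mds.mds3 is actually running
ceph daemon mds.mds3 scrub_path '~mds0/stray7/15161f7/dovecot.index.backup' repair

# then check whether the backtrace damage entry has cleared
ceph tell mds.mds3 damage ls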
On 15.08.2019 16:38, huxia...@horebdata.cn wrote:
Dear folks,
I had a Ceph cluster with replication 2, 3 nodes, each node with 3
OSDs, on Luminous 12.2.12. Some days ago I had one OSD go down (the
disk itself is still fine) due to a RocksDB crash. I tried to restart
that OSD but failed. So