Hi all,

since there seems to be some interest, here are some additional notes.

1) The script was tested on Octopus. The output of the ceph commands it uses 
seems to have changed between releases, so it might need some tweaking to work 
on other versions.
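
Before trying the script, you can check which release your cluster daemons are 
actually running, for example with:

  ceph versions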

2) If you want to give my findings a shot, you can do so in a gradual way. The 
most important change is setting osd_deep_scrub_randomize_ratio=0 (with 
osd_max_scrubs=1). This makes osd_deep_scrub_interval behave exactly like the 
requested osd_deep_scrub_min_interval setting: PGs with a deep-scrub stamp 
younger than osd_deep_scrub_interval will *not* be deep-scrubbed. This is the 
one change to test; all other settings have less impact. The script will not 
report some numbers at the end, but the histogram will be correct. Let it run 
for a few deep-scrub-interval rounds until the histogram has evened out.
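
As a minimal sketch, these two settings can be applied cluster-wide via the 
config database (the option names are the ones discussed above; adjust the 
target if you prefer per-daemon overrides):

  ceph config set osd osd_max_scrubs 1
  ceph config set osd osd_deep_scrub_randomize_ratio 0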

If you start your test after using osd_max_scrubs>1 for a while (as I did), 
you will need a lot of patience and might need to mute some scrub warnings for 
a while.
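
If the "pgs not deep-scrubbed in time" warning fires during this catch-up 
phase, it can be muted temporarily, for example for two weeks (the duration 
here is just an illustration, pick whatever suits your cluster):

  ceph health mute PG_NOT_DEEP_SCRUBBED 2w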

3) The changes are mostly relevant for large HDDs that take a long time to 
deep-scrub (for example, because they hold many small objects). The overall 
load reduction, however, is useful in general.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io