Hi Kamil,

We have a similar setup, and this is our config:

  osd   advanced   osd_max_scrubs                 1
  osd   advanced   osd_recovery_max_active        4
  osd   advanced   osd_recovery_max_single_start  1
  osd   advanced   osd_recovery_sleep             0.000000
  osd   advanced   osd_scrub_auto_repair          true
  osd   advanced   osd_scrub_begin_hour           18
  osd   advanced   osd_scrub_end_hour             6
  osd   advanced   osd_scrub_invalid_stats        true


Our scrubs start at 18:00 and finish at 06:00. That window is enough for us, and
during the first hours of each day the system is ready and takes no performance hit from scrubs.
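
If you want to check whether the window is wide enough, something like this gives a quick picture (just a sketch; it assumes a release recent enough to report the scrub-related health warnings, and osd.0 is only an example daemon ID):

  # PGs that missed their scrub/deep-scrub deadline show up in health detail
  ceph health detail | grep -i scrub
  # scrub-related settings actually in effect on one OSD
  ceph config show osd.0 | grep scrub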

This has been in place for about a year now, with no scrub issues.

We use ceph config set so these settings are stored centrally by the monitor quorum.
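
For reference, applying the settings above from the command line looks roughly like this (a sketch; adjust the values to your own cluster):

  ceph config set osd osd_max_scrubs 1
  ceph config set osd osd_recovery_max_active 4
  ceph config set osd osd_recovery_max_single_start 1
  ceph config set osd osd_recovery_sleep 0.000000
  ceph config set osd osd_scrub_auto_repair true
  ceph config set osd osd_scrub_begin_hour 18
  ceph config set osd osd_scrub_end_hour 6
  ceph config set osd osd_scrub_invalid_stats true
  # verify what ended up in the monitor config database
  ceph config dump | grep -E 'scrub|recovery'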

The cluster's main use is currently S3.

Regards
Manuel


-----Original Message-----
From: Kamil Szczygieł <ka...@szczygiel.io>
Sent: Monday, May 25, 2020 9:48
To: ceph-users@ceph.io
Subject: [ceph-users] Handling scrubbing/deep scrubbing

Hi,

I have a 4-node cluster with 13x 15TB 7.2k OSDs per node and around 300TB of data.
I'm having issues with scrubs/deep scrubs not being done in time; any tips on
handling these operations with disks this large?

osd pool default size = 2
osd deep scrub interval = 2592000
osd scrub begin hour = 23
osd scrub end hour = 5
osd scrub sleep = 0.1

Cheers,
Kamil
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io