Hi Fabio,

have a look here:
https://github.com/ceph/ceph/blob/luminous/src/common/options.cc#L2355

It’s designed to relieve the pressure that recovery and backfill put on both
the drives and the network: it slows these activities down by introducing a
sleep after each recovery or backfill op.
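
For example, to add a 100 ms pause between recovery/backfill ops at runtime
(the value is only illustrative; tune it for your cluster):

$ ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'

Note that since Luminous the sleep is split into osd_recovery_sleep_hdd and
osd_recovery_sleep_ssd, so spinners and SSDs get different defaults.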

Regards
JC

> On Feb 18, 2019, at 09:28, Fabio Abreu <[email protected]> wrote:
> 
> Hi Everybody !
> 
> I am configuring my cluster to receive new disks and PGs. After setting up 
> the main standard configuration, I looked at the parameter "osd recovery 
> sleep" to use in our production environment, but I only found sparse 
> documentation about it. 
> 
> Does anyone have experience with this parameter? 
> 
> The only discussion I found on the internet about it is this thread:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-March/025574.html 
> 
> My main configuration for adding new OSDs to a Jewel 10.2.7 cluster: 
> 
> Before including new nodes: 
> $ ceph tell osd.* injectargs '--osd-max-backfills 2'
> $ ceph tell osd.* injectargs '--osd-recovery-threads 1'
> $ ceph tell osd.* injectargs '--osd-recovery-op-priority 2'
> $ ceph tell osd.* injectargs '--osd-client-op-priority 63'
> $ ceph tell osd.* injectargs '--osd-recovery-max-active 2'
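> 
> To verify the injected values actually took effect on a given OSD (e.g. 
> osd.0), the running config can be checked on that OSD's host:
> 
> $ ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery'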
> 
> After including new nodes: 
> $ ceph tell osd.* injectargs '--osd-max-backfills 1'
> $ ceph tell osd.* injectargs '--osd-recovery-threads 1'
> $ ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
> $ ceph tell osd.* injectargs '--osd-client-op-priority 63'
> $ ceph tell osd.* injectargs '--osd-recovery-max-active 1'
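> 
> To keep these settings across OSD restarts, the same values can also go in 
> ceph.conf (a sketch, assuming the usual [osd] section):
> 
> [osd]
> osd_max_backfills = 1
> osd_recovery_op_priority = 1
> osd_client_op_priority = 63
> osd_recovery_max_active = 1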
> 
> 
> Regards, 
> 
> Fabio Abreu Reis
> http://fajlinux.com.br
> Tel : +55 21 98244-0161
> Skype : fabioabreureis

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
