We had to disable deep scrub or the cluster would be unusable - we will need to 
turn it back on sooner or later, though.
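In case it helps anyone, this is roughly what we did - nodeep-scrub is a 
cluster-wide flag, so it is easy to flip back later (a minimal sketch, the 
timing of when to unset it is up to you):

    # stop new deep scrubs cluster-wide
    ceph osd set nodeep-scrub
    # ...and re-enable them once things calm down
    ceph osd unset nodeep-scrub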
With minimal scrubbing and recovery settings, everything is mostly good. It 
turned out many of the issues we had were due to too few PGs - once we 
increased the count from 4K to 16K everything sped up nicely (the chunks are 
smaller), but during heavy activity we still get some “slow IOs”.
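For the record, the PG bump itself is just two pool settings - a minimal 
sketch, assuming a pool called "rbd" (substitute your own) and remembering 
that pgp_num has to follow pg_num before data actually rebalances:

    # grow the pool from 4096 to 16384 placement groups
    ceph osd pool set rbd pg_num 16384
    # then let the new PGs actually take part in placement
    ceph osd pool set rbd pgp_num 16384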
I believe there is an ionice knob in newer versions (we still run Dumpling), 
and that should do the trick no matter how much additional “load” is put on the 
OSDs.
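If I read the docs right, the knob is the disk-thread ioprio pair - a sketch 
of what I would try on a newer release (this assumes the CFQ scheduler on the 
OSD disks, otherwise the ionice class is ignored):

    # push scrub/recovery disk work into the idle ionice class on all OSDs
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'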
Everybody’s bottleneck will be different - we run all-flash, so disk IO is not 
a problem, but the OSD daemon itself is - and no ionice setting will help with 
that, it just needs to be faster ;-)

Jan


> On 30 May 2015, at 01:17, Gregory Farnum <g...@gregs42.com> wrote:
> 
> On Fri, May 29, 2015 at 2:47 PM, Samuel Just <sj...@redhat.com> wrote:
>> Many people have reported that they need to lower the osd recovery config 
>> options to minimize the impact of recovery on client io.  We are talking 
>> about changing the defaults as follows:
>> 
>> osd_max_backfills to 1 (from 10)
>> osd_recovery_max_active to 3 (from 15)
>> osd_recovery_op_priority to 1 (from 10)
>> osd_recovery_max_single_start to 1 (from 5)
> 
> I'm under the (possibly erroneous) impression that reducing the number
> of max backfills doesn't actually reduce recovery speed much (but will
> reduce memory use), but that dropping the op priority can. I'd rather
> we make users manually adjust values which can have a material impact
> on their data safety, even if most of them choose to do so.
> 
> After all, even under our worst behavior we're still doing a lot
> better than a resilvering RAID array. ;)
> -Greg
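
For anyone who wants to try Sam's proposed values before the defaults change, 
the ceph.conf side would look something like this (option names as listed 
above; a minimal sketch, not something we have run ourselves):

    [osd]
      osd max backfills = 1
      osd recovery max active = 3
      osd recovery op priority = 1
      osd recovery max single start = 1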

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
