It does better because WPQ is a fair-share queue and never lets recovery
ops take priority over client ops at any point. That gives clients much
more predictable latency to the storage.
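A minimal sketch of how this could be applied (an assumption on my part
rather than anything from this thread: the op queue options are only read
at OSD start, so they need an OSD restart to take effect, while the
recovery sleep can be injected at runtime):

  # ceph.conf on each OSD host, then restart the OSDs one host at a time
  [osd]
  osd op queue = wpq
  osd op queue cut off = high

  # by contrast, recovery sleep can be tuned on the fly, e.g.:
  ceph tell osd.* injectargs '--osd_recovery_sleep 0.5'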


On Sat, Aug 3, 2019, 1:10 PM Alex Gorbachev <a...@iss-integration.com> wrote:

> On Fri, Aug 2, 2019 at 6:57 PM Robert LeBlanc <rob...@leblancnet.us>
> wrote:
> >
> > On Fri, Jul 26, 2019 at 1:02 PM Peter Sabaini <pe...@sabaini.at> wrote:
> >>
> >> On 26.07.19 15:03, Stefan Kooman wrote:
> >> > Quoting Peter Sabaini (pe...@sabaini.at):
> >> >> What kind of commit/apply latency increases have you seen when adding a
> >> >> large number of OSDs? I'm nervous how sensitive workloads might react
> >> >> here, esp. with spinners.
> >> >
> >> > You mean when there is backfilling going on? Instead of doing "a big
> >>
> >> Yes, exactly. I usually tune down the max rebalance and max recovery
> >> active knobs to lessen the impact, but I still found that the additional
> >> write load can substantially increase I/O latencies. Not all workloads
> >> like this.
> >
> >
> > We have been using:
> >
> > osd op queue = wpq
> > osd op queue cut off = high
> >
> > It virtually eliminates the impact of backfills on our clusters. Our
> > backfill and recovery times have increased when the cluster has lots of
> > client I/O, but the clients haven't noticed that huge backfills have been
> > going on.
> >
> > ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
> Would this be superior to setting:
>
> osd_recovery_sleep = 0.5 (or some high value)
>
>
> --
> Alex Gorbachev
> Intelligent Systems Services Inc.
>
