Based on the original concept of *osd_max_backfills*, which prevents the
following situation: *If all of these backfills happen simultaneously, it
would put excessive load on the OSD.*
The value of osd_max_backfills could be important in some situations. So
we might not be able to say how it's
Hi, Jan
2015-06-01 15:43 GMT+08:00 Jan Schermer j...@schermer.cz:
We had to disable deep scrub or the cluster would be unusable - we need to
turn it back on sooner or later, though.
With minimal scrubbing and recovery settings, everything is mostly good.
Turned out many issues we had were
From an ease-of-use standpoint, and depending on the situation you are
setting up your environment for, the idea is as follows:
It seems like it would be nice to have some easy on-demand control where
you don't have to think a whole lot, other than knowing how it is going to
affect your cluster in a
With a write-heavy RBD workload, I add the following to ceph.conf:
osd_max_backfills = 2
osd_recovery_max_active = 2
If things are going well during recovery (i.e. guests happy and no slow
requests), I will often bump both up to three:
# ceph tell osd.* injectargs '--osd-max-backfills 3 --osd-recovery-max-active 3'
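A small sketch of the runtime bump described above, composed as a dry run so the exact `injectargs` string can be sanity-checked before it is pointed at a live cluster (the option names are the ones used in this thread; spelling may vary by Ceph version):

```shell
# Compose the injectargs payload from the desired throttle values.
# Printing the command first (dry run) avoids surprises on a live cluster;
# injectargs changes are not persistent across OSD restarts.
BACKFILLS=3
RECOVERY=3
ARGS="--osd-max-backfills ${BACKFILLS} --osd-recovery-max-active ${RECOVERY}"
CMD="ceph tell osd.* injectargs '${ARGS}'"
echo "$CMD"
```

Quoting `$CMD` when echoing keeps the shell from glob-expanding `osd.*` against files in the current directory.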
On Mon, 1 Jun 2015, Gregory Farnum wrote:
On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
pvonstamw...@us.fujitsu.com wrote:
On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have
Slow requests are not exactly tied to the PG number, but we were getting slow
requests whenever backfills or recoveries fired up - increasing the number of
PGs helped with this as the “blocks” of work are much smaller than before.
We have roughly the same number of OSDs as you but only one
We had to disable deep scrub or the cluster would be unusable - we need to turn
it back on sooner or later, though.
With minimal scrubbing and recovery settings, everything is mostly good. Turned
out many issues we had were due to too few PGs - once we increased them from 4K
to 16K everything
On 06/01/15 09:43, Jan Schermer wrote:
We had to disable deep scrub or the cluster would be unusable - we need to
turn it back on sooner or later, though.
With minimal scrubbing and recovery settings, everything is mostly good.
Turned out many issues we had were due to too few PGs - once we
On 05/29/2015 04:47 PM, Samuel Just wrote:
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking about
changing the defaults as follows:
osd_max_backfills to 1 (from 10)
osd_recovery_max_active to 3
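For anyone wanting these conservative values regardless of which defaults ship, pinning them explicitly in ceph.conf keeps behavior stable across upgrades. A minimal fragment, using the option names quoted in this thread:

```ini
[osd]
; conservative recovery throttles discussed in this thread
osd_max_backfills = 1
osd_recovery_max_active = 3
osd_recovery_op_priority = 1
```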
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, that advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x replication, with the trade-off of better
client
On 06/01/2015 05:34 PM, Wang, Warren wrote:
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, that advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x
On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
pvonstamw...@us.fujitsu.com wrote:
On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have reported that they need to lower the osd recovery
On Fri, May 29, 2015 at 5:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking
about changing the defaults as follows:
osd_max_backfills to 1 (from 10)
Hi,
We did it the other way around instead: defining a period where the load is
lighter and turning backfill/recovery off and on. Then you want the backfill
values to be what is the default right now.
Also, someone (I think it was Greg?) said that if you have problems with
backfill, your cluster backing
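The scheduled-window approach described above can be sketched with the cluster-wide `nobackfill`/`norecover` flags that `ceph osd set`/`unset` accepts. This is a hypothetical helper, not anyone's posted script; it echoes the commands (dry run) so the logic can be exercised without a cluster — drop the `echo` to run them for real:

```shell
# Toggle backfill/recovery with the time of day; intended to be called
# from cron at the start and end of the quiet window.
set_recovery() {
  case "$1" in
    on)
      # quiet window starts: let recovery work run at full tilt
      echo ceph osd unset nobackfill
      echo ceph osd unset norecover
      ;;
    off)
      # busy period starts: pause backfill and recovery
      echo ceph osd set nobackfill
      echo ceph osd set norecover
      ;;
  esac
}

# e.g. from cron: enable heavy recovery at 01:00, pause it again at 06:00
set_recovery on
set_recovery off
```

With real crontab entries that would look like `0 1 * * * /usr/local/sbin/recovery-window on` and `0 6 * * * /usr/local/sbin/recovery-window off` (paths hypothetical).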
Re: [ceph-users] Discuss: New default recovery config settings
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking
about changing the defaults as follows:
osd_max_backfills to 1 (from 10)
Sam,
We are seeing some good client IO results during recovery by using the
following values:
osd recovery max active = 1
osd max backfills = 1
osd recovery threads = 1
osd recovery op priority = 1
It is all flash though. The recovery time in case of entire node (~120 TB)
failure/a single
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking about
changing the defaults as follows:
osd_max_backfills to 1 (from 10)
osd_recovery_max_active to 3 (from 15)
osd_recovery_op_priority to 1 (from