I set the values large for a quick recovery:

osd_recovery_max_active 16

osd_max_backfills 32

Are these very bad settings?

Only bad for the clients. ;-) As Stefan already advised, turn these values down to 1 and let the cluster rebalance slowly. If client performance seems fine you can increase them by 1 or so and see how the cluster behaves. You'll have to find reasonable values for your specific setup to strike a good balance between quick recovery and acceptable client performance.
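
As a minimal sketch (assuming a release with the centralized config store, i.e. Nautilus or newer; on older releases you'd use "ceph tell osd.* injectargs" instead), dialing the throttles back and then raising them step by step could look like:

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# watch client latency (e.g. with "ceph -s" and your monitoring), then raise gradually:
ceph config set osd osd_max_backfills 2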

Regards,
Eugen


Quoting norman <[email protected]>:

Stefan,

I agree with you about the CRUSH rule, but I really did run into this
problem on the cluster.

I set the values large for a quick recovery:

osd_recovery_max_active 16

osd_max_backfills 32

Are these very bad settings?


Kern

On 18/8/2020 5:27 PM, Stefan Kooman wrote:
On 2020-08-18 11:13, Hans van den Bogert wrote:
I don't think it will lead to more slow client requests if you set it
to 4096 in one step, since there is a cap on how many recovery/backfill
requests there can be per OSD at any given time.

I am not sure though, but I am happy to be proven wrong by the senior
members on this list :)
Not sure if I qualify as senior, but here are my 2 cents ...

I would argue that you do want to do this in one step. Doing it in
multiple steps will trigger data movement every time you change pg_num
(and pgp_num for that matter). Ceph recalculates a new mapping every
time you change the pg(p)_num for a pool (or when you alter CRUSH rules).
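
As an illustration only (the pool name "mypool" is a placeholder, 4096 is the target discussed above), the one-step change would look like:

ceph osd pool set mypool pg_num 4096
ceph osd pool set mypool pgp_num 4096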

osd_recovery_max_active = 1
osd_max_backfills = 1
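
If you want to verify what a running OSD is actually using, one way (just a sketch, using osd.0 as an example, run on the host that carries that OSD) is the admin socket:

ceph daemon osd.0 config get osd_max_backfills
ceph daemon osd.0 config get osd_recovery_max_active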

If your cluster can't handle this, then I wonder what a disk / host
failure would trigger.

Some on this list would argue that you also want the following setting
to avoid client IO starvation:

ceph config set osd osd_op_queue_cut_off high

This is already the default in Octopus.

Gr. Stefan


_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
