On 17-06-20 02:44 PM, Richard Hesketh wrote:
> Is there a way, either by individual PG or by OSD, I can prioritise
> backfill/recovery on a set of PGs which are currently particularly important to
> me?
> For context, I am replacing disks in a 5-node Jewel cluster, on a node-by-node
> basis - mark out the OSDs on a node, wait for them to clear, [...]
Setting an osd to 0.0 in the crush map will tell all PGs to move off of the
osd. It's effectively the same as removing the osd from the cluster, except it
allows the osd to help move the data that it has and prevents having
degraded PGs and objects while you do it. The limit to weighting osds to
0.0 is whether you have the room elsewhere for the data.
These settings can be applied on a specific OSD:
> osd recovery max active = 1
> osd max backfills = 1
I don't know if it will behave as you expect if you set 0... (I tested
setting 0, which didn't complain, but is 0 actually 0, unlimited, or an
error?)
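Those per-OSD throttles can be changed at runtime with injectargs; a rough sketch (osd.12 and osd.13 are placeholder IDs, and the values are just illustrative, not recommendations):

```shell
# Raise the limits on the OSDs whose PGs you want to finish first...
ceph tell osd.12 injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'

# ...and keep the rest throttled at the default of 1.
ceph tell osd.13 injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
```

Note that injectargs changes are not persistent across OSD restarts; put the values in ceph.conf if they need to survive.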
Maybe you could parse the output of ceph pg dump, then look at the [...]
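The pg dump idea could be sketched in Python; the snippet below parses a tiny hand-made sample shaped like `ceph pg dump --format json` output (real dumps carry many more fields, and the field names here follow the Jewel-era schema, so verify them against your own cluster):

```python
import json

# Hypothetical miniature of `ceph pg dump --format json` output;
# the real command emits a "pg_stats" array with "pgid" and "state"
# (plus many other fields) per PG.
sample = json.loads("""
{"pg_stats": [
  {"pgid": "1.0", "state": "active+clean"},
  {"pgid": "1.1", "state": "active+undersized+degraded+backfilling"},
  {"pgid": "2.3", "state": "active+remapped+wait_backfill"}
]}
""")

# Collect PGs that are degraded or involved in backfill; the part of the
# pgid before the dot is the pool ID, so you can see where the work is.
busy = [pg["pgid"] for pg in sample["pg_stats"]
        if "degraded" in pg["state"] or "backfill" in pg["state"]]
print(busy)
```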
If you're planning to remove the next set of disks, I would recommend
weighting them to 0.0 in the crush map if you have the room for it. The
process at this point would be weighting the next set to 0.0 when you add
the previous set back in. That way, when you finish removing the next set,
there is no data left on them.
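The reweighting itself is done with `ceph osd crush reweight`; a sketch, where osd.12 is a placeholder ID and 1.81898 stands in for whatever the disk's original crush weight was:

```shell
# Drain the OSD: PGs move off while it still helps copy its own data.
ceph osd crush reweight osd.12 0.0

# After the disk is replaced, restore the original weight to fill it back up.
ceph osd crush reweight osd.12 1.81898
```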
Yes, I don't know exactly in which release it was introduced, but in
latest jewel and beyond there is:
Please use the pool-level options recovery_priority and recovery_op_priority
to enable the pool-level recovery priority feature:
# ceph osd pool set default.rgw.buckets.index recovery_priority 5
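For completeness, a sketch applying both pool-level options mentioned above (the pool name follows the earlier example, and a priority of 5 is just an illustrative value):

```shell
# Prefer this pool when scheduling recovery/backfill work...
ceph osd pool set default.rgw.buckets.index recovery_priority 5
# ...and raise the priority of its individual recovery ops.
ceph osd pool set default.rgw.buckets.index recovery_op_priority 5
```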
Is there a way to prioritize specific pools during recovery? I know there are
issues open for it, but I wasn't aware it was implemented yet...
Regards,
Logan
- On Jun 20, 2017, at 8:20 AM, Sam Wouters wrote:
| Hi,
| Are they all in the same pool? Otherwise you could prioritize pool recovery.
Hi,
Are they all in the same pool? Otherwise you could prioritize pool recovery.
If not, maybe you can play with the osd max backfills number; no idea if
it accepts a value of 0 to actually disable it for specific OSDs.
r,
Sam
On 20-06-17 14:44, Richard Hesketh wrote:
> Is there a way, either by individual PG or by OSD, I can prioritise
> backfill/recovery on a set of PGs which are currently particularly important to
> me?
> For context, I am replacing disks in a 5-node Jewel cluster, on a node-by-node
> basis - mark out the OSDs on a node, wait for them to clear, rep