Hi,

Are they all in the same pool? If not, you could prioritize recovery for the pool you care about. If they are, maybe you can play with the osd_max_backfills setting per OSD - no idea whether it accepts a value of 0 to actually disable backfill on specific OSDs.
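Roughly what I mean, as an untested sketch - the pool name and OSD id below are placeholders, and the recovery_priority pool value may not exist on your release:

    # Bump recovery/backfill priority for the pool whose PGs matter right now
    # (only if your release supports the recovery_priority pool value)
    ceph osd pool set important-pool recovery_priority 10

    # Or try to hold back backfill on the OSDs you don't care about yet,
    # e.g. osd.10 - no idea whether 0 is actually accepted here
    ceph tell osd.10 injectargs '--osd-max-backfills 0'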
r,
Sam

On 20-06-17 14:44, Richard Hesketh wrote:
> Is there a way, either by individual PG or by OSD, I can prioritise
> backfill/recovery on a set of PGs which are currently particularly
> important to me?
>
> For context, I am replacing disks in a 5-node Jewel cluster, on a
> node-by-node basis - mark out the OSDs on a node, wait for them to clear,
> replace OSDs, bring up and in, mark out the OSDs on the next set, etc.
> I've done my first node, but the significant CRUSH map changes mean most
> of my data is moving. I only currently care about the PGs on my next set
> of OSDs to replace - the other remapped PGs I don't care about settling,
> because they're only going to end up moving around again after I do the
> next set of disks. I do want the PGs specifically on the OSDs I am about
> to replace to backfill, because I don't want to compromise data integrity
> by downing them while they host active PGs. If I could specifically
> prioritise the backfill on those PGs/OSDs, I could get on with replacing
> disks without worrying about causing degraded PGs.
>
> I'm in a situation right now where there are merely a couple of dozen PGs
> on the disks I want to replace, which are all remapped and waiting to
> backfill - but there are 2200 other PGs also waiting to backfill because
> they've moved around too, and it's extremely frustrating to be sat
> waiting to see when the ones I care about will finally be handled so I
> can get on with replacing those disks.
>
> Rich
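For reference, a rough sketch of the drain/replace cycle you describe, plus a way to see what is still sitting on the OSDs about to be swapped (the OSD ids are just examples):

    # Which PGs still map to the OSDs I'm about to replace, and in what state?
    ceph pg ls-by-osd osd.10
    ceph pg ls-by-osd osd.11

    # The node-by-node cycle itself
    ceph osd out 10 11
    # ... wait for the relevant PGs to finish backfilling off those OSDs ...
    # replace the disks, recreate the OSDs, then bring them back in:
    ceph osd in 10 11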
