Hi Thomas
To get the usage:
ceph osd df | sort -nk8
# VAR is the ratio of the OSD's utilization to the average utilization
# WEIGHT is the CRUSH weight; typically the disk capacity in TiB
# REWEIGHT is a temporary override of WEIGHT for manual rebalancing;
# it is reset when the OSD is marked out and in again (e.g. after a
# restart), unless noout is set
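As a quick illustration of those columns, the sketch below flags OSDs whose VAR exceeds a threshold. The sample output and the column position of VAR (9th here) are assumptions; the exact layout varies between Ceph releases, so check your own `ceph osd df` header first.

```shell
# Sample 'ceph osd df' output (layout assumed; verify column numbers
# against your release before using this on real output).
sample='ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 0  hdd   7.27699 1.00000  7.3T  5.9T  1.4T  81.0 1.25  210
 1  hdd   7.27699 1.00000  7.3T  4.1T  3.2T  56.3 0.87  180
 2  hdd   7.27699 0.85000  7.3T  4.8T  2.5T  65.9 1.02  195'

# Print OSDs whose VAR (ratio to average utilization) exceeds 1.2;
# NR > 1 skips the header line.
echo "$sample" | awk 'NR > 1 && $9 > 1.2 { print "osd." $1, "VAR=" $9 }'
```

On a live cluster you would pipe `ceph osd df` into the same awk filter instead of the sample text.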
For a temporary reweight you can use:
ceph osd reweight osd.<ID> <REWEIGHT>
or:
ceph osd test-reweight-by-utilization <OVERLOAD>
ceph osd reweight-by-utilization <OVERLOAD>
# <OVERLOAD> is a percentage of the average utilization (default 120,
# i.e. only OSDs above 1.2x the average are touched); the test-...
# variant is a dry run
For a permanent reweight you can use:
ceph osd crush reweight osd.<ID> <WEIGHT>
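Since WEIGHT is conventionally the disk capacity in TiB, a quick way to derive it from a size in bytes (the 8 TB figure below is just a hypothetical example, not from your cluster):

```shell
# Hypothetical raw capacity of an 8 TB disk, in bytes.
size_bytes=8001563222016

# Convert bytes -> TiB (1 TiB = 1024^4 bytes), 5 decimals as in
# typical CRUSH weights.
weight=$(awk -v b="$size_bytes" 'BEGIN { printf "%.5f", b / 1024^4 }')
echo "$weight"

# On a live cluster you would then run:
#   ceph osd crush reweight osd.<ID> $weight
```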
To speed up backfill I use this (warning: it degrades client performance):
ceph tell 'osd.*' injectargs '--osd_max_backfills 30
--osd_recovery_max_active 45 --osd_recovery_op_priority 10'
Then to set it back to the defaults:
ceph tell 'osd.*' injectargs '--osd_max_backfills 1
--osd_recovery_max_active 3 --osd_recovery_op_priority 3'
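If you flip these settings back and forth often, a tiny wrapper keeps the two calls consistent. This is only a sketch: set_recovery_tunables is a hypothetical helper name, and the echo prints the command instead of running it (drop the echo on a real cluster). The values are the ones from this thread, not recommendations.

```shell
# Dry-run helper: prints the injectargs command it would run.
set_recovery_tunables () {
  local backfills=$1 active=$2 priority=$3
  echo ceph tell 'osd.*' injectargs \
    "--osd_max_backfills $backfills" \
    "--osd_recovery_max_active $active" \
    "--osd_recovery_op_priority $priority"
}

set_recovery_tunables 30 45 10   # speed up backfill
set_recovery_tunables 1 3 3      # back to the defaults
```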
Cheers
Francois Scheurer
________________________________________
From: Thomas Schneider <[email protected]>
Sent: Wednesday, March 4, 2020 11:15 AM
To: [email protected]
Subject: [ceph-users] Forcibly move PGs from full to empty OSD
Hi,
Ceph balancer is not working correctly; there's an open bug
<https://tracker.ceph.com/issues/43752> report, too.
Until this issue is solved, I need a workaround because I am getting
more and more warnings about "nearfull osd(s)".
Therefore my question is:
How can I forcibly move PGs from full OSD to empty OSD?
THX
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
