I have used the gentle reweight script many times in the past. But more
recently, I expanded one cluster from 334 to 1114 OSDs by just changing
the crush weight 100 OSDs at a time. Once all PGs from those 100 were
stable and backfilling, I added another hundred. I stopped at 500 and let the
backfill complete.
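The batch approach described above can be sketched roughly as follows. The OSD ids and the target crush weight are hypothetical, and with DRY_RUN=1 (the default) the commands are only printed, not run against a cluster:

```shell
#!/bin/sh
# Sketch of the incremental expansion described above. OSD ids and the
# target weight are hypothetical; DRY_RUN=1 (default) only prints commands.
DRY_RUN=${DRY_RUN:-1}
TARGET_WEIGHT=7.27938   # assumed crush weight, e.g. for an 8 TB drive

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Raise one batch of 100 new OSDs (created with crush weight 0) to full weight.
for id in $(seq 334 433); do
  run ceph osd crush reweight "osd.$id" "$TARGET_WEIGHT"
done
# Wait until all PGs from this batch are stable and backfilling before
# starting the next batch of 100.
```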
Hello,
I would advise using this script from Dan:
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
I have used it many times and it works great, also if you want to drain
the OSDs.
Hth
Mehmet
On 30 May 2019 22:59:05 CEST, Michel Raabe wrote:
Hi Mike,
On 30.05.19 02:00, Mike Cave wrote:
I’d like as little friction for the cluster as possible, as it is in
heavy use right now.
I’m running mimic (13.2.5) on CentOS.
Any suggestions on best practices for this?
You can limit the recovery rate, for example with:
* osd_max_backfills
* osd_recovery_max_active
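Those knobs can be lowered at runtime and persisted. The values here are only illustrative, and with DRY_RUN=1 (the default) the commands are printed rather than executed:

```shell
#!/bin/sh
# Throttle recovery/backfill impact; the values are illustrative.
# DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Apply to all running OSDs immediately:
run ceph tell 'osd.*' injectargs '--osd_max_backfills=1 --osd_recovery_max_active=1'

# Persist in the cluster configuration database (Mimic and later):
run ceph config set osd osd_max_backfills 1
run ceph config set osd osd_recovery_max_active 1
```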
Hello Mike,
there is no problem adding 100 OSDs at the same time if your cluster is
configured correctly.
Just add the OSDs and let the cluster rebalance gradually (as fast as
your hardware supports without service interruption).
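One common way to do a large all-at-once addition (a sketch, not necessarily Martin's exact procedure) is to set the norebalance flag while the new OSDs come up, then unset it and let data move. With DRY_RUN=1 (the default) the commands are only printed:

```shell
#!/bin/sh
# Sketch: bring up many new OSDs at once without data moving mid-deployment.
# DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run ceph osd set norebalance      # pause rebalancing while OSDs are added
# ... deploy the new OSDs here (ceph-ansible, ceph-volume, etc.) ...
run ceph osd unset norebalance    # let the cluster start moving data
run ceph -s                       # watch recovery progress
```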
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail:
Good afternoon,
I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per
node) and am trying to determine the best way to go about this task.
I deployed the cluster with ceph-ansible and everything worked well, so I’d
like to add the new nodes with ceph-ansible as well.
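For a ceph-ansible expansion, the usual pattern is to add the new hosts to the [osds] group of the inventory and re-run the site playbook. The inventory path, playbook name, and group name are assumptions about this particular setup; with DRY_RUN=1 (the default) the command is only printed:

```shell
#!/bin/sh
# Sketch of a ceph-ansible expansion run (inventory path "hosts", playbook
# "site.yml", and group "osds" are assumptions about this setup).
# DRY_RUN=1 (default) only prints the command.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Add the five new hosts to the [osds] group in the inventory file.
# 2. Re-run the site playbook, limited to the OSD hosts:
run ansible-playbook -i hosts site.yml --limit osds
```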