I think you should set "osd_pool_default_min_size=1" before you add the OSDs,
and the OSDs you add at any one time should be in the same failure domain.
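As a sketch of what I mean (this assumes a Mimic or later cluster with the
centralized config database; note that the default only applies to pools
created afterwards, existing pools keep their own min_size):

    # lower the default min_size used for newly created pools
    ceph config set global osd_pool_default_min_size 1

    # existing pools must be adjusted individually; "rbd" is just a
    # placeholder pool name here
    ceph osd pool set rbd min_size 1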
Hi,
What would be the proper way to add 100 new OSDs to a cluster?
I have to add 100 new OSDs to our current > 300 OSD cluster, and I would like
to know how you do it.
Usually, we add them quite slowly. Our cluster is a pure SSD/NVMe one and it
can handle plenty of load, but for the sake of safety (it hosts thousands of
VMs via RBD) we usually add them one by one, waiting a long time between
adding each OSD.
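Concretely, the routine is more or less this (just a sketch; /dev/sdX is a
placeholder device, and we deploy with ceph-volume):

    # bring one new OSD into the cluster
    ceph-volume lvm create --data /dev/sdX

    # wait until recovery/backfill has finished before adding the next one
    while ! ceph health | grep -q HEALTH_OK; do
        sleep 60
    done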
Obviously this leads to PLENTY of data movement: each time the cluster
geometry changes, data is migrated among all the OSDs. But with the kind of
load we have, if we add several OSDs at the same time, some PGs can get stuck
for a while as they peer with the new OSDs.
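When that happens, we usually spot it with something like this (sketch):

    # list PGs that are stuck inactive (e.g. stuck peering)
    ceph pg dump_stuck inactive

    # and keep an eye on overall recovery progress
    ceph -s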
Now that I have to add > 100 new OSDs, I was wondering if somebody has some
suggestions.
Thanks!
Xavier.
[email protected]