Hi,

we are planning to expand our Ceph cluster (Reef) from 8 OSD nodes and 3
monitors to 12 OSD nodes and 3 monitors. Currently each OSD node has a JBOD
with 28 x 6TB HDDs, and we are using EC 3+2, since at first there were
just 5 OSD nodes. No NVMes are used for the OSDs, since we do not have
enough of them in any one node. Bare metal, no containers; we build the
binaries for Alma Linux ourselves. We are using the RGW S3 frontend and see
on average around 50MB of upload/write during the day, with spikes of up
to 700MB.
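For context, the capacity and per-pool layout numbers I am working from come
from the usual status commands (profile name is a placeholder):

    ceph df detail                                # raw vs. usable capacity per pool
    ceph osd df tree                              # per-OSD and per-host utilization
    ceph osd pool ls detail                       # pg_num and EC profile per pool
    ceph osd erasure-code-profile get <profile>   # the current k=3 m=2 profile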

It works well.

The new OSD nodes will have a JBOD with 30 x 12TB disks. Again no NVMes for
the OSDs, just for the OS.

We will add each new node to the cluster, then add its new OSDs one by one
(waiting for each to rebalance), then drain the old OSDs one by one (again
waiting for each to rebalance). It will be a slow process, but we do not
want to overload anything.
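Roughly what I have in mind per disk (OSD IDs and weights are placeholders,
and the recovery throttling depends on the mClock profile in Reef):

    # keep client traffic prioritized while backfilling
    ceph config set osd osd_mclock_profile high_client_ops

    # bring a new OSD in gradually: create it with crush weight 0,
    # then step the weight up and wait for HEALTH_OK between steps
    ceph osd crush reweight osd.NNN 2.0
    ceph osd crush reweight osd.NNN 5.0
    ceph osd crush reweight osd.NNN 10.9    # full weight of a 12TB disk, in TiB

    # drain an old OSD the same way, in reverse
    ceph osd crush reweight osd.MMM 0
    # once all of its PGs have moved off:
    ceph osd out MMM
    systemctl stop ceph-osd@MMM
    ceph osd purge MMM --yes-i-really-mean-it

If needed, the norebalance flag (ceph osd set/unset norebalance) can pause
data movement between steps.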

I am thinking of creating a new EC pool with 8+3 (leaving one node for
"backup") and migrating users one by one to the new EC pool.

Has anyone had a similar experience?

Any thoughts and advice appreciated.

Kind regards,
Rok