Hi everyone,

I have a storage cluster running RHCS v4 (old, I know) and am looking to upgrade
it soon. I would also like to migrate from RHCS to the open source version of
Ceph at some point, since our Ceph support contract with Red Hat is unlikely to
be renewed.

I was wondering if anyone has advice on how to upgrade our cluster with
minimal production impact. I have the following server configuration:

  + 3x monitors
  + 3x metadata servers
  + 2x RadosGWs, fronted by 2x servers running HAProxy and Keepalived for HA.
  + 19x OSD nodes - 110TB of HDD and 1TB of NVMe each (~2.1PB raw total).

Currently, I have RHCS v4 installed bare-metal on RHEL 7. I see that newer
releases expect containerized deployments, so I am thinking it is best to
first migrate to a containerized installation and then upgrade everything
else.
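
What I had in mind for the containerization step is the ceph-ansible switch
playbook. A rough sketch, assuming ceph-ansible still manages the cluster
(the inventory path and group_vars settings are placeholders for our own):

    # Flip the existing bare-metal RHCS 4 daemons to containers, same versions.
    # group_vars needs containerized_deployment: true and ceph_docker_image
    # pointing at the RHCS 4 container image before running this.
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts \
        infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml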

My first inclination is to do the upgrade like this:

  1. Move the existing installation to containers (sketch above), keeping the
     same Ceph versions and OS installs.

  2. Pull one monitor, do a fresh install of RHEL 9, reinstall RHCS v4, and
     re-add it to the cluster (guard-rail sketch after this list). Repeat for
     all the monitors.

  3. Pull one MDS, do the same as step 2 but for MDS.

  4. Pull one RadosGW, do the same as step 2 but for RadosGW.

  5. Pull one OSD node, let the cluster rebalance, do a fresh install of
     RHEL 9, reinstall RHCS v4, re-add it to the cluster, and rebalance again.
     Repeat for all OSD nodes.

  6. Upgrade from RHCS to OSS Ceph, stepping through Octopus -> Pacific ->
     Quincy -> Reef (upgrade sketch after this list).
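
For steps 2 through 5, these are roughly the guard-rail commands I was
planning to run around each node pull. They are standard Ceph CLI calls;
the daemon name and OSD id below are placeholders:

    # Before pulling a monitor (step 2), check quorum survives and pause
    # rebalancing for the short maintenance window:
    ceph mon ok-to-stop host1            # placeholder mon name
    ceph osd set noout
    # For an OSD node rebuild (step 5), drain it properly instead:
    ceph osd out 12                      # placeholder id; migrates data off it
    ceph osd safe-to-destroy osd.12      # proceed only once this reports safe
    ceph osd purge 12 --yes-i-really-mean-it
    # After the rebuilt node is back in:
    ceph osd unset noout
    ceph -s                              # wait for HEALTH_OK before the next node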
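
For step 6, my understanding (please correct me) is that the first hop still
goes through ceph-ansible, and that once on a cephadm-capable release the
RHCS -> OSS switch mostly comes down to pointing the orchestrator at the
upstream images. A sketch, with version tags and daemon names as examples
rather than a tested procedure:

    # Nautilus -> Octopus hop via ceph-ansible:
    ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml
    # Adopt the daemons into cephadm (repeat per daemon, per host):
    cephadm adopt --style legacy --name mon.host1   # placeholder daemon name
    # Later hops pull the upstream (OSS) image instead of registry.redhat.io,
    # which is effectively the RHCS -> OSS migration itself:
    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.15   # Pacific; example tag
    ceph orch upgrade status
    ceph versions   # confirm all daemons report the new release before the next hop

I have also read that upgrades can jump two releases at a time, so perhaps
Octopus can be skipped and Nautilus -> Pacific done directly, but I would
want confirmation on that.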

Does such a plan seem reasonable? Are there any major pitfalls to an approach
like this? Ideally, I would just build an entirely new cluster on Ceph Reef,
but there are obvious budgetary issues with such a plan.

My biggest concerns are with moving to a containerized installation and then
migrating from RHCS to OSS Ceph.

Any advice or feedback is much appreciated.

Best,

Josh