On Monday, December 29, 2025 10:22:13 AM Eastern Standard Time Iztok Gregori 
via ceph-users wrote:
> On 29/12/25 14:57, Anthony D'Atri wrote:
>
> >> In my case it will be used to "adopt" the services, but after that what
> >> is its intended usage? The "orchestrator" should be used to add/remove
> >> new hosts and to upgrade the cluster.
> 
> > 
> > Cephadm *is* the orchestrator. The nomenclature may seem misleading.
> > There's a part that runs on each node, as well as a Manager module and CLI
> > commands.
> 
> Understood. My question was about the "cephadm" command, the one the 
> documentation says has to be installed on all the nodes to start the 
> "adoption" process. I'm not sure if this command is needed only for that 
> process or if it is an integral part of the cephadm/orchestrator.

The cephadm system is split into two parts: a manager module and the cephadm 
command. The command is often called "the binary" or, a little more verbosely, 
"the cephadm binary".

A node will often have two or more copies of the cephadm binary. There's the 
copy that is (typically) installed by an administrator and is run for 
bootstrapping, adoption, and some administrative commands. Then there's a copy 
that is pushed to each node by the mgr module. That copy is coupled to the mgr 
module (you can find the cached copies under /var/lib/ceph).
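
For example, on an adopted node you can usually spot both copies side by side 
(the fsid in the path below is a placeholder for your cluster's fsid):

  # the copy installed by the administrator (package or downloaded script)
  which cephadm
  # the copies pushed and cached by the mgr module, one file per checksum
  ls /var/lib/ceph/<fsid>/cephadm.*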

I hope this short architectural description is helpful. :-) 


> 
> 
> >> I'm asking this question because my upgrade path consists of multiple
> >> consecutive upgrades, and I see that for the latest version of Quincy
> >> (17.2.9) and for newer Ceph releases there is no cephadm "package" for
> >> el8.
> 
> > 
> > Check the compatibility matrix in the docs.
> 
> 
> Mhmm... I suspect that I'm entering some edge-case territory here (and 
> exiting the subject of this thread). I have a cluster running Ceph Octopus 
> 15.2.17 on EL8 (almost all the nodes; some are EL7 but can be 
> reinstalled/removed). My goal is to upgrade it AND to embrace cephadm as 
> the orchestrator. My plan is/was:
> 
> 1. Adopt cephadm (with octopus containers)
> 2. Upgrade to Quincy.
> 3. Upgrade to Squid.
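
For reference, steps 1-3 would typically be driven by commands along these 
lines (the daemon names and the Squid version below are placeholders, so 
treat this as a sketch rather than a tested procedure):

  # 1. adopt each existing daemon into an (Octopus) container, one at a time
  cephadm adopt --style legacy --name mon.$(hostname -s)
  cephadm adopt --style legacy --name osd.0     # repeat for every daemon
  # 2. and 3. staged upgrades through the orchestrator
  ceph orch upgrade start --ceph-version 17.2.9
  ceph orch upgrade start --ceph-version <squid version>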
> 
> But now, looking at the various compatibility matrices, I'm starting to 
> worry that this is not possible:
> 
> - Podman on EL8 is version 4, but from the table at 
> https://docs.ceph.com/en/squid/cephadm/compatibility/#cephadm-compatibility-with-podman 
> podman will not run with Octopus.
> - There is no mention of EL8 in 
> https://docs.ceph.com/en/squid/start/os-recommendations/
> 
> I could skip the adoption with Octopus containers and go directly to 
> adoption+upgrade to Quincy. But in any case I'll end up with most hosts 
> running EL8, which I cannot find in any of the matrices.
> 
> So the best solution would be to upgrade all the machines to EL9 and 
> concurrently upgrade Ceph to Quincy. For the MONs and MGRs this shouldn't be 
> too much of a problem (it should be possible to add a new monitor with a 
> newer Ceph release to an existing cluster), but how do I reinstall the OSD 
> servers without data movement?
> 
> In theory it should suffice to put the cluster in 'noout', reinstall the 
> host without touching the OSD disks and then recreate the OSDs. Is this 
> possible?
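
Roughly, and assuming the cluster is already managed by cephadm at that point 
(the host name below is just a placeholder), that flow could look like:

  ceph osd set noout                      # avoid rebalancing while the host is down
  # ...reinstall the OS, leaving the OSD data devices untouched...
  ceph orch host add osd-host-01          # re-register the freshly installed host
  ceph cephadm osd activate osd-host-01   # let cephadm detect and start the existing OSDs
  ceph osd unset noout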
> 
> Thank you for your time!
> Iztok
> 



_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
