What does it show if you do 'ceph versions'?

Also, there is a way to upgrade individual daemon types via a "staggered" 
upgrade:

"ceph orch upgrade --daemon-types <type1,Type2, etc.>"

Example types: mgr, mon, crash, osd, mds, rgw, rbd-mirror, and cephfs-mirror

NOTE: Enforced upgrade order is: mgr -> mon -> crash -> osd -> mds -> rgw -> 
rbd-mirror -> cephfs-mirror
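
For example (just a sketch - the v20.2.0 image name is taken from your 
message below, adjust as needed), you could first upgrade only the mgr 
and mon daemons:

    ceph orch upgrade start --image quay.io/ceph/ceph:v20.2.0 --daemon-types mgr,mon

then watch progress with "ceph orch upgrade status" and repeat the 
command with the remaining daemon types once that batch has finished.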

-- Michael



________________________________
From: Marek Szuba via ceph-users <[email protected]>
Sent: Tuesday, December 2, 2025 6:46 PM
To: [email protected] <[email protected]>
Subject: [ceph-users] Updating cephadm-managed Tentacle cluster from RC to 
production


Hello,

We have got a cephadm-managed test Ceph cluster created with a Tentacle
Release Candidate, RC1 if memory serves me right.

With 20.2.0 having officially been released in mid-November, we would
ideally like to bring this cluster into production without data loss -
none of the data already present on the OSDs is irreplaceable, but we
would prefer to retain the snapshot history we have established during
the test phase.

Unfortunately, it turns out that the naive approach of having "ceph orch
upgrade" bring the cluster to a production release does not work. If we
run it with "--ceph-version 20.2.0", it immediately fails with the error
"cannot downgrade to a dev version"; with "--image
quay.io/ceph/ceph:v20.2.0", on the other hand, it appears to start
successfully but gets paused immediately, with the same error message
appearing in the upgrade status report.
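
Concretely, the two invocations were roughly:

    ceph orch upgrade start --ceph-version 20.2.0
    ceph orch upgrade start --image quay.io/ceph/ceph:v20.2.0

with the error in the second case showing up in the output of
"ceph orch upgrade status".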

The "downgrade" bit is likely related to the fact that according to
"ceph versions", the Ceph version we have presently got running is
20.3.0-2957-g62bcf65e (despite the fact the cephadm package we used
clearly having been labelled 20.1.something). The "to a dev version"
bit, however, has got me stumped given the long-established Ceph has
been using the "even minor versions are production" convention for ages.

Any thoughts on how we can get this cluster upgraded to a production
release without wiping everything and starting from scratch? Thanks in
advance!

PS. cephadm on the admin node has already been upgraded to 20.2.0.
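(The version on the admin node can be confirmed with "cephadm version".)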

--
MS

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]