Hi Adam.
Re-deploying didn't work, but `ceph config dump` showed that one of the
container_image settings specified 16.2.10-160.
After we removed that var, it instantly redeployed the OSDs.
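For the archives, roughly what that looked like (on our cluster the pinned
setting was at the global level, so adjust the section name to match yours):

  ceph config dump | grep container_image   # find the pinned container_image entry
  ceph config rm global container_image     # drop the override; cephadm then redeployed our OSDs
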
Thanks again for your help.
___
Thanks so much!
I'll give it a shot.
___
No, you can't use the image id for the upgrade command; it has to be the
image name. So based on what you have, it should start with
registry.redhat.io/rhceph/. As for the full name, it depends which image
you want to go with. As for trying this on an OSD first, there is
`ceph orch daemon redeploy`.
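Something along these lines, as a sketch (osd.0 and <full-image-name> are
placeholders, substitute whichever daemon and image you end up going with):

  # try the new image on a single OSD daemon first
  ceph orch daemon redeploy osd.0 <full-image-name>

  # if that daemon comes back healthy, run the cluster-wide upgrade with the image name
  ceph orch upgrade start --image <full-image-name>
  ceph orch upgrade status
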
Hi Adam!
In addition to my earlier question about whether there's a way to try a
more targeted upgrade first (so we don't risk accidentally breaking the
entire production cluster),
`ceph config dump | grep container_image` shows:
global   basic   container_image
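(The value is cut off in that paste; I can pull the full string with
something like `ceph config get osd container_image` if that's useful.)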
Thanks! Is there a way of trying out the update on one OSD first to make
sure we don't nuke the entire production cluster?
___
From the ceph versions output I can see
"osd": {
"ceph version 16.2.10-160.el8cp
(6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 160
},
It seems like all the OSD daemons on this cluster are using that
16.2.10-160 image, and I'm guessing most of them are running, so it