Hi,
someone else had a similar issue [1]. To set the global container
image you can run:
$ ceph config set global container_image my-registry:5000/ceph/ceph:v17.2.6
I usually change that as soon as a cluster is up and running, or after
an upgrade, so there's no risk of pulling the wrong image.
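To confirm the setting took effect, something like the following should work (a sketch; `my-registry:5000` is just the placeholder registry from the example above, and per-daemon-type overrides, if any, win over the global value):

```shell
# Show the cluster-wide default image used when (re)deploying daemons
ceph config get global container_image

# List any container_image settings at other scopes that would override it
ceph config dump | grep container_image
```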
On 15-09-2023 10:25, Stefan Kooman wrote:
I could just nuke the whole dev cluster, wipe all disks and start
fresh after reinstalling the hosts, but as I have to adopt 17 clusters
to the orchestrator, I'd rather get some learnings out of the
non-working one.
There is actually a cephadm "kill
On 15-09-2023 09:21, Boris Behrens wrote:
Hi Stefan,
the cluster is running 17.2.6 across the board. The mentioned containers
with other versions don't show up in ceph -s or ceph versions.
It looks like it is host-related.
One host gets the correct 17.2.6 images, one gets the 16.2.11 images, and the
third one uses 7.0.0-7183-g54142666.
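To pin down which image each host is actually running, the orchestrator's daemon listing includes the image per daemon. A sketch, assuming a cephadm-managed cluster (exact field names in the YAML output can vary slightly between releases):

```shell
# Per-daemon host, version, and image as the orchestrator sees them
ceph orch ps --format yaml | grep -E 'daemon_type|daemon_id|hostname|container_image_name'

# Version map as reported by the daemons themselves
ceph versions
```

Daemons missing from `ceph versions` but visible in `podman ps` are usually strays not (yet) managed by the orchestrator.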
On 14-09-2023 17:49, Boris Behrens wrote:
Hi,
I'm currently trying to adopt our staging cluster; some hosts just pull
strange images.

root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS
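When a host keeps pulling an unexpected image, it can help to compare what is cached locally against what cephadm recorded for each daemon. A hedged sketch, assuming the standard cephadm on-disk layout under /var/lib/ceph/<fsid>/:

```shell
# Images already cached on this host (tags reveal stray 16.x or dev builds)
podman images --format '{{.Repository}}:{{.Tag}} {{.Id}}'

# Image cephadm recorded for each adopted daemon on this host
cat /var/lib/ceph/*/*/unit.image 2>/dev/null
```

A mismatch between these and the cluster's configured container_image points at the host still holding (or being told to use) the old image.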