At least on the current up-to-date reef branch (I'm not sure which reef
version you're on), when --image is not provided to the shell, cephadm
should try to infer the image in this order:

   1. from the CEPHADM_IMAGE environment variable
   2. if you pass --name with a daemon name to the shell command, it
   will try to use the image that daemon uses
   3. the image being used by any ceph container already running on
   the host
   4. the most recently built ceph image it can find on the host (by
   the image's CreatedAt metadata field)


For the reimage case, there is a `ceph cephadm osd activate <host>`
command that is meant to handle exactly this sort of OSD reactivation.
If I'm being honest, I haven't looked at it in some time, but it does
have some CI test coverage via
https://github.com/ceph/ceph/blob/main/qa/suites/orch/cephadm/osds/2-ops/rmdir-reactivate.yaml
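
As a rough sketch (the hostname below is made up), once the reimaged
host is back in the cluster you would run something like

   ceph cephadm osd activate ceph-node-01

from a node with an admin keyring, and cephadm should bring the
existing OSDs on that host's untouched disks back up as containerized
daemons.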

On Thu, May 16, 2024 at 11:45 AM Matthew Vernon <mver...@wikimedia.org>
wrote:

> Hi,
>
> I've some experience with Ceph, but haven't used cephadm much before,
> and am trying to configure a pair of reef clusters with cephadm. A
> couple of newbie questions, if I may:
>
> * cephadm shell image
>
> I'm in an isolated environment, so pulling from a local repository. I
> bootstrapped OK with
> cephadm --image docker-registry.wikimedia.org/ceph bootstrap ...
>
> And that worked nicely, but if I want to run cephadm shell (to do any
> sort of admin), then I have to specify
> cephadm --image docker-registry.wikimedia.org/ceph shell
>
> (otherwise it just hangs, failing to talk to quay.io).
>
> I found the docs, which refer to setting lots of other images, but not
> the one that cephadm uses:
>
> https://docs.ceph.com/en/reef/cephadm/install/#deployment-in-an-isolated-environment
>
> I found an old tracker in this area: https://tracker.ceph.com/issues/47274
>
> ...but is there a good way to arrange for cephadm to use the
> already-downloaded image without having to remember to specify --image
> each time?
>
> * OS reimages
>
> We do OS upgrades by reimaging the server (which doesn't touch the
> storage disks); on an old-style deployment you could then use
> ceph-volume to re-start the OSDs and away you went; how does one do this
> in a cephadm cluster?
> [I presume it involves telling cephadm to download a new image for
> podman to use, and suchlike]
>
> Would the process be smoother if we arranged to leave /var/lib/ceph
> intact between reimages?
>
> Thanks,
>
> Matthew
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
