>
> > However, in practice,
> > many operations (e.g., using ceph-bluestore-tool
>
> Using that tool, to be fair, should be rare.  Notably that tool requires
> that the OSD on which it operates not be running.  I would think it might
> be possible to enter an OSD container and kill the ceph-osd process without
> killing the container so that the tool could be run there, but there might
> not be other processes in OSD containers so that may be a non-starter.


We do have a recommended process for this (
https://docs.ceph.com/en/latest/cephadm/troubleshooting/#running-various-ceph-tools).
The high-level overview is: you stop the daemon and then run `cephadm shell`
with `--name <daemon-name>`, and it should spin up a container with all the
same files and mounts as if we were actually running the daemon, but with
an interactive bash session inside instead of the actual daemon process.
I was just messing with this earlier today while trying to add a WAL device
to an OSD (which wasn't working for another reason related to ceph-volume,
but the process for running the tools in general worked):

[root@vm-00 ~]#
[root@vm-00 ~]# systemctl stop ceph-50327e5e-3196-11f0-8285-52540034d386@osd.5.service
[root@vm-00 ~]#
[root@vm-00 ~]#
[root@vm-00 ~]# cephadm shell --name osd.5
Inferring fsid 50327e5e-3196-11f0-8285-52540034d386
Inferring config /var/lib/ceph/50327e5e-3196-11f0-8285-52540034d386/osd.5/config
Creating an OSD daemon form without an OSD FSID value
[ceph: root@vm-00 /]#
[ceph: root@vm-00 /]#
[ceph: root@vm-00 /]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-5 bluefs-bdev-new-wal --dev-target /dev/vdf
inferring bluefs devices from bluestore path
WAL device added /dev/vdf
[ceph: root@vm-00 /]#
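
Once you're done in there, just exit the shell and start the daemon back up,
along these lines (the unit name below is the one from my example above, so
substitute your own fsid and OSD id; `ceph orch daemon start osd.5` from a
host with an admin keyring should also do it):

[ceph: root@vm-00 /]# exit
[root@vm-00 ~]# systemctl start ceph-50327e5e-3196-11f0-8285-52540034d386@osd.5.service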




On Thu, May 15, 2025 at 12:14 PM Anthony D'Atri <a...@dreamsnake.net> wrote:

>
>
> > On May 15, 2025, at 1:22 AM, Florent Carli <fca...@gmail.com> wrote:
> >
> > Hello ceph team,
> >
> > I’ve been working with ceph for some time
>
> Wise choice!
>
>
> >
> > 1) Declarative workflows and Infrastructure as Code
> >
> > One of the advantages of ceph-ansible was the ability to define the
> > cluster state declaratively in YAML files, which aligned well with
> > Infrastructure-as-Code principles.
>
> Absolutely.  If it isn’t in git, it’s hearsay.
>
> > With cephadm, the process appears more imperative and CLI-driven,
> > which makes automation and reproducibility harder in comparison. Is
> > there a recommended approach to achieving a fully declarative
> > deployment model with cephadm? Or plans to support this more directly?
>
> Once you have a cluster bootstrapped, it can be 100% declarative.  There
> are various CLI commands so you can perform various tasks surgically, but
> it’s also entirely possible to maintain almost the entire cluster state in
> a YAML file:
>
>   ceph orch ls --export > myawesomecluster.yaml
>
>   # edit the file with your favorite emacs
>
>   ceph orch apply -i myawesomecluster.yaml --dry-run
>   ceph orch apply -i myawesomecluster.yaml
>
>
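Side note on the export: the file is just a series of service specs, one per
service, separated by `---`. A rough sketch of what might be in there (with
made-up placement values for illustration, not anything from a real cluster):

[root@vm-00 ~]# cat myawesomecluster.yaml
service_type: mon
placement:
  count: 3
---
service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
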
> This is the best of both worlds: the YAML file readily fits into revision
> control and peer review, and one can add commit hooks to validate syntax or
> to even perform a dry run for a sanity check.
>
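On the commit hook idea, a bare-bones .git/hooks/pre-commit could be as
simple as something like this (a sketch; it assumes the hook runs somewhere
with an admin keyring and PyYAML available, and that the spec file is named
as in the example above):

#!/bin/sh
# Reject the commit if the spec isn't parseable YAML...
python3 -c 'import yaml; list(yaml.safe_load_all(open("myawesomecluster.yaml")))' || exit 1
# ...or if the orchestrator's dry run rejects it.
ceph orch apply -i myawesomecluster.yaml --dry-run || exit 1
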
> There’s also a cephadm-ansible project out there that wraps some cephadm
> tasks in Ansible for playbook goodness.
>
> >
> > 2) Containerization vs. local dependencies
> >
> > Cephadm’s move to full containerization makes sense in principle,
> > especially to avoid system-level dependencies.
>
> This is so, so, so nice.  It also greatly facilitates the orchestrator’s
> ability to move daemons around.
>
> > However, in practice,
> > many operations (e.g., using ceph-bluestore-tool
>
> Using that tool, to be fair, should be rare.  Notably that tool requires
> that the OSD on which it operates not be running.  I would think it might
> be possible to enter an OSD container and kill the ceph-osd process without
> killing the container so that the tool could be run there, but there might
> not be other processes in OSD containers so that may be a non-starter.
>
> > or the python modules for Rados/rbd)
>
> I’m not familiar with those.
>
> > 3) Ceph packages for Debian Trixie on download.ceph.com
> >
> > Since I'm using debian, I'm also in the process of anticipating the
> > soon to come Debian 13 version (Trixie).
>
> I can’t speak authoritatively for the build folks, but you do note that
> this is incipient.  It’s not unusual for support for a new OS release to
> take some time for any given software, and for enterprises to let a new
> major OS release bake / shake out for a while before betting the farm on
> it.  As David Lindley wrote, “Rasta soon come”.
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
