> On May 15, 2025, at 1:22 AM, Florent Carli <fca...@gmail.com> wrote:
> 
> Hello ceph team,
> 
> I’ve been working with ceph for some time

Wise choice!


> 
> 1) Declarative workflows and Infrastructure as Code
> 
> One of the advantages of ceph-ansible was the ability to define the
> cluster state declaratively in YAML files, which aligned well with
> Infrastructure-as-Code principles.

Absolutely.  If it isn’t in git, it’s hearsay.

> With cephadm, the process appears more imperative and CLI-driven,
> which makes automation and reproducibility harder in comparison. Is
> there a recommended approach to achieving a fully declarative
> deployment model with cephadm? Or plans to support this more directly?

Once you have a cluster bootstrapped, it can be 100% declarative.  There are 
CLI commands for performing individual tasks surgically, but it’s also entirely 
possible to maintain almost the entire cluster state in a YAML file:

  ceph orch ls --export > myawesomecluster.yaml

  # edit the file with your favorite emacs

  ceph orch apply -i myawesomecluster.yaml --dry-run
  ceph orch apply -i myawesomecluster.yaml
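
What comes back from the export is just a stream of service specs, one YAML 
document per service.  As a rough illustration (the service_id and placement 
below are made up, not taken from any real cluster):

  service_type: mon
  placement:
    count: 3
  ---
  service_type: osd
  service_id: default_drive_group
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true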


This is the best of both worlds: the YAML file readily fits into revision 
control and peer review, and one can add commit hooks to validate syntax or 
even perform a dry run as a sanity check.
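
As a sketch of the hook idea (file names and paths here are my assumptions, 
not anything cephadm prescribes), a pre-commit hook could be as small as:

  #!/bin/sh
  # .git/hooks/pre-commit -- sketch only; assumes the spec lives at the repo
  # root, PyYAML is installed, and this host has an admin keyring for the cluster.
  set -e
  # cheap YAML syntax check
  python3 -c 'import sys, yaml; list(yaml.safe_load_all(open(sys.argv[1])))' myawesomecluster.yaml
  # ask the orchestrator what it would do, without changing anything
  ceph orch apply -i myawesomecluster.yaml --dry-run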

There’s also a cephadm-ansible project out there that wraps some cephadm tasks 
in Ansible for playbook goodness.
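
From memory (do check that project’s README for the current playbook names and 
variables), host prep with it looks roughly like:

  ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=community"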

> 
> 2) Containerization vs. local dependencies
> 
> Cephadm’s move to full containerization makes sense in principle,
> especially to avoid system-level dependencies.

This is so, so, so nice.  It also greatly facilitates the orchestrator’s 
ability to move daemons around.

> However, in practice,
> many operations (e.g., using ceph-bluestore-tool

Using that tool, to be fair, should be rare.  Notably, it requires that the OSD 
on which it operates not be running.  One might be able to enter an OSD 
container and kill the ceph-osd process without killing the container so the 
tool could be run there, but OSD containers may not run any other processes, 
so that may be a non-starter.
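
That said, my understanding (worth checking against the cephadm docs for your 
release) is that you don’t have to enter the running OSD container at all: 
cephadm can start a separate tooling container with a given daemon’s config 
and devices mapped in.  A sketch, with osd.7 as a made-up example:

  ceph orch daemon stop osd.7    # stop just that daemon via the orchestrator
  cephadm shell --name osd.7     # maintenance container with osd.7's config/keyring
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-7
  exit
  ceph orch daemon start osd.7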

> or the python modules for Rados/rbd)

I’m not familiar with those.

> 3) Ceph packages for Debian Trixie on download.ceph.com
> 
> Since I'm using debian, I'm also in the process of anticipating the
> soon to come Debian 13 version (Trixie).

I can’t speak authoritatively for the build folks, but as you note, Trixie is 
still incipient.  It’s not unusual for support for a new OS release to take 
some time for any given software, and for enterprises to let a new major OS 
release bake / shake out for a while before betting the farm on it.  As David 
Lindley wrote, “Rasta soon come”.

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
