The entries in "ceph orch ps" output are gathered by checking the contents
of the /var/lib/ceph/<cluster-fsid>/ directory on the host. Those
"cephadm.<hash>" files are deployed there as part of normal operation, and
they usually aren't reported in "ceph orch ps", since it should only report
entries that are directories rather than plain files. You could try
removing them anyway to see what happens (cephadm should just deploy
another copy, though).
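
If you try that, something along these lines (the fsid and hash are
placeholders; double-check the exact filename on the host first):

    # on srvcephprod07, as root; confirm the exact filename before removing
    ls -l /var/lib/ceph/<cluster-fsid>/cephadm.*
    rm /var/lib/ceph/<cluster-fsid>/cephadm.<hash>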
Either way, I'd be interested in what the contents of
/var/lib/ceph/<cluster-fsid>/ are on that srvcephprod07 node, and also what
"cephadm ls" prints on that node (you would have to put a copy of the
cephadm tool on the host to run it).
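
If the host doesn't already have a copy of the standalone cephadm script,
you can fetch one, e.g. for quincy (swap in the branch matching your
release):

    # grab the standalone cephadm script and run its daemon inventory
    curl --silent --remote-name --location \
        https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm ls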

As for the logs, the "cephadm.log" on the host only records what the
cephadm tool itself has done on that host, not what the cephadm mgr module
is doing. You could try "ceph mgr fail; ceph -W cephadm" and let it sit
for a bit to see if you get a traceback printed that way.
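
Roughly:

    # restart the mgr so the cephadm module re-runs from a clean state,
    # then watch the cephadm channel of the cluster log for tracebacks
    ceph mgr fail
    ceph -W cephadm

    # optionally, bump the module's cluster log level for more detail
    ceph config set mgr mgr/cephadm/log_to_cluster_level debug
    ceph -W cephadm --watch-debug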

On Fri, Mar 10, 2023 at 10:41 AM <xadhoo...@gmail.com> wrote:

> looking at ceph orch upgrade check
> I find out
>         },
>
> "cephadm.8d0364fef6c92fc3580b0d022e32241348e6f11a7694d2b957cdafcb9d059ff2":
> {
>             "current_id": null,
>             "current_name": null,
>             "current_version": null
>         },
>
>
> Could this lead to the issue?
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io