No 'ceph-crash' is installed on the host, as far as I can tell:
-> $ rpm -qa \*ceph\*
centos-release-ceph-squid-1.0-1.el9s.noarch
python3-ceph-argparse-19.2.3-1.el9s.x86_64
libcephfs2-19.2.3-1.el9s.x86_64
python3-cephfs-19.2.3-1.el9s.x86_64
python3-ceph-common-19.2.3-1.el9s.x86_64
ceph-common-19.2.3-1.el9s.x86_64
cephadm-19.2.3-1.el9s.noarch
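(In a cephadm deployment the crash daemon runs from the container image rather
than from a host rpm, so - just a guess at a more telling cross-check - the
orchestrator view may matter more than rpm here:)
-> $ cephadm shell -- ceph orch ps | grep crash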
The cluster was deployed with:
-> $ cephadm bootstrap ...
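(the full option list is elided above; purely as an illustration of the general
shape, with the mon IP taken from /etc/hosts further down - not the exact
command I ran:)
-> $ cephadm bootstrap --mon-ip 10.1.1.61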
What is also "interesting" is this:
-> $ cephadm shell --name crash.$HOSTNAME
Inferring fsid 9f4f9dba-72c7-11f0-8052-525400519d29
Inferring config
/var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/crash.podster1.mine.priv/config
Using ceph image with id 'aade1b12b8e6' and tag 'v19'
created on 2025-07-17 19:53:27 +0000 UTC
quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee
Error: statfs
/var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/crash.podster1.mine.priv/keyring:
no such file or directory
and that happens on all nodes.
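(a way to see which daemon names cephadm itself knows on a node - just a
sketch, the output will of course differ per host:)
-> $ cephadm ls | grep '"name"'
-> $ ls -d /var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/crash*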
But I find:
/var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/crash/posted/
on each node - a path which _was_ created by the cluster
(as opposed to by a human).
I also find:
/var/lib/ceph/9f4f9dba-72c7-11f0-8052-525400519d29/crash.podster3/keyring
on each node, and it seems to contain the kind of keyring/config that
'cephadm shell --name ...' above was looking for.
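(to compare that short-hostname directory against what the cluster itself has
registered - assuming the crash daemons use client.crash.<hostname> auth
entities, which I have not verified here:)
-> $ cephadm shell -- ceph auth ls | grep crash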
No, I mentioned the failing mon only as a trace of the originating
issue - I did not think of it in terms of causality.
Is this some hostname / FQDN misconfiguration? But if so,
then it was/is the cluster/cephadm doing that on its own.
The OS resolver checks /etc/hosts first - DNS is also involved later -
and it contains:
-> $ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.61 podster1.mine.priv podster1
10.1.1.62 podster2.mine.priv podster2
10.1.1.63 podster3.mine.priv podster3
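(and, as a quick sanity check of what the OS itself reports - assuming
podster1 is the node I am on:)
-> $ hostname -s
-> $ hostname -f
-> $ getent hosts podster1.mine.priv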
I added the hosts to the cluster - and bootstrapped - using
their FQDNs.
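(the hostnames cephadm actually registered - short vs. FQDN - should be
visible with:)
-> $ cephadm shell -- ceph orch host ls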
thanks, L.