Hi,
I think the ceph-users list is more appropriate for your request, so I
have removed the dev list from the recipients.
If I understand correctly, you should still have one surviving
monitor, is that correct? The quickest way to restore the quorum would
probably be to edit the monmap (make a backup first) by removing the
dead monitors, and then start up the surviving monitor. This would
give you a single-mon cluster; from there you could redeploy the other
monitors and restore OSD functionality.
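Something along these lines should do it (a rough, untested sketch;
the mon ID and the names of the dead mons are placeholders, and on a
cephadm deployment you would run this inside
"cephadm shell --name mon.<id>" so the mon's data dir is mounted):

  # stop the surviving mon first, then extract and back up the monmap
  ceph-mon -i <surviving-mon-id> --extract-monmap /tmp/monmap
  cp /tmp/monmap /tmp/monmap.backup
  monmaptool --print /tmp/monmap            # check which mons are listed
  monmaptool --rm <dead-mon-1> /tmp/monmap  # repeat for each dead mon
  monmaptool --rm <dead-mon-2> /tmp/monmap
  ceph-mon -i <surviving-mon-id> --inject-monmap /tmp/monmap
  # start the surviving mon again, it should now form a quorum by itself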
> I tried reinstalling the system on one node and recreating a monitor
> using the data taken from /var/lib/ceph/osd/, but I don't see the
> OSDs. This is the situation…
Is this a cephadm-managed cluster? In that case the OSDs have their
data directories under /var/lib/ceph/{CLUSTER_FSID}/osd.{OSD_ID}/.
If it's not cephadm managed, you'll need to provide more information,
with details, logs, etc.
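If it is cephadm managed and the quorum is back, a rough sketch for
bringing the intact OSDs on a reinstalled host back would be something
like this (untested; <hostname> is a placeholder, and the host has to
be re-added to the orchestrator first, i.e. container runtime installed
and the cluster's SSH key deployed):

  # on the reinstalled host: check that ceph-volume still sees the OSD LVs
  cephadm ceph-volume lvm list
  # from a node with a working ceph CLI: let the orchestrator recreate the OSD daemons
  ceph cephadm osd activate <hostname>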
Regards,
Eugen
Quoting FRANCESCO NAPOLI <francesco.nap...@cnr.it>:
Good morning, Ceph was installed on Ubuntu Server 24.04 with
cephadm. The cluster consists of 4 nodes; the first one acted as a
monitor from the initial deployment, and as the others were added
they also became monitors in addition to being OSD hosts. These
machines had the major flaw of not having the operating system disk
in RAID, and after 20 days, 3 nodes failed. The OSD disks are
fine... I tried reinstalling the system on one node and recreating a
monitor using the data taken from /var/lib/ceph/osd/, but I don't
see the OSDs. This is the situation…
Thanks for the support.
--
Napoli Francesco
Gruppo Reti e Sistemi Informativi
Istituto di Fisiologia Clinica - CNR
Tel. 050.315.2344