Hi,
I don't have an answer for you, but could you elaborate on what
exactly you are trying to do and what has worked so far? Which Ceph
version are you running? I understand that you want to clone your
whole cluster; how exactly are you trying to do that? Is this the
first OSD you're
I think I found where the wrong fsid is located (in the OSD's osdmap), but
there seems to be no way to change the fsid...
I tried ceph-objectstore-tool --op set-osdmap with an osdmap taken from the
monitor (ceph osd getmap), but no luck: still the old fsid (I couldn't find
a way to set the current epoch on the osdmap).
Could someone give me a hint?
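For context, the attempt described above could be sketched roughly like this. This is a hedged sketch, not a verified fix: it assumes the OSD id is 0, the default data path, that the OSD daemon is stopped, and that the JSON field name and tool flags are as remembered.

```shell
# Sketch only -- verify paths, IDs and flags first; the OSD must be stopped.
systemctl stop ceph-osd@0

# Fetch the monitors' current osdmap at its current epoch
# (field name "epoch" in the JSON output is an assumption).
EPOCH=$(ceph osd stat -f json | jq -r .epoch)
ceph osd getmap "$EPOCH" -o /tmp/osdmap."$EPOCH"

# Inject it into the offline OSD store at that epoch.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op set-osdmap --epoch "$EPOCH" --file /tmp/osdmap."$EPOCH"
```

Even if this succeeds, it is not clear it would rewrite the cluster_fsid the OSD reports, which is what the thread is about.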
Thanks Eugen for answering
Yes, it came from another cluster; I'm trying to move all OSDs from one
cluster to another (1 to 1), so I would like to avoid wiping the disks.
It's indeed a ceph-volume OSD; I checked the LVM label and it's correct:
# lvs --noheadings --readonly --separator=";" -o lv_tags
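For cross-checking, the fsid recorded in the LVM tags can be compared against what the cluster reports. A hedged sketch (assumes the OSD id is 0):

```shell
# The fsid the running cluster reports...
ceph fsid
# ...versus what ceph-volume sees for this OSD in the LVM tags
# (look for the ceph.cluster_fsid tag in the output).
ceph-volume lvm list 0
```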
Hi,
this OSD must have been part of a previous cluster, I assume.
I would remove it from crush if it's still there (check just to make
sure), wipe the disk, remove any traces like logical volumes (if it
was a ceph-volume lvm OSD) and if possible, reboot the node.
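If it helps, the cleanup steps above might look roughly like this. A hedged sketch only: osd.0 and /dev/sdX are placeholders, and zap is destructive.

```shell
# DESTRUCTIVE -- double-check the OSD id and the device before running.
ceph osd crush remove osd.0              # drop it from the CRUSH map (if present)
ceph auth del osd.0                      # remove its auth key
ceph osd rm osd.0                        # remove the OSD entry itself
ceph-volume lvm zap /dev/sdX --destroy   # remove the LVs and wipe the disk
```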
Regards,
Eugen
Quoting
Hello
I have an OSD which is stuck in booting state.
I found out that the OSD daemon's cluster_fsid is not the same as the actual
cluster fsid, which should explain why it does not join the cluster:
# ceph daemon osd.0 status
{
"cluster_fsid": "bb55e196-eedd-478d-99b6-1aad00b95f2a",
"osd_fsid":