Hi *,
I have a VM that I use frequently to test cephadm bootstrap
operations as well as upgrades; it's a single node with a few devices
attached. After successfully testing the upgrade to 19.2.3, I wanted
to test the bootstrap again, but removing the cluster with the
--zap-osds flag doesn't actually remove the VGs/LVs anymore. This
worked just fine up to 19.2.2.
This is the command I used:
ceph:~ # cephadm --image myregistry/ceph_v19.2.3 rm-cluster --fsid {FSID} --zap-osds --force
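
In case it's useful for comparison, this is roughly how I check for
and clean up the leftovers by hand afterwards (the /dev/vdb device
name is just a placeholder for one of the VM's attached disks):

ceph:~ # vgs
ceph:~ # lvs
ceph:~ # cephadm --image myregistry/ceph_v19.2.3 ceph-volume -- lvm zap --destroy /dev/vdb

After rm-cluster the vgs/lvs output still shows the ceph-* volume
groups and their osd-block LVs, which is what --zap-osds used to take
care of.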
There's not much to see in the cephadm.log after lines like this one:
2025-08-01 09:04:02,103 7f3bf5206b80 DEBUG systemctl: Removed
"/etc/systemd/system/ceph-6b501d0a-6ea3-11f0-a251-fa163e2ad8c5.target.wants/ceph-6b501d0a-6ea3-11f0-a251-fa163e2ad8c5@osd.0.service".
After that, the log mostly contains what looks like the inventory
output in JSON format; it's quite lengthy, so I'll spare you the
output here. I can attach it to a tracker, though, if there isn't one
yet. Has this already been reported?
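
For anyone who wants to compare, the output I'm referring to looks
like what you get from something along these lines (same image as
above):

ceph:~ # cephadm --image myregistry/ceph_v19.2.3 ceph-volume -- inventory --format json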
Thanks,
Eugen
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io