Hello! I have purged my Ceph installation and reinstalled it:

ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
All disks configured as OSDs are physically located in two servers. Due to some restrictions I had to reduce the total number of disks usable as OSDs, which means I now have fewer disks than before. The installation with ceph-deploy finished without errors.

However, when I start all OSDs (on either server), some services end up in status "failed":

[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon
[email protected] loaded failed failed Ceph object storage daemon

All of these services belong to the previous installation. If I stop and disable one of the failed services, e.g.

systemctl stop [email protected]
systemctl disable [email protected]

the status is correct. However, when I trigger

systemctl restart ceph-osd.target

these zombie services first go into status "auto-restart" and then "fail" again.

As a workaround I have to mask the zombie services, but this should not be the final solution:

systemctl mask [email protected]

Question: How can I get rid of the zombie services "[email protected]"?

THX

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
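The kind of cleanup I would expect to need looks roughly like the sketch below. It is a dry run that only prints the commands instead of executing them, and the OSD ids 4-10 are hypothetical placeholders; the real ids are the ones systemctl reports as failed:

```shell
# Dry-run sketch: build and print the cleanup commands instead of
# running them. OSD ids 4-10 are hypothetical placeholders -- substitute
# the ids of the units that systemctl actually shows as "failed".
stale_ids="4 5 6 7 8 9 10"
cmds=""
for id in $stale_ids; do
    cmds="${cmds}systemctl stop ceph-osd@${id}.service
systemctl disable ceph-osd@${id}.service
"
done
# reset-failed drops the "failed" state of dead units; daemon-reload
# makes systemd re-read its configuration after units are removed
cmds="${cmds}systemctl reset-failed
systemctl daemon-reload
"
printf '%s' "$cmds"
```

Whether stop/disable plus reset-failed and daemon-reload is actually enough here is exactly my question; masking should not be necessary.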
