3. August 2017 16:37, "Burkhard Linke" 
<[email protected]> schrieb:

> Hi,
> 
> On 03.08.2017 16:31, [email protected] wrote:
> 
>> Hello!
>> 
>> I have purged my ceph and reinstalled it.
>> ceph-deploy purge node1 node2 node3
>> ceph-deploy purgedata node1 node2 node3
>> ceph-deploy forgetkeys
>> 
>> All disks configured as OSDs are physically in two servers.
>> Due to some restrictions I needed to reduce the total number of disks usable
>> as OSDs, which means I now have fewer disks than before.
>> 
>> The installation with ceph-deploy finished w/o errors.
>> 
>> However, if I start all OSDs (on any of the servers) I get some services 
>> with status "failed".
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> [email protected] loaded failed failed Ceph object storage daemon
>> 
>> All of these services belong to the previous installation.
>> 
>> If I stop one of the failed services and disable it, e.g.
>> systemctl stop [email protected]
>> systemctl disable [email protected]
>> the status is correct.
>> 
>> However, when I trigger
>> systemctl restart ceph-osd.target
>> these zombie services first enter the status "auto-restart" and then fail
>> again.
>> 
>> As a workaround I mask the zombie services, but this should not be the
>> final solution:
>> systemctl mask [email protected]
>> 
>> Question:
>> How can I get rid of the zombie services "[email protected]"?
> 
> If you are sure that these OSDs are "zombies", you can remove the
> dependencies for ceph-osd.target. On CentOS, these are the symlinks in
> /etc/systemd/system/ceph-osd.target.wants/ .
> 
> Do not forget to reload systemd afterwards. There might also be a nice 
> systemctl command for
> removing dependencies.
> 
> Regards,
> Burkhard
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

I had already looked for this directory, but couldn't find it. My OS is SLES 12 SP2.

This is the current content:
ld4464:~ # ll /etc/systemd/system/ceph*
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null
lrwxrwxrwx 1 root root    9 Aug  3 16:17 /etc/systemd/system/[email protected] -> /dev/null

/etc/systemd/system/ceph-mds.target.wants:
total 0
lrwxrwxrwx 1 root root 41 May 26 17:05 [email protected] -> /usr/lib/systemd/system/[email protected]

/etc/systemd/system/ceph.target.wants:
total 0
lrwxrwxrwx 1 root root 39 Aug  2 13:20 ceph-mds.target -> /usr/lib/systemd/system/ceph-mds.target
lrwxrwxrwx 1 root root 39 Aug  2 13:20 ceph-mon.target -> /usr/lib/systemd/system/ceph-mon.target
lrwxrwxrwx 1 root root 39 Aug  2 13:20 ceph-osd.target -> /usr/lib/systemd/system/ceph-osd.target

And this is the content of /usr/lib/systemd/system/ceph-osd.target:
ld4464:~ # cat /usr/lib/systemd/system/ceph-osd.target
[Unit]
Description=ceph target allowing to start/stop all [email protected] instances at once
PartOf=ceph.target
[Install]
WantedBy=multi-user.target ceph.target