Hello List,
sometimes some OSDs get taken out for some reason (I am still looking
for the reason; my guess is some kind of overload). However, when I
try to restart them I get:

Nov 02 08:05:26 ceph05 bash[9811]: Error: No such container: ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df-osd.47
Nov 02 08:05:29 ceph05 bash[9811]: /usr/bin/docker: Error response from daemon: endpoint with name ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df-osd.47 already exists in network host.
Nov 02 08:05:29 ceph05 systemd[1]: [email protected]: Main process exited, code=exited, status=125/n/a
Nov 02 08:05:34 ceph05 systemd[1]: [email protected]: Failed with result 'exit-code'.
Nov 02 08:05:44 ceph05 systemd[1]: [email protected]: Service RestartSec=10s expired, scheduling restart.
Nov 02 08:05:44 ceph05 systemd[1]: [email protected]: Scheduled restart job, restart counter is at 5.
Nov 02 08:05:44 ceph05 systemd[1]: Stopped Ceph osd.47 for 5436dd5d-83d4-4dc8-a93b-60ab5db145df.
Nov 02 08:05:44 ceph05 systemd[1]: [email protected]: Start request repeated too quickly.
Nov 02 08:05:44 ceph05 systemd[1]: [email protected]: Failed with result 'exit-code'.
Nov 02 08:05:44 ceph05 systemd[1]: Failed to start Ceph osd.47 for 5436dd5d-83d4-4dc8-a93b-60ab5db145df.

I need to reboot the whole host to get the OSD back in. As far as I
can see this is some Docker problem?
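
From the error above it looks like Docker is holding on to a stale
endpoint for the old OSD container in the host network, even though
the container itself is gone. If I understand the Docker CLI
correctly, such a stale endpoint can be force-removed with something
like this (untested, container name copied from the log above):

root@ceph05:~# docker network disconnect --force host ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df-osd.47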

root@ceph05:~# docker ps | grep osd.47 => no hit
root@ceph05:~# docker network prune => does not solve the problem
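
Even if that clears the stale endpoint, I assume the unit also needs
its failure state reset before systemd will accept a start request
again (the log says "Start request repeated too quickly"), along
these lines (also untested):

root@ceph05:~# systemctl reset-failed [email protected]
root@ceph05:~# systemctl start [email protected]
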
Any hints on this?

Thanks,
Michael