Hi,

A disk failed in our cephadm-managed 16.2.15 cluster. The affected OSD is
down, out, and stopped via cephadm, and I have also removed the failed
drive from the host's service definition. The cluster has finished
recovering, but the following warning persists:

[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
    daemon osd.11 on ceph02 is in error state
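
For reference, these are roughly the steps I took (reconstructed from
memory; the spec file name below is just a placeholder for our actual OSD
service spec):

    ceph osd out 11
    ceph orch daemon stop osd.11
    # edited the OSD service spec to drop the failed device, then:
    ceph orch apply -i osd_spec.yaml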

Is it possible to clear or suppress this warning without having to remove
the OSD completely?
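
I could presumably hide it for a while with something like:

    # mutes the alert for one week (the TTL is just an example)
    ceph health mute CEPHADM_FAILED_DAEMON 1w

but that only silences the check temporarily; I would rather clear the
daemon's error state properly.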

I would appreciate any advice or pointers.

Best regards,
Zakhar