I'm a bit confused by 'un/managed'.
Earlier I had run, though only after 'zapping' the disks:
-> $ ceph orch apply osd --all-available-devices
so Ceph automatically re-deployed two "new" OSDs on that "failed" host.
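From what I've read, that apply command installs a persistent drive-group spec which then keeps consuming any eligible device it sees, which would explain the automatic re-deploy. If I understand the docs correctly, passing the unmanaged flag at apply time would have prevented that (flag taken from the docs, not tested by me):

-> $ ceph orch apply osd --all-available-devices --unmanaged=true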
Now everything is 'HEALTH_OK', yet still:
-> $ ceph orch ls | egrep osd
osd                            4  6m ago   -     <unmanaged>
osd.all-available-devices      2  3m ago   28m   *
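If I understand it correctly, each of those lines is backed by a service spec, and the managed/unmanaged state should be visible by exporting it. Here is roughly what I'd expect to see (the YAML below is my guess from the docs, not this cluster's actual output):

-> $ ceph orch ls osd --export
service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'
unmanaged: false
data_devices:
  all: true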

How does one square these two outputs? What is actually happening here? And where does the 'managed' state of an OSD service really live, and how do I set and show it?
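For what it's worth, the only toggle I've found so far is to export the spec, flip the flag, and re-apply it. A sketch under that assumption (the file name is mine):

-> $ ceph orch ls osd --export > osd-spec.yml
   # edit osd-spec.yml: set 'unmanaged: true' (or false)
-> $ ceph orch apply -i osd-spec.yml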

many thanks, L.