If you look at
ceph orch ls osd --export
you'll see the details of both OSD specs you have in place, including an
"unmanaged: true" statement for "osd". The existing OSDs were
apparently deployed by that spec, while the newly built OSDs were
deployed by your "osd.all-available-devices" spec.
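For reference, the export usually looks roughly like the following; the
placement and device filters shown here are purely illustrative, not
taken from your cluster:

service_type: osd
service_name: osd
unmanaged: true          # this is the flag to watch for
spec:
  data_devices:
    all: true
---
service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true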
I'd recommend cleaning that up (and only using all-available-devices if
you're aware of the impact in production). You could set the "osd" spec
to managed again:
ceph orch set-managed osd
Then set the all-available-devices spec to unmanaged:
ceph orch set-unmanaged osd.all-available-devices
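If your Ceph release doesn't offer set-managed/set-unmanaged yet, you
can achieve the same result by editing the exported specs and
re-applying them (the file name below is just an example):

ceph orch ls osd --export > osd-specs.yaml
# edit osd-specs.yaml: drop "unmanaged: true" from the "osd" spec and
# add "unmanaged: true" to the "osd.all-available-devices" spec
ceph orch apply -i osd-specs.yaml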
Then remove the new OSDs again (and zap them) and let the remaining
managed spec recreate them. There's an easier way, editing the
unit.meta file directly, but this might be a good opportunity to learn
how to deal with the spec files.
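To sketch the removal step (the OSD ID, host and device are
placeholders here, check "ceph osd tree" and "ceph orch ps" for the
real ones; whether --zap is available depends on your release):

ceph orch osd rm <OSD_ID> --zap
ceph orch osd rm status
# if your release's "osd rm" has no --zap flag, zap separately:
ceph orch device zap <host> /dev/<device> --force

Once the devices show up as available again, the remaining managed
"osd" spec should pick them up and recreate the OSDs.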
Quoting lejeczek <pelj...@yahoo.co.uk>:
I'm a bit confused with 'un/managed'.
Earlier I had run (although I executed it after 'zapping' the disk):
-> $ ceph orch apply osd --all-available-devices
so ceph automatically re-deployed two "new" OSDs on that "failed" host.
Now everything is 'HEALTH_OK', yet I still see:
-> $ ceph orch ls | egrep osd
osd 4 6m ago - <unmanaged>
osd.all-available-devices 2 3m ago 28m *
How does one square these things together? What is happening here?
Where/how does the 'managed' handling of OSDs actually take place (and
how do I set/show it in the config)?
many thanks, L.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io