We have an older cluster that has been iterated on many times. It has
always been cephadm-deployed, but I am certain the OSD specification
used has changed over time, and I believe at some point it may have
been 'rm'd.
So here's our current state:
root@ceph02:/# ceph orch ls osd --export
service_type: osd
service_id: osd_spec_foo
service_name: osd.osd_spec_foo
placement:
  label: osd
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  db_slots: 12
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: unmanaged
service_name: osd.unmanaged
placement: {}
unmanaged: true
spec:
  filter_logic: AND
  objectstore: bluestore
root@ceph02:/# ceph orch ls
NAME              PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
crash                    7/7      10m ago    14M  *
mgr                      5/5      10m ago    7M   label:mgr
mon                      5/5      10m ago    14M  label:mon
osd.osd_spec_foo         0/7      -          24m  label:osd
osd.unmanaged            167/167  10m ago    -    <unmanaged>
The osd_spec_foo spec would normally match these devices, so we're
curious how we can get them 'managed' under that service. What's the
appropriate way to effectively 'adopt' these pre-existing OSDs into
the service specification we want them managed under?
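For context, we know we can preview what a spec would do before
committing to it by re-applying it with a dry run. A minimal sketch of
what we've been looking at (assuming the exported spec above is saved
locally as osd_spec_foo.yml; the filename is ours, not anything the
cluster requires):

# Export the current OSD specs to a file we can inspect/edit.
ceph orch ls osd --export > osd_spec_foo.yml

# Preview which hosts/devices the spec would match, without
# actually creating or changing any OSDs.
ceph orch apply -i osd_spec_foo.yml --dry-run

The dry run shows the spec matching the expected devices, but since
the OSDs already exist, nothing new is deployed and they remain under
osd.unmanaged.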
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]