I cannot confirm that; I just redeployed three OSDs on a Reef test cluster. Can you provide more details about what exactly goes wrong? I'd start by inspecting 'ceph orch device ls' and 'cephadm ceph-volume inventory' (run locally on that host), and by checking the cephadm.log for details.
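For reference, something along these lines (the log path below is the default cephadm location, adjust if yours differs):

  # On a host with the admin keyring: what the orchestrator currently sees
  ceph orch device ls

  # On the affected host itself: what ceph-volume reports about the local disks
  cephadm ceph-volume inventory

  # Recent cephadm activity on that host
  tail -n 200 /var/log/ceph/cephadm.log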

Quoting Nigel Williams <[email protected]>:

We upgraded from Quincy to Reef; all went smoothly (thanks, Ceph developers!)

When adding OSDs, the process seems to have changed: the docs no longer
mention the OSD spec, and when I gave it a try it failed when it hit the
root drive (which has an active LVM on it). I expect I can add a filter
to avoid it.
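Something like this is what I had in mind; a minimal sketch only, assuming the data drives are spinners and the root drive is an SSD (the file name, service_id and rotational filter are placeholders to adjust for the actual hardware):

  # hypothetical spec file; filters are illustrative only
  cat > osd_spec.yml <<'EOF'
  service_type: osd
  service_id: default_drive_group
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1    # only spinning disks; would skip an SSD root drive with an active LVM
  EOF

  # preview what the orchestrator would do before applying
  ceph orch apply -i osd_spec.yml --dry-run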

But is the OSD spec approach (
https://docs.ceph.com/en/octopus/cephadm/drivegroups/) now
deprecated? Is the web interface now the preferred way?

thanks.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]

