Hello,

I have accidentally re-installed the OS on an OSD node of my Octopus 
cluster (managed by cephadm). Luckily the OSD data lives on a separate 
disk and was not affected by the re-install.

Now I have the following state:

    health: HEALTH_WARN
            1 stray daemon(s) not managed by cephadm
            1 osds down
            1 host (1 osds) down
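
In case it helps, the usual status commands both point at the
re-installed host:

# ceph health detail
# ceph osd tree down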

To fix that I tried to run:

# ceph orch daemon add osd ceph1f:/dev/sda
Created no osd(s) on host ceph1f; already created?
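
I suspect this is because the OSD's LVM volume is still intact on the
disk; listing it directly on the re-installed host seems to confirm the
data survived:

# cephadm ceph-volume lvm list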

Since the "daemon add" command did not work, I then tried:

# ceph cephadm osd activate ceph1f
no valid command found; 10 closest matches:
...
Error EINVAL: invalid command

That did not work either. So I wanted to ask: how can I "adopt" an 
existing OSD disk back into my cluster?
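
For what it's worth, "ceph cephadm osd activate" seems to only exist in
releases newer than Octopus, which would explain the EINVAL above. The
closest manual route I could find is to run cephadm directly on the
re-installed host, roughly like this (the angle-bracket values are
placeholders for my cluster fsid and the OSD id/fsid reported by
"ceph-volume lvm list"; I have not tried this yet):

# cephadm ceph-volume lvm list
# cephadm deploy --fsid <cluster-fsid> --name osd.<id> --osd-fsid <osd-fsid>

Would that be the correct and safe way on Octopus?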

Thanks for your help.

Regards,
Mabi
