On 8/14/20 11:52 AM, Eugen Block wrote:
> Usually it should also accept the device path (although I haven't tried that 
> in Octopus yet), you could try `ceph-volume lvm prepare --data 
> /path/to/device` first and then activate it. If that doesn't work, try to 
> create a vg and lv and try it with LVM syntax (ceph-volume lvm prepare --data 
> {vg}/{lv}). I don't have a cluster at hand right now so I can't double check. 
> But I find it strange that it doesn't accept the device path, maybe someone 
> with more Octopus experience can chime in.
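
(For anyone following along, I read the LVM suggestion above as roughly the following; this is just a sketch, and the device/vg/lv names are placeholders I made up:)

sudo pvcreate /dev/sdX                                # /dev/sdX is a placeholder device
sudo vgcreate ceph-vg /dev/sdX                        # ceph-vg and osd-lv are made-up names
sudo lvcreate -l 100%FREE -n osd-lv ceph-vg           # use the whole device for one LV
sudo ceph-volume lvm prepare --data ceph-vg/osd-lv
sudo ceph-volume lvm activate --all                   # or activate the specific id/fsid it reports
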
So I still wasn't able to add the device with the above methods; however, I did 
figure out which command (and specifically which options) was causing the 
problem. When I run `ceph osd new` without the -n and -k options, it 
immediately returns with a new OSD ID:

user@node1:~$ UUID=$(uuidgen)
user@node1:~$ OSD_SECRET=$(ceph-authtool --gen-print-key)
user@node1:~$ echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | sudo ceph osd new $UUID -i -
7
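
(Side note: the OSD IDs these test runs allocate can be removed again afterwards so they don't linger in the map; I believe something like this does it:)

sudo ceph osd purge 7 --yes-i-really-mean-it          # 7 is the ID returned above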

If I run it with both the -n and -k options, it hangs the same way as when I 
call `ceph-volume` and eventually times out with the same error:

user@node1:~$ UUID2=$(uuidgen)
user@node1:~$ OSD_SECRET2=$(ceph-authtool --gen-print-key)
user@node1:~$ echo "{\"cephx_secret\": \"$OSD_SECRET2\"}" | sudo ceph osd new $UUID2 -i - -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring
[errno 110] RADOS timed out (error connecting to the cluster)
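
(In case it's relevant, I assume the key in the bootstrap keyring can be compared against what the mons actually have, e.g. with the two commands below, the second run from a node that has an admin keyring:)

sudo ceph-authtool -l /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo ceph auth get client.bootstrap-osd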

If I run it with just the -n option, it immediately comes back with an error:

user@node1:~$ UUID3=$(uuidgen)
user@node1:~$ OSD_SECRET3=$(ceph-authtool --gen-print-key)
user@node1:~$ echo "{\"cephx_secret\": \"$OSD_SECRET3\"}" | sudo ceph osd new $UUID3 -i - -n client.bootstrap-osd
[errno 2] RADOS object not found (error connecting to the cluster)
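
(My guess is that without -k the client looks for the bootstrap-osd key in the default locations, e.g. /etc/ceph/ceph.client.bootstrap-osd.keyring, and doesn't find one; a quick way to see which keyrings actually exist on the node:)

ls -l /etc/ceph/ /var/lib/ceph/bootstrap-osd/         # just listing where keyrings might live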

-- 
Thanks,
Joshua Schaeffer

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
