At least ceph taught you the essence of doing proper testing first ;)
Because if you test your use case, you get either a positive or a
negative result, not a problem.
However, I do have to admit that ceph could be more transparent about
publishing testing and performance results. I have already discussed
this with them at a Ceph Day. It does not make sense to have to test
everything yourself, e.g. the LUKS overhead, putting the db/wal on SSD,
rbd performance on HDDs, etc. Those results could quickly show whether
ceph is a candidate or not.
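For anyone skimming the thread: the passphrase lookup quoted further down
can be sketched like this. The FSID, OSD path, and /dev/sdX device below
are made-up placeholders, and the cryptsetup step is my reading of what
ceph-volume does with the retrieved key, not something quoted from the
tool itself; only the config-key path format comes from the command in
the quoted mail.

```shell
#!/usr/bin/env bash
# Sketch: where ceph-volume keeps an encrypted OSD's LUKS passphrase.
# OSD_FSID and OSD_PATH are hypothetical placeholders for illustration.
OSD_FSID="d5a496bc-dcb9-4ad0-a12c-393d3200d2b6"
OSD_PATH="/var/lib/ceph/osd/ceph-0"

# The passphrase lives in the mon config-key store under this path:
KEYPATH="dm-crypt/osd/${OSD_FSID}/luks"
echo "${KEYPATH}"

# On a live cluster it would be fetched with the per-OSD lockbox keyring
# (same command as in the quoted mail), and then fed to cryptsetup to
# open the device (commented out here, needs a running cluster):
#
#   KEY=$(/usr/bin/ceph --cluster ceph \
#         --name "client.osd-lockbox.${OSD_FSID}" \
#         --keyring "${OSD_PATH}/lockbox.keyring" \
#         config-key get "${KEYPATH}")
#   echo -n "${KEY}" | cryptsetup --key-file - luksOpen /dev/sdX "${OSD_FSID}"
```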


-----Original Message-----
From: Kevin Myers [mailto:[email protected]] 
Cc: Janne Johansson; Marc Roos; ceph-devel; ceph-users
Subject: Re: [ceph-users] Re: Understanding what ceph-volume does, with 
bootstrap-osd/ceph.keyring, tmpfs

Tbh, ceph caused us more problems than it tried to fix. YMMV; good luck.


> On 22 Sep 2020, at 13:04, [email protected] wrote:
> 
> The key is stored in the ceph cluster config db. It can be retrieved by
> 
> KEY=`/usr/bin/ceph --cluster ceph --name \
>   client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring \
>   config-key get dm-crypt/osd/$OSD_FSID/luks`
> 
> September 22, 2020 2:25 AM, "Janne Johansson" <[email protected]> wrote:
> 
>> On Mon, 21 Sep 2020 at 16:15, Marc Roos <[email protected]> wrote:
>> 
>>> When I create a new encrypted osd with ceph volume[1]
>>> 
>>> Q4: Where is this luks passphrase stored?
>> 
>> I think the OSD asks the mon for it after auth:ing, so "in the mon
>> DBs" somewhere.
>> 
>> --
>> May the most significant bit of your life be positive.
>> _______________________________________________
>> ceph-users mailing list -- [email protected]
>> To unsubscribe send an email to [email protected]

