> There could be millions of tenants. Looking deeper at the docs, it looks 
> like Ceph prefers to have one OSD per disk.  We're aiming at Backblaze-style 
> pods, so will be looking at 45 OSDs per machine, across many machines.  I 
> want to separate the tenants and encrypt each tenant's data separately. 
> The encryption will be provided by us, but I was originally intending to 
> have passphrase-based encryption, and use programmatic means to hash the 
> passphrase and/or encrypt the key using that same passphrase.  This way, 
> we wouldn't be able to access the tenant's data, or the key, without the 
> passphrase, although we'd still be able to store both.
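The scheme described above — the provider stores a wrapped key and a passphrase hash but can recover neither without the tenant's passphrase — can be sketched roughly as follows. This is a minimal illustration, not a production design: the function names are mine, and the HMAC-counter keystream stands in for what should be a proper AEAD cipher (e.g. AES-GCM) from a vetted library.

```python
import hashlib
import hmac
import os

def _keystream(kek: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as an illustrative keystream only;
    # a real deployment would use AES-GCM or similar from a vetted library.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(kek, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_key(data_key: bytes, passphrase: str) -> dict:
    """Wrap a per-tenant data key under a passphrase-derived KEK.

    The provider can store the returned blob, but without the
    passphrase it cannot recover the data key from it.
    """
    salt = os.urandom(16)
    nonce = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    wrapped = bytes(a ^ b for a, b in
                    zip(data_key, _keystream(kek, nonce, len(data_key))))
    # Authentication tag lets us detect a wrong passphrase or tampering.
    tag = hmac.new(kek, nonce + wrapped, hashlib.sha256).digest()
    return {"salt": salt, "nonce": nonce, "wrapped": wrapped, "tag": tag}

def unwrap_key(blob: dict, passphrase: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              blob["salt"], 200_000)
    expected = hmac.new(kek, blob["nonce"] + blob["wrapped"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(blob["tag"], expected):
        raise ValueError("wrong passphrase or corrupted blob")
    return bytes(a ^ b for a, b in
                 zip(blob["wrapped"],
                     _keystream(kek, blob["nonce"], len(blob["wrapped"]))))
```

Note that this only moves the problem: the provider escapes custody of the plaintext key, but the tenant's passphrase becomes the single point of recovery, and at millions of tenants that is exactly the key-management burden discussed below.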


The way I see it you have several options:

1. Encrypted OSDs

Preserves confidentiality in the event someone gets physical access to
a disk, whether through theft or RMA. Requires the tenant to trust the
provider.

vm
rbd
rados
osd     <-here
disks

2. Whole-disk VM encryption

Preserves confidentiality in the event someone gets physical access to
a disk, whether through theft or RMA.

Possible key-custody arrangements:

tenant: key/passphrase
provider: nothing

tenant: passphrase
provider: key

tenant: nothing
provider: key

vm <--- here
rbd
rados
osd
disks

3. Encryption further up the stack (at the application, perhaps?)
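For a concrete picture of what option #3 looks like, here is a rough sketch of client-side object encryption: each tenant holds a key, objects are encrypted before they reach the backing store, so the store (rados, in this architecture) only ever sees ciphertext. All names here are illustrative, and the HMAC-counter keystream is a stand-in for a real AEAD cipher such as AES-GCM.

```python
import hashlib
import hmac
import os

def _stream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative HMAC-SHA256 counter-mode keystream; production code
    # should use an authenticated cipher from a vetted crypto library.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_object(tenant_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt an object client-side, before it is handed to storage."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _stream(tenant_key, nonce, len(plaintext))))
    return nonce + ct  # prepend nonce so decryption is self-contained

def decrypt_object(tenant_key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in
                 zip(ct, _stream(tenant_key, nonce, len(ct))))

# A dict standing in for the backing object store: it only sees ciphertext.
store = {}
key_alice = os.urandom(32)
store[("alice", "obj1")] = encrypt_object(key_alice, b"tenant data")
```

The advantage over #1/#2 is that data stays encrypted end to end, even while a volume is attached, at the cost of pushing key handling into the application layer.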

To me, #1 and #2 are identical except in the case of #2 when the rbd
volume is not attached to a VM. Block devices attached to a VM and
mounted will be decrypted, making the encryption useful only for
defending against unauthorized access to the storage media. With a
different key per VM, and potentially millions of tenants, you now
have a massive key escrow/management problem that only buys you a bit
of additional security while block devices are detached. Sounds like a
crappy deal to me; I'd go with either #1 or #3.

-- 

Kyle
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com