Thanks Kyle.
I've deliberately not provided the entire picture.  I'm aware of memory 
residency and of in-flight encryption issues.  These are less of a problem 
for us.
For me, it's a question of finding a reliably encrypted, OSS, at-rest setup 
that involves Ceph and preferably ZFS for flexibility, along the lines 
sketched below.
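
Concretely, a rough and untested sketch of the layering I have in mind for 
option (1) from my earlier mail (quoted below); the pool, image, customer 
and dataset names are placeholders:

  # Per-customer RBD image, sized to that customer's quota
  rbd create --size 102400 rbd/customer1

  # Map it on the access VM (the device appears as e.g. /dev/rbd0 or
  # /dev/rbd/rbd/customer1, depending on udev)
  rbd map rbd/customer1

  # LUKS container on top of the RBD device; only the customer knows the passphrase
  cryptsetup luksFormat /dev/rbd/rbd/customer1
  cryptsetup luksOpen /dev/rbd/rbd/customer1 customer1-crypt

  # zpool inside the LUKS container, then one ZFS filesystem per "bin" of data
  zpool create customer1 /dev/mapper/customer1-crypt
  zfs create customer1/projects
  zfs create customer1/backups

Since the zpool only ever sees the decrypted mapper device, everything that 
reaches the OSDs is ciphertext.
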
M
On 2014 Mar 10, at 21:04, Kyle Bader wrote:

>> Ceph is seriously badass, but my requirement is to create a cluster in 
>> which I can host my customers' data in separate areas that are 
>> independently encrypted, with passphrases to which we as cloud admins 
>> have no access.
>> 
>> My current thoughts are:
>> 1. Create an OSD per machine stretching over all installed disks, then 
>> create a user-sized block device per customer.  Map this block device on 
>> an access VM and create a LUKS container inside it, followed by a zpool, 
>> and then allow each customer to create separate bins of data as separate 
>> ZFS filesystems in the container, which is really a block device striped 
>> across the OSDs.
>> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt 
>> key in a form we ourselves cannot access, such as a PGP-encrypted file 
>> protected by a passphrase that only the customer knows.
> 
>> My questions are:
>> 1. What are people's comments regarding this problem (irrespective of my 
>> thoughts)?
> 
> What is the threat model that leads to these requirements? The story
> "cloud admins do not have access" is not achievable through technology
> alone.
> 
>> 2. Which would be the most efficient of (1) and (2) above?
> 
> In the case of #1 and #2, you are only protecting data at rest. With
> #2 you would need to decrypt the key to open the block device, and the
> key would remain in memory until the volume is unmounted (and the cloud
> admin could read it from memory during that window). This means #2 is
> only safe so long as you never mount the volume, so its utility is
> rather limited (archiving, perhaps). Neither of these schemes buys you
> much more than the encryption handling already provided by
> ceph-disk-prepare (dm-crypted OSD data/journal volumes), and the key
> management problem becomes more acute, e.g. per tenant.
> 
> -- 
> 
> Kyle
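
For reference, a minimal sketch of the key handling described in option (2) 
above, assuming GnuPG for the wrapping; device and file names are 
placeholders and, as Kyle points out, the unwrapped key still sits in kernel 
memory for as long as the device is open:

  # Generate a random dm-crypt key and wrap it with a passphrase that only
  # the customer knows (symmetric GnuPG encryption); no plain copy is kept
  dd if=/dev/urandom bs=64 count=1 | gpg --symmetric --output customer1.key.gpg

  # At attach time the customer supplies the passphrase; the plain key is
  # piped straight into cryptsetup and never written to disk
  gpg --decrypt customer1.key.gpg | cryptsetup --batch-mode luksFormat /dev/vdb --key-file=-
  gpg --decrypt customer1.key.gpg | cryptsetup luksOpen /dev/vdb customer1-osd --key-file=-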
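
And for comparison, the ceph-disk-prepare route Kyle mentions; again only a 
sketch, with the --dmcrypt options assumed from the ceph-disk version in 
use, and with keys managed per OSD on the host rather than per tenant:

  # Prepare an OSD with dm-crypted data and journal volumes; ceph-disk
  # generates the keys itself and stores them in the given key directory
  ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb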
