I would say that wipe-on-delete is not necessary in most deployments.

Most storage backends exhibit the following behavior:
1. Delete volume A that has data on physical sectors 1-10
2. Create new volume B
3. Read from volume B before writing, which happens to map to physical
sector 5 - backend should return zeroes here, and not data from volume A
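To make that concrete, here's a toy simulation of the delete/reallocate sequence (the Backend class and its mapping scheme are mine, for illustration - not any real driver's code). The key point is that a well-behaved backend tracks which sectors a volume has actually written, and returns zeroes for everything else, even if the underlying physical sector still holds stale bytes from a deleted volume:

```python
SECTOR = 512

class Backend:
    """Toy thin-provisioned backend: logical-to-physical sector mapping."""

    def __init__(self, n_sectors):
        self.physical = bytearray(n_sectors * SECTOR)
        self.volumes = {}  # volume name -> {logical sector -> physical sector}

    def create(self, name):
        self.volumes[name] = {}

    def delete(self, name):
        # Unmap only -- physical sectors keep their stale contents.
        del self.volumes[name]

    def write(self, name, lsector, data, free_list):
        psector = free_list.pop()
        self.volumes[name][lsector] = psector
        self.physical[psector * SECTOR:(psector + 1) * SECTOR] = data

    def read(self, name, lsector):
        mapping = self.volumes[name]
        if lsector not in mapping:
            # Unwritten sector: return zeroes, never stale physical data.
            return bytes(SECTOR)
        psector = mapping[lsector]
        return bytes(self.physical[psector * SECTOR:(psector + 1) * SECTOR])

free = list(range(10))
backend = Backend(10)
backend.create("A")
backend.write("A", 0, b"s" * SECTOR, free)
backend.delete("A")            # A's bytes still sit on the physical media
backend.create("B")
print(backend.read("B", 0) == bytes(SECTOR))  # True: zeroes, not A's data
```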

If the backend doesn't provide this fairly standard behavior, data must be
wiped immediately.  Otherwise, the only remaining risk is physical access to
the media, and if physical security isn't adequate, customers shouldn't be
storing all their data there in the first place.  You could also run a
periodic job to wipe deleted volumes, which narrows the window of
vulnerability without making delete_volume take a ridiculously long time.
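A sketch of what such a periodic job could look like: delete_volume just queues the volume and returns immediately, and a background worker wipes queued volumes at its own pace. The names here (wipe_queue, zero_out) are illustrative, not Cinder APIs:

```python
import queue
import threading

wipe_queue = queue.Queue()

def delete_volume(name):
    # Fast path: unmap the volume and hand it to the wiper; the caller
    # doesn't wait for the (slow) zeroing to finish.
    wipe_queue.put(name)

def zero_out(name):
    # Stand-in for the actual dd-style zeroing of the backing store.
    print("wiped", name)

def wipe_worker(stop):
    while not stop.is_set():
        try:
            name = wipe_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        zero_out(name)
        wipe_queue.task_done()

stop = threading.Event()
worker = threading.Thread(target=wipe_worker, args=(stop,), daemon=True)
worker.start()

delete_volume("vol-1")   # returns instantly
delete_volume("vol-2")
wipe_queue.join()        # in a real service the worker just runs forever
stop.set()
```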

Encryption is a good option too, and it has the added benefit of protecting
the data before deletion (as long as your keys are protected...)
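The "throw away the key" idea (crypto-shredding) in miniature - note the toy keystream here (SHA-256 in counter mode) just stands in for a real cipher like AES, purely for illustration, never for production use:

```python
import hashlib
import os

def keystream(key, length):
    # Toy keystream: SHA-256 over key || counter. Illustration only.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)
plaintext = b"customer data"
ciphertext = xor_cipher(plaintext, key)

assert xor_cipher(ciphertext, key) == plaintext  # readable while the key lives
key = None  # "throw away the key": what's left on disk is unreadable garbage
```

Deletion then costs nothing on the data path - destroying a 32-byte key "wipes" terabytes at once.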

Bottom line - I too think the default in devstack should be to disable this
option, and I think we should consider making the default False in Cinder
itself.  This isn't the first time someone has asked why volume deletion
takes 20 minutes...

As for queuing backup operations and managing bandwidth for various
operations, ideally this would be done with a holistic view, so that for
example Cinder operations won't interfere with Nova, or different Nova
operations won't interfere with each other, but that is probably far down
the road.
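One building block for that kind of bandwidth management would be a shared token bucket that every background copy/wipe task draws from before doing I/O - this is just a sketch of the idea, not anything in Cinder or Nova today:

```python
import time

class TokenBucket:
    """Byte-budget throttle shared by background management operations."""

    def __init__(self, rate_bytes_per_sec, burst):
        self.rate = rate_bytes_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        # Block until nbytes of budget are available.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Cap management I/O at 10 MiB/s with a 1 MiB burst allowance.
bucket = TokenBucket(rate_bytes_per_sec=10 * 1024 ** 2, burst=1024 ** 2)
copied = 0
for _ in range(4):
    bucket.consume(256 * 1024)   # throttle each 256 KiB chunk
    copied += 256 * 1024
print(copied)  # 1048576
```

A cross-service version of this would need the "holistic view" above: one budget that Cinder and Nova both draw from, rather than per-service limits.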

Thanks,
Avishay


On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen <chris.frie...@windriver.com>
wrote:

> On 10/19/2014 09:33 AM, Avishay Traeger wrote:
>
>> Hi Preston,
>> Replies to some of your cinder-related questions:
>> 1. Creating a snapshot isn't usually an I/O intensive operation.  Are
>> you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
>> CPU usage of cinder-api spike sometimes - not sure why.
>> 2. The 'dd' processes that you see are Cinder wiping the volumes during
>> deletion.  You can either disable this in cinder.conf, or you can use a
>> relatively new option to manage the bandwidth used for this.
>>
>> IMHO, deployments should be optimized to avoid very long/intensive
>> management operations - for example, use backends with efficient
>> snapshots, use CoW operations wherever possible rather than copying full
>> volumes/images, disable wipe on delete, etc.
>>
>
> In a public-cloud environment I don't think it's reasonable to disable
> wipe-on-delete.
>
> Arguably it would be better to use encryption instead of wipe-on-delete.
> When done with the backing store, just throw away the key and it'll be
> secure enough for most purposes.
>
> Chris
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
