Re: [openstack-dev] [cinder] dd performance for wipe in cinder
Have you looked at the volume_clear and volume_clear_size options in cinder.conf?

https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073

The default is to zero out the volume. You could try 'none' to see if that helps with performance.

Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development
Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com
3605 Hwy 52 N
Rochester, MN 55901-1407
United States

From: cosmos cosmos <cosmos0...@gmail.com>
To: openstack-dev@lists.openstack.org
Date: 10/11/2013 04:26 AM
Subject: [openstack-dev] dd performance for wipe in cinder

Hello. My name is Rucia, from Samsung SDS.

I am having trouble with cinder volume deletion. I am developing support for big data storage on LVM, but deleting a cinder LVM volume takes too much time because of dd. The cinder volume is 200GB, used for Hadoop master data. When I delete the cinder volume using 'dd if=/dev/zero of=$cinder-volume count=100 bs=1M', it takes about 30 minutes. Is there a better and quicker way to delete it?

Cheers,
Rucia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
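For reference, the relevant options in the linked sample file look roughly like this (values shown are the defaults as I read that sample; check the linked cinder.conf.sample for the authoritative text):

```ini
# Method used to wipe old volumes on delete
# ('zero' to overwrite with zeros, 'shred', or 'none' to skip wiping)
volume_clear = zero

# Size in MiB to wipe at the start of old volumes; 0 means wipe the
# entire volume
volume_clear_size = 0
```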
Re: [openstack-dev] [cinder] dd performance for wipe in cinder
On Fri, Oct 11, 2013 at 8:41 AM, Matt Riedemann <mrie...@us.ibm.com> wrote:

> Have you looked at the volume_clear and volume_clear_size options in
> cinder.conf?
> https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073
> The default is to zero out the volume. You could try 'none' to see if
> that helps with performance.

As Matt pointed out, there's an option to turn off secure delete altogether. The reason for the volume_clear setting (aka secure delete) is that since we're allocating volumes via LVM from a shared VG, there is the possibility that a user had a volume with sensitive data and deleted/removed the logical volume they were using.

If there was no encryption and no secure delete operation was performed, it is possible that another tenant, when creating a new volume from the volume group, could be allocated some of the blocks that the previous volume utilized, and could potentially inspect/read those blocks and obtain some of the other user's data.

To be honest, the options provided won't likely make this operation as fast as you'd like, especially when dealing with 200GB volumes. Depending on your environment you may want to consider using encryption, or possibly, if acceptable, using volume_clear=none.

John
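To see the difference volume_clear_size makes, here is a sketch using a file-backed stand-in for a logical volume (paths and sizes are illustrative; on a real system the dd target would be the LV's device node, and a partial wipe leaves later blocks readable):

```shell
# Create a 64 MiB file as a stand-in for a logical volume.
truncate -s 64M /tmp/fake-lv

# Full wipe: zero the whole "volume" (what volume_clear=zero does).
# conv=notrunc keeps the file at its original size.
dd if=/dev/zero of=/tmp/fake-lv bs=1M count=64 conv=notrunc 2>/dev/null

# Partial wipe: zero only the first 4 MiB (what volume_clear_size=4
# would do) -- proportionally faster, but not a full secure delete.
dd if=/dev/zero of=/tmp/fake-lv bs=1M count=4 conv=notrunc 2>/dev/null
```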
Re: [openstack-dev] [cinder] dd performance for wipe in cinder
On 10/11/2013 09:02 AM, John Griffith wrote:

> As Matt pointed out there's an option to turn off secure-delete
> altogether. The reason for the volume_clear setting (aka secure delete)
> is that since we're allocating volumes via LVM from a shared VG there
> is the possibility that a user had a volume with sensitive data and
> deleted/removed the logical volume they were using. If there was no
> encryption or if no secure delete operation were performed it is
> possible that another tenant when creating a new volume from the Volume
> Group could be allocated some of the blocks that the previous volume
> utilized and potentially inspect/read those blocks and obtain some of
> the other users data.

Sounds like we could use some kind of layer that will zero out blocks on read if they haven't been written by that user. That way the performance penalty would only affect people that try to read data from the volume without writing it first (which nobody should actually be doing).

Chris
Re: [openstack-dev] [cinder] dd performance for wipe in cinder
On 2013-10-11 10:50:33 -0600 (-0600), Chris Friesen wrote:

> Sounds like we could use some kind of layer that will zero out
> blocks on read if they haven't been written by that user.
[...]

You've mostly just described thin provisioning... reads to previously unused blocks are returned empty/all-zero and don't get allocated actual addresses on the underlying storage medium until written.

--
Jeremy Stanley
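The read-returns-zero behaviour described above can be demonstrated with any sparse allocation; a sparse file is a convenient stand-in for a thin-provisioned LV (paths are illustrative):

```shell
# A sparse file allocates no data blocks until written, and reads of
# unwritten ranges return zeros -- the same semantics a thin-provisioned
# logical volume gives you.
truncate -s 1G /tmp/thin-demo.img    # logical size 1 GiB, ~0 on disk
du -k /tmp/thin-demo.img             # actual allocation: (near) zero
head -c 8 /tmp/thin-demo.img | od -An -tx1   # unwritten blocks read as zeros
```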
Re: [openstack-dev] [cinder] dd performance for wipe in cinder
On Fri, Oct 11, 2013 at 11:05 AM, Jeremy Stanley <fu...@yuggoth.org> wrote:

> On 2013-10-11 10:50:33 -0600 (-0600), Chris Friesen wrote:
>> Sounds like we could use some kind of layer that will zero out
>> blocks on read if they haven't been written by that user.
> [...]
>
> You've mostly just described thin provisioning... reads to previously
> unused blocks are returned empty/all-zero and don't get allocated
> actual addresses on the underlying storage medium until written.
> --
> Jeremy Stanley

+1, which by the way was the number one driving factor for adding the thin provisioning LVM option in Grizzly.
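For reference, a sketch of how the thin provisioning LVM option is enabled per backend in cinder.conf (option names as I recall them from the Grizzly/Havana-era LVM driver; verify against your release's cinder.conf.sample):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# Use thin-provisioned logical volumes; unwritten extents read back as
# zeros, so zeroing on delete (volume_clear) is no longer needed to
# prevent cross-tenant data exposure.
lvm_type = thin
```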