[openstack-dev] [nova] Feature about QEMU Assisted online-extend volume
The online-extend volume feature aims to extend a Cinder volume that is in-use, and to grow the corresponding disk in the instance without stopping the instance. As background, John Griffith has proposed a blueprint ([1]) to provide a Cinder extension that enables extending in-use/attached volumes. After discussing with Paul Marshall, the assignee of this blueprint, I learned he currently focuses only on the OpenVZ driver, so I want to take on the libvirt/qemu work based on his current progress.

Whether a volume can be extended is determined by Cinder. However, if we want the capacity of the corresponding disk in the instance to grow, Nova must be involved. Libvirt provides the block_resize interface for this situation. For QEMU, the internal workflow of block_resize is as follows:

1) Drain all in-flight I/O of this disk from the instance.
2) If the backend of the disk is a regular file, such as raw, qcow2, etc., QEMU will do the *extend* work.
3) If the backend of the disk is a block device, QEMU will first check whether there is enough free space on the device, and only then do the *extend* work.

So I think online-extend volume will need QEMU assistance, similar to BP [2]. Do you think we should introduce this feature?

[1] https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
[2] https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
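For illustration, a minimal Python sketch of the Nova-side call using the libvirt-python binding; the helper names here (`gib_to_kib`, `online_extend_disk`) are hypothetical, not code from the blueprint:

```python
# Sketch only: libvirt's blockResize (which drives QEMU's block_resize)
# takes the new size in KiB by default. 'dom' would be a libvirt.virDomain
# and 'device' the target device name from the guest XML, e.g. 'vdb'.
def gib_to_kib(gib):
    """Convert GiB to the KiB units blockResize expects by default."""
    return gib * 1024 * 1024

def online_extend_disk(dom, device, new_size_gib):
    # QEMU drains in-flight I/O on the disk before growing it (step 1 above).
    dom.blockResize(device, gib_to_kib(new_size_gib))

if __name__ == "__main__":
    # Unit math only; no hypervisor is needed to check the conversion.
    print(gib_to_kib(2))  # 2097152
```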
[openstack-dev] [Cinder] About storing volume format info for filesystem-based drivers
Hi, all:

Currently, there are several filesystem-based drivers in Cinder, such as NFS, GlusterFS, etc. These drivers can potentially support volume formats other than raw, such as qcow2, sparse, etc. However, Cinder does not store the actual format of a volume and assumes all volumes are raw. This causes, or will cause, several problems:

1. For volume migration, the generic migration implementation in Cinder uses the dd command to copy the source volume to the destination volume. If the source volume is in qcow2 format, the instance will not get the right data from the volume after the destination volume is attached, because the info returned from Cinder states that the volume's format is raw rather than qcow2.
2. For volume backup, the backup driver also assumes that source volumes are raw; other formats are not supported.

Indeed, the GlusterFS driver already uses the "qemu-img info" command to detect the format of a volume. However, as Duncan's comment in [1] says, this auto-detection method has many possible error/exploit vectors: if the beginning of a raw volume happens to look like a qcow2 header, auto-detection will wrongly judge the volume to be qcow2.

I propose that the format info be added to the admin_metadata of volumes and enforced on all operations, such as create, copy, migrate and retype. The format would only be set/updated for filesystem-based drivers; other drivers would not carry this metadata and would default to raw.

Any advice?

[1] https://review.openstack.org/#/c/100529/

--
Best Regards
Trump.Zhang
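The exploit vector can be shown in a few lines. This is a deliberately naive sniffing function for demonstration only (qemu-img's real probing is more elaborate, but it is likewise header-based): a tenant who writes the qcow2 magic into a raw volume flips the detected format.

```python
# qcow2 images start with the four magic bytes 'Q' 'F' 'I' 0xfb.
QCOW2_MAGIC = b"QFI\xfb"

def detect_format(first_bytes: bytes) -> str:
    """Naive header sniffing; raw has no magic, so it is the fallback."""
    return "qcow2" if first_bytes.startswith(QCOW2_MAGIC) else "raw"

# A raw volume whose tenant-controlled data begins with the qcow2 magic:
crafted_raw_volume = QCOW2_MAGIC + b"\x00" * 28
print(detect_format(crafted_raw_volume))  # misdetected as qcow2
print(detect_format(b"\x00" * 32))        # a genuinely blank raw volume
```

Storing the format explicitly in admin_metadata (which tenants cannot write) sidesteps this entirely.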
Re: [openstack-dev] [nova] Question about modifying instance attributes (such as CPU QoS, disk QoS) without shutting down the instance
Such as the QoS attributes of vCPU, memory and disk, including IOPS limits, bandwidth limits, etc.

2014-04-08 23:04 GMT+08:00 Jay Pipes jaypi...@gmail.com:
> On Tue, 2014-04-08 at 08:30 +, Zhangleiqiang (Trump) wrote:
>> Hi, Stackers,
>> For Amazon, after calling the ModifyInstanceAttribute API, the instance
>> must be stopped. In fact, the hypervisor can adjust these attributes
>> online, but neither Amazon nor OpenStack supports it. So I want to know
>> your advice about introducing the capability of online-adjusting these
>> instance attributes.
>
> What kind of attributes?
>
> Best,
> -jay
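For the disk case, libvirt already exposes an online knob: `setBlockIoTune` (QEMU's block_set_io_throttle) applies I/O limits to a running domain. A sketch with hypothetical helper names, assuming the libvirt-python binding:

```python
# Sketch only: 'dom' would be a libvirt.virDomain, 'device' e.g. 'vda'.
# setBlockIoTune takes effect without stopping the instance.
def disk_qos_params(total_iops, total_bytes):
    """Build the tunables dict accepted by virDomain.setBlockIoTune."""
    return {"total_iops_sec": total_iops, "total_bytes_sec": total_bytes}

def set_disk_qos(dom, device, total_iops, total_bytes):
    dom.setBlockIoTune(device, disk_qos_params(total_iops, total_bytes), 0)

if __name__ == "__main__":
    # 500 IOPS and 10 MiB/s; only the dict construction runs here.
    print(disk_qos_params(500, 10 * 1024 * 1024))
```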
Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression
Thanks for your reply. Regex matching can be implemented in the database, and glob matches may not work well with paginate_query. However, the ReDoS issue you mentioned cannot be avoided as long as regex matching is used. I will think about it again. Thanks.

2014-04-28 19:04 GMT+08:00 Duncan Thomas duncan.tho...@gmail.com:
> Regex matching in APIs can be a dangerous source of DoS attacks - see
> http://en.wikipedia.org/wiki/ReDoS. Unless this is mitigated sensibly, I
> will continue to resist any cinder patch that adds them. Glob matches
> might be safer?
>
> On 26 April 2014 05:02, Zhangleiqiang (Trump) zhangleiqi...@huawei.com wrote:
>> Hi, all:
>> I see Nova allows searching instances by the name, ip and ip6 fields,
>> which can be a normal string or a regular expression:
>>
>>   [stack@leiqzhang-stack cinder]$ nova help list
>>   List active servers.
>>   Optional arguments:
>>     --ip ip-regexp               Search with regular expression match by IP address (Admin only).
>>     --ip6 ip6-regexp             Search with regular expression match by IPv6 address (Admin only).
>>     --name name-regexp           Search with regular expression match by name.
>>     --instance-name name-regexp  Search with regular expression match by server name (Admin only).
>>
>> I think this is also needed for Cinder when querying volumes, snapshots
>> and backups by name. Any advice?
>>
>> --
>> zhangleiqiang (Trump)
>> Best Regards
>
> --
> Duncan Thomas

--
Best Regards
Trump.Zhang
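The safer glob approach Duncan suggests can be sketched with the standard library: `fnmatch` patterns (`*`, `?`, `[seq]`) cannot express the nested quantifiers that cause catastrophic backtracking, so accepting globs instead of raw regexes removes the ReDoS vector. The volume names below are illustrative.

```python
import fnmatch

# Tenant-supplied pattern is interpreted as a shell-style glob, never as a
# raw regex, so there is no user-controlled backtracking behavior.
volumes = ["db-volume-01", "db-volume-02", "web-volume-01"]
matches = [v for v in volumes if fnmatch.fnmatch(v, "db-volume-*")]
print(matches)  # ['db-volume-01', 'db-volume-02']
```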
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
@Duncan

Thanks for your reply and help. :)

About "I expect that other than adding filtering on metadata to the API (if it isn't already there - I can't remember) that it will stay this way", I am sorry that I don't quite understand what you mean. Did you mean that using volume metadata was not the right way for the first situation I mentioned in the earlier mail?

2014-05-06 18:50 GMT+08:00 Duncan Thomas duncan.tho...@gmail.com:
> 'metadata' is a free-form key-value space for the tenant to use for their
> own purposes - it has no semantic meaning to cinder. I expect that other
> than adding filtering on metadata to the API (if it isn't already there -
> I can't remember) that it will stay this way.
>
> I take your point on the glance metadata. Glance has the concept of
> mutable and immutable metadata, maybe we can do something with that? I'll
> ask somebody who knows more about glance than me and get back to you...
>
> On 4 May 2014 10:33, Zhangleiqiang (Trump) zhangleiqi...@huawei.com wrote:
>> Hi, stackers:
>> I have some confusion about the respective use cases for a volume's
>> admin_metadata, metadata and glance_image_metadata.
>>
>> I know glance_image_metadata comes from the image the volume was created
>> from, and it is immutable. Glance_image_metadata is used for many cases,
>> such as billing, RAM requirements, etc. It also includes properties that
>> affect the use-pattern of the volume: for example, a volume with
>> hw_scsi_model=virtio-scsi is assumed to have the corresponding virtio-scsi
>> driver installed, and will be attached to a virtio-scsi controller (which
>> has higher performance) when booting from it with the scsi bus type.
>>
>> However, a volume is constantly having its blocks changed, which may
>> result in situations like the following:
>>
>> 1. If a volume was not created from an image, or was created from an image
>> without the hw_scsi_model property, but then has the virtio-scsi driver
>> manually installed, there is no way to make the volume use a virtio-scsi
>> controller when booting from it.
>> 2. If a volume was created from an image with the hw_scsi_model property,
>> and the virtio-scsi driver in the instance is later uninstalled, there is
>> no way to make the volume *not* use a virtio-scsi controller when booting
>> from it.
>>
>> For the first situation, is it suitable to set corresponding metadata on
>> the volume? Should we use metadata or admin_metadata? I notice that
>> volumes have attach_mode and readonly admin_metadata and empty metadata
>> after creation, and I can't find the respective use cases for
>> admin_metadata and metadata.
>>
>> For the second situation, what is the better way to handle it?
>>
>> Any advice?
>>
>> --
>> zhangleiqiang (Trump)
>> Best Regards
>
> --
> Duncan Thomas

--
Best Regards
Trump.Zhang
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
Thanks for your further instructions. I think the situations I mentioned are reasonable use cases. They are similar to the bootable-volume use cases: a user can create an empty volume and install an OS in it from an image, or create a bootable volume from an instance ([1]).

If volume metadata is not intended to be interpreted by Cinder or Nova as meaning anything, maybe Cinder needs to add support for updating some of a volume's glance_image_metadata, or introduce a new volume property like "bootable"? I don't think either of these two methods is good, though.

[1] https://blueprints.launchpad.net/cinder/+spec/add-bootable-option

2014-05-07 1:00 GMT+08:00 Duncan Thomas duncan.tho...@gmail.com:
> On 6 May 2014 14:46, Trump.Zhang zhangleiqi...@gmail.com wrote:
>> Did you mean using volume metadata was not the right way for the first
>> situation I mentioned in the earlier mail?
>
> Correct. Volume metadata is entirely for the tenant to use; it is not
> interpreted by cinder or nova as meaning anything.

--
Best Regards
Trump.Zhang
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
@Mike

Thanks, and sorry for the confusion. I know that volumes already have a bootable field. My question is that once a volume has been created, its glance_image_metadata is immutable. However, the volume is constantly having its blocks changed, so some properties of its glance_image_metadata can become stale. An example is the hw_scsi_model property of glance_image_metadata, which affects the SCSI controller used when booting from the volume.

2014-05-07 11:09 GMT+08:00 Mike Perez thin...@gmail.com:
> On 06:31 Wed 07 May , Trump.Zhang wrote:
>> [...]
>
> Volume already has a bootable field:
> https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L122
>
> --
> Mike Perez

--
Best Regards
Trump.Zhang
Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata
@Tripp,

Thanks for your reply and info. I am also considering whether it is proper to add support for updating the volume's glance_image_metadata to reflect the newest status of the volume. However, there may be alternative ways to achieve it:

1. Using the volume's metadata
2. Using the volume's admin_metadata

So I am wondering which is the most proper method.

2014-05-07 12:32 GMT+08:00 Tripp, Travis S travis.tr...@hp.com:
> A few days ago I entered a client blueprint on the same topic [1], but
> maybe it has a server-side dependency as well?
>
> When it comes to scheduling, as far as I have been able to tell from
> looking at the Nova code, the scheduler only gets volume_image_metadata
> and not the regular cinder metadata. So, if you want to add some
> volume_image_metadata for scheduler filtering, or for passing compute
> driver options through after creating a volume, there doesn't seem to be
> a way to do this from python-cinderclient. If I'm wrong, please correct me.
>
> [1] https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
>
> -----Original Message-----
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Tuesday, May 06, 2014 9:10 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use
> cases for volume's admin_metadata, metadata and glance_image_metadata
>
> On 06:31 Wed 07 May , Trump.Zhang wrote:
>> [...]
>
> Volume already has a bootable field:
> https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L122
>
> --
> Mike Perez

--
Best Regards
Trump.Zhang
[openstack-dev] [Cinder] Question about response code of API
Hi, all:

I find that a lot of methods in cinder.api.contrib.* return a 202 code instead of 200, even when they only involve database operations and are not async processes. For example, cinder.api.contrib.types_extra_specs.VolumeTypeExtraSpecsController:delete, cinder.api.contrib.volume_actions.VolumeActionsController:_reserve, etc.

From the HTTP/1.1 status code definitions [1], 202 means "Accepted", i.e. the request has been accepted for processing, but the processing has not been completed.

Are these response codes returned by mistake?

--
Best Regards
Trump.Zhang
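The distinction in question can be stated as a tiny decision rule (a sketch, not Cinder code; `status_for` is a hypothetical helper): 200 says the work completed with this response, while 202 only promises the request was accepted for later processing, e.g. when the API hands the work off to cinder-volume via an RPC cast.

```python
# Per HTTP/1.1: 202 is only appropriate when processing continues after
# the response is sent; a synchronous DB-only operation has completed, so
# 200 is the honest answer.
def status_for(operation_is_async: bool) -> int:
    return 202 if operation_is_async else 200

print(status_for(False))  # e.g. a synchronous extra-specs delete: 200
print(status_for(True))   # e.g. work dispatched to cinder-volume: 202
```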
Re: [openstack-dev] [Cinder] Question about storage backend capacity expansion
Thanks for your reply and advice. Manually updating the database can achieve it; however, I don't think that is reasonable, especially in a production environment.

2014-05-14 23:43 GMT+08:00 Vishvananda Ishaya vishvana...@gmail.com:
> On May 14, 2014, at 12:14 AM, Zhangleiqiang (Trump)
> zhangleiqi...@huawei.com wrote:
>> Hi, all:
>> I have a requirement in my OpenStack environment, which initially uses
>> one LVMISCSI backend. With usage, the storage has become insufficient,
>> so I want to add an NFS backend to the existing Cinder. There is only a
>> single cinder-volume service in the environment, so I need to configure
>> Cinder to use multi-backend, with the initial LVMISCSI storage and the
>> newly added NFS storage both used as backends.
>>
>> However, the existing volumes on the initial LVMISCSI backend will not
>> be handled normally after switching to multi-backend, because the host
>> of the existing volumes will be considered down. I know that the migrate
>> and retype APIs aim to handle backend capacity expansion, but neither of
>> them can be used in this situation.
>>
>> I think the use case above is common in production environments. Is
>> there an existing method to achieve it? Currently, I manually updated
>> the host value of the existing volumes in the database, and the existing
>> volumes could then be handled normally.
>
> While the above use case may be common, you are explicitly changing the
> config of the system, and requiring a manual update of the database in
> this case seems reasonable to me.
>
> Vish
>
>> Thanks.
>>
>> --
>> zhangleiqiang (Trump)
>> Best Regards

--
Best Regards
Trump.Zhang
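For reference, a sketch of the multi-backend cinder.conf being discussed, assuming the option names of that era (backend section names like `lvm-1`/`nfs-1` and the shares file path are illustrative):

```ini
[DEFAULT]
enabled_backends = lvm-1,nfs-1
# Pre-existing volumes were created with host = <hostname>; with
# multi-backend enabled the service reports host = <hostname>@lvm-1,
# which is why those volumes appear to belong to a "down" host until
# their 'host' column is updated in the database.

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[nfs-1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = NFS
nfs_shares_config = /etc/cinder/nfs_shares
```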