Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 6, 2014, at 9:56 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 get them working. For example, in a devstack VM the only way I can get the
 iSCSI target to show the new size (after an lvextend) is to delete and 
 recreate
 the target, something jgriffiths said he doesn't want to support ;-).
 
 I know a method that can achieve this, but it may need the instance to be paused 
 first (during step 2 below), without detaching/reattaching. The steps are as 
 follows:
 
 1. Extend the LV.
 2. Refresh the size info in tgtd:
  a) tgtadm --op show --mode target # get the tid and lun_id properties of the 
 target related to the LV; the size property in the output is still the old 
 size from before the lvextend
  b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id}  # 
 delete the LUN mapping in tgtd
  c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
 --backing-store=/dev/cinder-volumes/{lv-name} # re-add the LUN mapping

Sure, this is my current workaround, but it's what I thought we *didn't* want 
to have to do.

  d) tgtadm --op show --mode target # now the size property in the output 
 is the new size
 *PS*:  
 a) During the procedure, the corresponding device on the compute node won't 
 disappear. But I am not sure what happens if the instance has I/O on this volume, so 
 the instance may need to be paused during this procedure.

Yeah, but pausing the instance isn't an online extend. As soon as the user 
can't interact with their instance, even briefly, it's an offline extend in my 
view.

 b) Maybe we can modify tgtadm to support an operation that just refreshes 
 the size of the backing store.

Maybe. I'd be interested in any thoughts/patches you have to accomplish this. :)

 
 3. Rescan the LUN info on the compute node: iscsiadm -m node --targetname 
 {target_name} -R

Yeah, right now as part of this work I'm adding two extensions to Nova. One to 
issue this rescan on the compute host and another to get the size of the block 
device so Cinder can poll until the device is actually the new size (not an 
ideal solution, but so far I don't have a better one).
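
For anyone skimming the thread, the quoted steps above put together look roughly like
this end to end (the tid, LUN and volume names below are just placeholder examples):

  # on the cinder-volume host
  lvextend -L 11G /dev/cinder-volumes/volume-0001
  tgtadm --op show --mode target      # find the tid and lun used for this volume
  tgtadm --op delete --mode logicalunit --tid 1 --lun 1
  tgtadm --op new --mode logicalunit --tid 1 --lun 1 --backing-store=/dev/cinder-volumes/volume-0001

  # on the compute host
  iscsiadm -m node --targetname iqn.2010-10.org.openstack:volume-0001 -R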

 
 I also
 haven't dived into any of those other limits you mentioned (nfs_used_ratio,
 etc.).
 
 So far we have focused on volumes that are based on a *block device*. In 
 this scenario, we must first extend the volume and then notify the 
 hypervisor; I think one of the preconditions is to make sure the extend 
 operation will not affect the I/O in the instance.
 
 However, there is another scenario which may be a little different. For 
 online-extending virtual disks (qcow2, sparse, etc.) whose backend storage is a 
 file system (ext3, nfs, glusterfs, etc.), the current implementation of QEMU 
 is as follows:
 1. QEMU drains all I/O
 2. *QEMU* extends the virtual disk
 3. QEMU resumes I/O
 
 The difference is that the *extend* work needs to be done by QEMU rather than 
 by the cinder driver.
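 
 For example (the domain and device names are just placeholders), the libvirt-level 
 call for such a file-backed disk would look something like:
 
  virsh blockresize instance-00000001 vdb 11G   # QEMU drains I/O, grows the qcow2/sparse file, then resumes I/O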
 
 Feel free to ping me on IRC (pdmars).
 
 I don't know your time zone; we can continue the discussion on IRC. :)

Good point. :) I'm in the US central time zone.

Paul

 

Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 7, 2014, at 7:55 AM, Paul Marshall paul.marsh...@rackspace.com
 wrote:

 
 
 3. Rescan the LUN info on the compute node: iscsiadm -m node --targetname 
 {target_name} -R
 
 Yeah, right now as part of this work I'm adding two extensions to Nova. One 
 to issue this rescan on the compute host and another to get the size of the 
 block device so Cinder can poll until the device is actually the new size 
 (not an ideal solution, but so far I don't have a better one).

Sorry, I should correct myself here: I'm adding one extension with two calls. 
One to issue the rescan on the compute host and one to get the blockdev size so 
Cinder can wait until it's actually the new size.
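
To give a concrete picture (device, target and size below are made up), the
compute-side check is roughly the shell equivalent of:

  TARGET=iqn.2010-10.org.openstack:volume-0001   # placeholder target IQN
  DEV=/dev/sdb                                   # placeholder device path on the compute host
  WANT=$((11 * 1024 * 1024 * 1024))              # requested size in bytes (11G in this example)
  until [ "$(blockdev --getsize64 "$DEV")" -ge "$WANT" ]; do
      iscsiadm -m node --targetname "$TARGET" -R   # rescan so the initiator notices the new LUN size
      sleep 2
  done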

 
 

Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-05 Thread Paul Marshall
Hey, 

Sorry I missed this thread a couple of days ago. I am working on a first-pass 
of this and hope to have something soon. So far I've mostly focused on getting 
OpenVZ and the HP LH SAN driver working for online extend. I've had trouble 
with libvirt+kvm+lvm so I'd love some help there if you have ideas about how to 
get them working. For example, in a devstack VM the only way I can get the 
iSCSI target to show the new size (after an lvextend) is to delete and recreate 
the target, something jgriffiths said he doesn't want to support ;-). I also 
haven't dived into any of those other limits you mentioned (nfs_used_ratio, 
etc.). Feel free to ping me on IRC (pdmars).

Paul


On Mar 3, 2014, at 8:50 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 @john.griffith. Thanks for your information.
  
 I have read the BP you mentioned ([1]) and have some rough thoughts about it.
  
 As far as I know, the corresponding online-extend command for libvirt is
 “blockresize”, and for QEMU, the implementation differs among disk formats.
  
 For a regular qcow2/raw disk file, QEMU will take charge of the 
 drain_all_io and truncate_disk actions, but for a raw block device, QEMU will 
 only check whether the *actual* size of the device is larger than the current size.
  
 I think the former needs more consideration: because the extend work is done 
 by libvirt, Nova may need to do this first and then notify Cinder. But if we 
 take the allocation limits of different Cinder backend drivers (such as quota, 
 nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow will be 
 more complicated.
  
 This scenario is not covered by item 3 of the BP ([1]), as it cannot simply 
 “just work” or be handled by notifying the compute node/libvirt after the volume 
 is extended.
  
 These regular qcow2/raw disk files are normally stored in file-system-based 
 storage; maybe the Manila project is more appropriate for this scenario?
  
  
 Thanks.
  
  
 [1]: 
 https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
  
 --
 zhangleiqiang
  
 Best Regards
  




[openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread Zhangleiqiang
Hi, stackers:

Libvirt/QEMU have supported online extend for multiple disk formats, 
including qcow2, sparse, etc., but Cinder currently only supports offline extend 
for volumes.

Offline extend forces the instance to be shut off or the volume 
to be detached. I think it would be useful to introduce the online-extend 
feature to Cinder, especially for the file-system-based drivers, e.g. nfs, 
glusterfs, etc.
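
To make the pain concrete, today an in-use volume has to go through something like 
the following (IDs are placeholders; at the moment cinder extend expects the volume 
to be in the available state):

  nova volume-detach <server-id> <volume-id>
  cinder extend <volume-id> 11        # grow the volume to 11 GB
  nova volume-attach <server-id> <volume-id>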

Are there any other suggestions?

Thanks.


--
zhangleiqiang

Best Regards




Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread John Griffith
On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 Hi, stackers:

 Libvirt/QEMU have supported online extend for multiple disk
 formats, including qcow2, sparse, etc., but Cinder currently only supports
 offline extend for volumes.

 Offline extend forces the instance to be shut off or the
 volume to be detached. I think it would be useful to introduce the
 online-extend feature to Cinder, especially for the file-system-based
 drivers, e.g. nfs, glusterfs, etc.

 Are there any other suggestions?

 Thanks.


 --
 zhangleiqiang

 Best Regards




Hi Zhangleiqiang,

So yes, there's a rough BP for this here: [1], and some of the folks from
the Trove team (pdmars on IRC) have actually started to dive into this.
 Last I checked with him there were some sticking points on the Nova side
but we should synch up with Paul, it's been a couple weeks since I've last
caught up with him.

Thanks,
John
[1]:
https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-03 Thread Zhangleiqiang
@john.griffith. Thanks for your information.

I have read the BP you mentioned ([1]) and have some rough thoughts about it.

As far as I know, the corresponding online-extend command for libvirt is 
blockresize, and for QEMU, the implementation differs among disk formats.

For a regular qcow2/raw disk file, QEMU will take charge of the drain_all_io 
and truncate_disk actions, but for a raw block device, QEMU will only check whether 
the *actual* size of the device is larger than the current size.
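
In other words (instance, device and size names here are only illustrative), the 
ordering differs: for a file-backed disk the single libvirt call is enough, while for 
a raw block device the backing device has to be grown before libvirt/QEMU is told:

  virsh blockresize instance-00000001 vdb 11G        # file-backed: QEMU drains I/O and grows the file itself
  lvextend -L 11G /dev/cinder-volumes/volume-0001    # block-backed: grow the device first...
  virsh blockresize instance-00000001 vdc 11G        # ...then QEMU just checks the device really is that big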

I think the former needs more consideration: because the extend work is done by 
libvirt, Nova may need to do this first and then notify Cinder. But if we take 
the allocation limits of different Cinder backend drivers (such as quota, 
nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow will be more 
complicated.
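
For reference, these are the sort of per-backend knobs I mean; the values below are, 
I believe, the defaults the NFS driver ships with, so treat them as illustrative only:

  nfs_used_ratio = 0.95       # stop allocating once a share is 95% used
  nfs_oversub_ratio = 1.0     # how far apparent (virtual) size may exceed actual available space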

This scenario is not covered by item 3 of the BP ([1]), as it cannot simply 
"just work" or be handled by notifying the compute node/libvirt after the volume is 
extended.

These regular qcow2/raw disk files are normally stored in file-system-based 
storage; maybe the Manila project is more appropriate for this scenario?


Thanks.


[1]: https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension

--
zhangleiqiang

Best Regards
