Re: [openstack-dev] [nova][cinder] Extending attached disks

2015-08-21 Thread Taylor . Bertie
For RBDs it IS as simple as making calls to virsh after an attached volume is 
extended. I've done it half a dozen times with no intermediate steps and it 
works. I'd love to test it more robustly, obviously, but unfortunately I've got 
bigger fish to fry with BAU.

iSCSI might involve more work, I acknowledge that, but there is nothing wrong 
with putting the framework in place now and throwing an unsupported volume 
type error message if we haven't worked out the best method for doing this for 
a particular type.

The way I see it, the only ones that are going to cause us problems are the 
ones that require the host to suspend the disk before operating on it. In other 
words, if the notification to the host can't be done atomically, that could 
definitely cause issues.

However, all the examples I have seen implemented in OpenStack volumes thus far 
(iSCSI, RBD) are either atomic or require no notification at all (in the case of 
RBD). Even multipath is atomic (granted, it's multiple chained atomic 
operations, but still, they won't be left in an irrecoverable failure state).

Yes, the page you linked does warn about the issue when there is no path to the 
device, but I think that if you're trying to resize a volume the compute node 
can't connect to, you've got bigger problems (that is to say, throwing an 
error here is perfectly reasonable).

Regards,

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz 


-Walter A. Boring IV walter.bor...@hp.com wrote: -
To: openstack-dev@lists.openstack.org
From: Walter A. Boring IV walter.bor...@hp.com
Date: 2015-08-22 7:13
Subject: Re: [openstack-dev] [nova][cinder] Extending attached disks

This isn't as simple as making calls to virsh after an attached volume 
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size 
first, or virsh will simply never see it.

For iSCSI/FC volumes you need to issue a rescan on the bus (iSCSI 
session, FC fabric), and when multipath is involved it gets quite 
a bit more complex.

This leads to one of the sticking points with doing this at all: when 
cinder extends the volume, it needs to tell nova that it has happened, 
and nova (or something on the compute node) will have to issue the 
correct commands in sequence for it all to work.

You'll also have to consider multi-attached volumes as well, which adds 
yet another wrinkle.

A good quick source for some of the commands and procedures that are 
needed can be found here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/online-logical-units.html


You can see that volumes with multipath require a lot of hand holding 
to be done correctly.  It's non-trivial.  I see this as being very error 
prone, and any failure
in the multipath process could lead to big problems :(
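
To give a flavour of that hand holding, the multipath sequence that document 
describes is roughly: rescan every underlying path device, then ask multipathd 
to resize the map (mpathN, sdX and sdY below are placeholders):

$ echo 1 | sudo tee /sys/block/sdX/device/rescan    # rescan each path backing the map
$ echo 1 | sudo tee /sys/block/sdY/device/rescan
$ sudo multipathd -k"resize map mpathN"             # resize the multipath map itself
$ sudo multipath -ll mpathN                         # verify the map shows the new size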

Walt
 Hi everyone,

 Apologies for the duplicate send; it looks like my mail client doesn't create 
 very clean HTML messages. Here is the message in plain-text. I'll make sure 
 to send to the list in plain-text from now on.

 In my current pre-production deployment we were looking for a method to live 
 extend attached volumes to an instance. This was one of the requirements for 
 deployment. I've worked with libvirt hypervisors before, so it didn't take 
 long to find a workable solution. However, I'm not sure how transferable this 
 will be across deployment models. Our deployment model is using libvirt for 
 nova and ceph for backend storage. This obviously means libvirt is using rbd 
 to connect to volumes.

 Currently the method I use is:

 - Force cinder to run an extend operation.
 - Tell Libvirt that the attached disk has been extended.

 It would be worth discussing whether this can be ported upstream so that the 
 API can handle the leg work, rather than this current manual method.

 Detailed instructions.
 You will need: the volume-id of the volume you want to resize, and the 
 hypervisor_hostname and instance_name of the instance the volume is attached to.

 Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to 
 instance-0012 on node-6 to 100GB

 $ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
 $ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
 $ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738

 $ ssh node-6
 node-6$ virsh qemu-monitor-command instance-0012 --hmp "info block" | grep f9fa66ab-b29a-40f6-b4f4-e9c64a155738
 drive-virtio-disk1: removable=0 io-status=ok 
 file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=keyhere==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789
  ro=0 drv=raw

Re: [openstack-dev] [nova][cinder] Extending attached disks

2015-08-20 Thread Taylor . Bertie
Excellent,

Good to see this is being discussed on the Cinder side.

I'll keep looking, and hopefully we'll hear back from Nova, either in this 
thread or by other means, on whether they're prepared to go ahead with this.

Regards,

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz 


-Mike Perez thin...@gmail.com wrote: -
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
From: Mike Perez thin...@gmail.com
Date: 2015-08-21 9:45
Subject: Re: [openstack-dev] [nova][cinder] Extending attached disks

On 19:53 Aug 19, John Griffith wrote:
 On Wed, Aug 19, 2015 at 7:48 PM, taylor.ber...@solnet.co.nz wrote:
 
  Hi everyone,
 
  Apologies for the duplicate send; it looks like my mail client doesn't
  create very clean HTML messages. Here is the message in plain-text. I'll
  make sure to send to the list in plain-text from now on.
 
  In my current pre-production deployment we were looking for a method to
  live extend attached volumes to an instance. This was one of the
  requirements for deployment. I've worked with libvirt hypervisors before so
  it didn't take long to find a workable solution. However I'm not sure how
  transferable this will be across deployment models. Our deployment model is
  using libvirt for nova and ceph for backend storage. This means obviously
  libvirt is using rbd to connect to volumes.

snip
 
 Hey Taylor,
 
 This is something that has come up a number of times, but I personally 
 didn't have a good solution for it on the iSCSI side.  I'm not sure if your 
 method would work with iSCSI-attached devices, because typically you need to 
 detach/reattach for size changes to take effect; in other words, I'm 
 uncertain whether libvirt would be able to see the changes.  That being said, I 
 also didn't know about this option in libvirt, so it may work out.

This was discussed at the last Cinder midcycle meetup [1]. I'm not really
involved with working out the solution here, but the people involved were
saying that there is a solution: having os-brick, the shared library between
Cinder and Nova, be able to inform libvirt of the size change.

The only caveat that we need Nova to agree on is that this would only work with
libvirt. This is planned for the M release.

[1] - https://etherpad.openstack.org/p/cinder-meetup-summer-2015

-- 
Mike Perez

[openstack-dev] [nova][cinder] Extending attached disks

2015-08-19 Thread Taylor . Bertie
Hi everyone,

Apologies for the duplicate send; it looks like my mail client doesn't create 
very clean HTML messages. Here is the message in plain-text. I'll make sure to 
send to the list in plain-text from now on.

In my current pre-production deployment we were looking for a method to live 
extend attached volumes to an instance. This was one of the requirements for 
deployment. I've worked with libvirt hypervisors before, so it didn't take long 
to find a workable solution. However, I'm not sure how transferable this will be 
across deployment models. Our deployment model is using libvirt for nova and 
ceph for backend storage. This obviously means libvirt is using rbd to connect 
to volumes.

Currently the method I use is:

- Force cinder to run an extend operation.
- Tell Libvirt that the attached disk has been extended.

It would be worth discussing whether this can be ported upstream so that the 
API can handle the leg work, rather than this current manual method.

Detailed instructions.
You will need: the volume-id of the volume you want to resize, and the 
hypervisor_hostname and instance_name of the instance the volume is attached to.

Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to 
instance-0012 on node-6 to 100GB

$ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
$ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
$ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738
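
For context: the reset-state steps are only there because cinder (at least at 
the time of writing) refuses to extend a volume while it is in-use, so the 
state is flipped to available, the volume is extended, and the state is flipped 
back. A quick sanity check afterwards, using the standard cinder CLI:

$ cinder show f9fa66ab-b29a-40f6-b4f4-e9c64a155738 | grep -E ' size | status '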

$ ssh node-6
node-6$ virsh qemu-monitor-command instance-0012 --hmp "info block" | grep f9fa66ab-b29a-40f6-b4f4-e9c64a155738
drive-virtio-disk1: removable=0 io-status=ok 
file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=keyhere==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789
 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

This will get you the disk-id, which in this case is drive-virtio-disk1.

node-6$ virsh qemu-monitor-command instance-0012 --hmp "block_resize drive-virtio-disk1 100G"
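
The same thing can also be done through virsh's own wrapper instead of the raw 
monitor command; a sketch, assuming the volume shows up as guest target vdb 
(check virsh domblklist to find the right target for the drive):

node-6$ virsh domblklist instance-0012
node-6$ virsh blockresize instance-0012 vdb 100G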

Finally, you need to perform a drive rescan on the actual instance and then 
extend the file-system. This will be OS-specific.
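
As a rough example of that last step for a Linux guest with an ext4 file-system 
sitting directly on the virtio disk (the device name /dev/vdb is an assumption, 
adjust for your layout; use growpart first if the disk is partitioned, or 
xfs_growfs for XFS):

guest$ dmesg | tail                 # check whether the kernel noticed the resize
guest$ lsblk /dev/vdb               # confirm the new capacity is visible
guest$ sudo resize2fs /dev/vdb      # grow an ext4 file-system online
guest$ df -h                        # confirm the mounted file-system grew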

I've tested this a few times and it seems very reliable.

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz 
