Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-25 Thread Boxiang Zhu


Great, Jon. Thanks for your reply. I am looking forward to your report.


Cheers,
Boxiang
On 10/23/2018 22:01, Jon Bernard wrote:
* melanie witt  wrote:
On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph' [so the volume
will be created on one of the two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it fails with the exception
'NotImplementedError(_("Swap only supports host devices"))'.

So my real problem is: is there any work to migrate an *in-use* *ceph rbd*
volume from one host (pool) to another host (pool)
in the same ceph cluster?
The only difference between the spec[2] and my scope is that the spec covers
*available* volumes while mine are *in-use*.


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.
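
To illustrate, here is a simplified, self-contained sketch of that nova-side
gate (paraphrased from nova's libvirt driver, not verbatim): a network-backed
disk such as RBD carries a source *name* rather than a source *path*, so the
swap path refuses it.

    class FakeGuestDiskConfig(object):
        """Stand-in for the libvirt disk config object nova builds."""
        def __init__(self, source_type, source_path=None, source_name=None):
            self.source_type = source_type   # 'block' or 'network'
            self.source_path = source_path   # set for host block devices
            self.source_name = source_name   # e.g. 'volumes001/volume-...'

    def check_swappable(conf):
        # Mirrors the check behind the exception quoted above.
        if not conf.source_path:
            raise NotImplementedError("Swap only supports host devices")

    check_swappable(FakeGuestDiskConfig('block', source_path='/dev/sdb'))  # OK
    check_swappable(FakeGuestDiskConfig('network',
                                        source_name='volumes001/volume-x'))
    # -> NotImplementedError: Swap only supports host devices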

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently, Qemu and Libvirt did not support
migration to non-block (RBD) targets, which is the reason for the lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

--
Jon


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621

Cheers,
-melanie








Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread melanie witt

On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph' [so the volume
will be created on one of the two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it fails with the exception
'NotImplementedError(_("Swap only supports host devices"))'.

So my real problem is: is there any work to migrate an *in-use* *ceph rbd*
volume from one host (pool) to another host (pool)
in the same ceph cluster?
The only difference between the spec[2] and my scope is that the spec covers
*available* volumes while mine are *in-use*.


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?


Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently, Qemu and Libvirt did not support
migration to non-block (RBD) targets, which is the reason for the lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.


OK, thanks for this info, Jon. I'll be interested in your findings.

Cheers,
-melanie






Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread Jay S. Bryant



On 10/23/2018 9:01 AM, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph' [so the volume
will be created on one of the two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it fails with the exception
'NotImplementedError(_("Swap only supports host devices"))'.

So my real problem is: is there any work to migrate an *in-use* *ceph rbd*
volume from one host (pool) to another host (pool)
in the same ceph cluster?
The only difference between the spec[2] and my scope is that the spec covers
*available* volumes while mine are *in-use*.


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently, Qemu and Libvirt did not support
migration to non-block (RBD) targets, which is the reason for the lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

Jon,

Thanks for the explanation and investigation!

Jay




Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > 
> > The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
> > is only available-volume migration between two pools in the same
> > ceph cluster.
> > If the volume is in in-use status[2], the generic migration function is
> > called. Then, as you described, on the nova side it raises
> > NotImplementedError(_("Swap only supports host devices")).
> > The get_config of a net volume[3] has no source_path.
> 
> Ah, OK, so you're trying to migrate a volume across two separate ceph
> clusters, and that is not supported.
> 
> > So has anyone succeeded in migrating an in-use volume with the ceph
> > backend, or is anyone working on it?
> 
> Hopefully someone can share their experience with trying to migrate volumes
> across separate ceph clusters. I unfortunately don't know anything about it.

If this is the case, then Cinder cannot request a storage-specific
migration, which is typically more efficient.  The migration will require
a complete copy of each allocated block.  Whether the volume is attached
or not determines who (cinder or nova) will perform the operation.

-- 
Jon

> 
> Best,
> -melanie
> 
> > [1] https://review.openstack.org/#/c/296150
> > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
> > [3] 
> > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101
> 
> 
> 
> 
> 


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > I created a new vm and a new volume with type 'ceph' [so the volume
> > will be created on one of the two hosts; I assume the volume was created on
> > host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
> > vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
> > host dev@rbd-2#ceph, but it fails with the exception
> > 'NotImplementedError(_("Swap only supports host devices"))'.
> > 
> > So my real problem is: is there any work to migrate an *in-use* *ceph rbd*
> > volume from one host (pool) to another host (pool)
> > in the same ceph cluster?
> > The only difference between the spec[2] and my scope is that the spec covers
> > *available* volumes while mine are *in-use*.
> > 
> > 
> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > [2] https://review.openstack.org/#/c/296150
> 
> Ah, I think I understand now, thank you for providing all of those details.
> And I think you explained it in your first email, that cinder supports
> migration of ceph volumes if they are 'available' but not if they are
> 'in-use'. Apologies that I didn't get your meaning the first time.
> 
> I see now the code you were referring to is this [3]:
> 
> if volume.status not in ('available', 'retyping', 'maintenance'):
>     LOG.debug('Only available volumes can be migrated using backend '
>               'assisted migration. Falling back to generic migration.')
>     return refuse_to_migrate
> 
> So because your volume is not 'available', 'retyping', or 'maintenance',
> it's falling back to generic migration, which will end up with an error in
> nova because the source_path is not set in the volume config.
> 
> Can anyone from the cinder team chime in about whether the ceph volume
> migration could be expanded to allow migration of 'in-use' volumes? Is there
> a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently, Qemu and Libvirt did not support
migration to non-block (RBD) targets, which is the reason for the lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.
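
To make the qemu/libvirt piece concrete: the primitive involved is a live
block copy to a network (RBD) destination. A hedged sketch with the libvirt
Python bindings follows (domain, pool, and monitor names are made up; real
code must create the destination image first, poll blockJobInfo() until the
job is ready, and handle flags/auth details that are omitted here):

    import libvirt

    # Destination disk XML: a network (RBD) target, which older
    # libvirt/qemu versions could not use as a block copy destination.
    DEST_XML = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes002/volume-0000'>
        <host name='ceph-mon.example' port='6789'/>
      </source>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    dom.blockCopy('vda', DEST_XML)              # start the live copy
    # ... wait until dom.blockJobInfo('vda') reports the job as ready ...
    dom.blockJobAbort('vda', libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)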

-- 
Jon

> 
> [3] 
> https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621
> 
> Cheers,
> -melanie
> 
> 
> 
> 
> 
> 


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-22 Thread melanie witt

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph' [so the volume
will be created on one of the two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it fails with the exception
'NotImplementedError(_("Swap only supports host devices"))'.


So my real problem is: is there any work to migrate an *in-use* *ceph rbd*
volume from one host (pool) to another host (pool)
in the same ceph cluster?
The only difference between the spec[2] and my scope is that the spec covers
*available* volumes while mine are *in-use*.



[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those 
details. And I think you explained it in your first email, that cinder 
supports migration of ceph volumes if they are 'available' but not if 
they are 'in-use'. Apologies that I didn't get your meaning the first time.


I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance', 
it's falling back to generic migration, which will end up with an error 
in nova because the source_path is not set in the volume config.


Can anyone from the cinder team chime in about whether the ceph volume 
migration could be expanded to allow migration of 'in-use' volumes? Is 
there a reason not to allow migration of 'in-use' volumes?


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621


Cheers,
-melanie








Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Boxiang Zhu


Jay and Melanie, apologies for the confusion; I should have described my
problem more clearly. My problem is not migrating volumes between
two ceph clusters.


I have two clusters: one is an openstack cluster (all-in-one env, hostname is dev)
and the other is a ceph cluster. I will omit the standard integration configuration
for openstack and ceph.[1] The relevant part of cinder.conf is as follows:


[DEFAULT]
enabled_backends = rbd-1,rbd-2
..
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81
[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81


There will be two hosts named dev@rbd-1#ceph and dev@rbd-2#ceph.
Then I create a volume type named 'ceph' with the command 'cinder type-create 
ceph' and add the extra spec 'volume_backend_name=ceph' to it with the command 
'cinder type-key  set volume_backend_name=ceph'. 


I created a new vm and a new volume with type 'ceph' [so the volume will be
created on one of the two hosts; I assume the volume was created on host
dev@rbd-1#ceph this time]. The next step is to attach the volume to the vm.
Finally I want to migrate the volume from host dev@rbd-1#ceph to host
dev@rbd-2#ceph, but it fails with the exception
'NotImplementedError(_("Swap only supports host devices"))'.
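
For reference, the failing migration is triggered like this (a minimal
python-cinderclient sketch, assuming the usual OS_* environment variables are
set and '<volume-id>' is a placeholder):

    import os
    from cinderclient import client

    cinder = client.Client('3',
                           os.environ['OS_USERNAME'],
                           os.environ['OS_PASSWORD'],
                           os.environ['OS_PROJECT_NAME'],
                           auth_url=os.environ['OS_AUTH_URL'])
    vol = cinder.volumes.get('<volume-id>')
    # Equivalent to `cinder migrate <volume-id> dev@rbd-2#ceph`; the host
    # string follows the host@backend#pool convention used above.
    cinder.volumes.migrate_volume(vol, 'dev@rbd-2#ceph',
                                  force_host_copy=False, lock_volume=False)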


So my real problem is: is there any work to migrate an in-use ceph rbd
volume from one host (pool) to another host (pool) in the same
ceph cluster?
The only difference between the spec[2] and my scope is that the spec covers
available volumes while mine are in-use.




[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Cheers,
Boxiang
On 10/21/2018 23:19, Jay S. Bryant wrote:

Boxiang,

I have not heard any discussion of extending this functionality for Ceph to work 
between different Ceph clusters.  I wasn't aware, however, that the existing 
spec was limited to one Ceph cluster.  So, that is good to know.

I would recommend reaching out to Jon Bernard or Eric Harney for guidance on 
how to proceed.  They work closely with the Ceph driver and could provide 
insight.

Jay




On 10/19/2018 10:21 AM, Boxiang Zhu wrote:



Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
is only available-volume migration between two pools in the same ceph
cluster.
If the volume is in in-use status[2], the generic migration function is
called. Then, as you described, on the nova side it raises
NotImplementedError(_("Swap only supports host devices")).
The get_config of a net volume[3] has no source_path.


So has anyone succeeded in migrating an in-use volume with the ceph backend,
or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create a volume and attach it to a vm, I
can migrate the in-use volume from one host to another. The nova
libvirt driver calls 'rebase' to finish it. But with the ceph backend,
it raises the exception 'Swap only supports host devices', so migrating
an in-use volume is not supported. Is anyone working on this now, or
is there any way for me to migrate an in-use volume with the ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices")). So in your
environment, the volume configs must be missing a source_path.

If you are using at least the Queens version, then something additional
must be missing that we would need to address to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie






Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Jay S. Bryant

Boxiang,

I have not heard any discussion of extending this functionality for Ceph 
to work between different Ceph clusters.  I wasn't aware, however, that 
the existing spec was limited to one Ceph cluster. So, that is good to know.


I would recommend reaching out to Jon Bernard or Eric Harney for 
guidance on how to proceed.  They work closely with the Ceph driver and 
could provide insight.


Jay


On 10/19/2018 10:21 AM, Boxiang Zhu wrote:


Hi melanie, thanks for your reply.

The version of my cinder and nova is Rocky. The scope of the cinder
spec[1]
is only available-volume migration between two pools in the same
ceph cluster.
If the volume is in in-use status[2], the generic migration function is
called. Then, as you described, on the nova side it raises
NotImplementedError(_("Swap only supports host devices")).

The get_config of a net volume[3] has no source_path.

So has anyone succeeded in migrating an in-use volume with the ceph
backend, or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] 
https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101



Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:


On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:

When I use the LVM backend to create a volume and attach it to a vm, I
can migrate the in-use volume from one host to another. The nova
libvirt driver calls 'rebase' to finish it. But with the ceph backend,
it raises the exception 'Swap only supports host devices', so migrating
an in-use volume is not supported. Is anyone working on this now, or
is there any way for me to migrate an in-use volume with the ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:


https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices")). So in your
environment, the volume configs must be missing a source_path.

If you are using at least the Queens version, then something additional
must be missing that we would need to address to make the migration
work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
is only available-volume migration between two pools in the same ceph
cluster.
If the volume is in in-use status[2], the generic migration function is
called. Then, as you described, on the nova side it raises
NotImplementedError(_("Swap only supports host devices")).

The get_config of a net volume[3] has no source_path.


Ah, OK, so you're trying to migrate a volume across two separate ceph 
clusters, and that is not supported.


So has anyone succeeded in migrating an in-use volume with the ceph
backend, or is anyone working on it?


Hopefully someone can share their experience with trying to migrate 
volumes across separate ceph clusters. I unfortunately don't know 
anything about it.


Best,
-melanie


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101








Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread Boxiang Zhu


Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
is only available-volume migration between two pools in the same ceph
cluster.
If the volume is in in-use status[2], the generic migration function is
called. Then, as you described, on the nova side it raises
NotImplementedError(_("Swap only supports host devices")).
The get_config of a net volume[3] has no source_path.


So has anyone succeeded in migrating an in-use volume with the ceph backend,
or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create a volume and attach it to a vm, I
can migrate the in-use volume from one host to another. The nova
libvirt driver calls 'rebase' to finish it. But with the ceph backend,
it raises the exception 'Swap only supports host devices', so migrating
an in-use volume is not supported. Is anyone working on this now, or
is there any way for me to migrate an in-use volume with the ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices")). So in your
environment, the volume configs must be missing a source_path.

If you are using at least the Queens version, then something additional
must be missing that we would need to address to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create a volume and attach it to a vm, I
can migrate the in-use volume from one host to another. The nova
libvirt driver calls 'rebase' to finish it. But with the ceph backend,
it raises the exception 'Swap only supports host devices', so migrating
an in-use volume is not supported. Is anyone working on this now, or
is there any way for me to migrate an in-use volume with the ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to 
enable migration of in-use volumes with ceph semi-recently (Queens).


On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices")). So in your
environment, the volume configs must be missing a source_path.


If you are using at least the Queens version, then something additional
must be missing that we would need to address to make the migration work.


[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







[openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-18 Thread Boxiang Zhu


Hi folks,
When I use the LVM backend to create a volume and attach it to a vm, I can
migrate the in-use volume from one host to another; the nova libvirt driver
calls 'rebase' to finish it. But with the ceph backend, it raises the
exception 'Swap only supports host devices', so migrating an in-use volume
is not supported. Is anyone working on this now, or is there any way for me
to migrate an in-use volume with the ceph backend?


Cheers,
Boxiang



Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Jay S Bryant



On 10/8/2018 8:54 AM, Sean McGinnis wrote:

On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:

In Denver, we agreed to add a new "re-image" API in cinder to support
volume-backed server rebuild with a new image.

An initial blueprint has been drafted in [3]; reviews are welcome, thanks.
: )

[snip]

The "force" parameter idea comes from [4] and means that:
1. we can re-image an "available" volume directly.
2. we can't re-image an "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with the "force"
parameter.

And it means nova would always need to call the re-image API with an extra
"force" parameter,
because the volume status is "in-use" or "reserved" when we rebuild the
server.

*So, what's your idea? Do we really want to add this "force" parameter?*


I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.
I concur with Sean's assessment.  I think putting a safety switch in 
place in this design is important to ensure that people using the API 
directly are less likely to do something that they may not actually want 
to do.


Jay

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4]
https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Matt Riedemann

On 10/9/2018 8:04 AM, Erlon Cruz wrote:
If you are planning to re-image a bootable volume then yes, 
you should use a force parameter. I missed the discussion about this 
at the PTG. What are the main use cases? This seems to me something that 
could be leveraged with the current revert-to-snapshot API, which would 
be even better. The flow would be:


1 - create a volume from an image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your use cases?


As the spec mentions, this is for enabling re-imaging the root volume on 
a server when nova rebuilds the server. That is not allowed today 
because the compute service can't re-image the root volume. We don't 
want to jump through a bunch of gross alternative hoops to create a new 
root volume with the new image and swap them out (the reasons why are in 
the spec, and have been discussed previously in the ML). So nova is 
asking cinder to provide an API to change the image in a volume which 
the nova rebuild operation will use to re-image the root volume on a 
volume-backed server. I don't know if revert-to-snapshot solves that use 
case, but it doesn't sound like it. With the nova rebuild API, the user 
provides an image reference and that is used to re-image the root disk 
on the server. So it might not be a snapshot, it could be something new.


--

Thanks,

Matt



Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Erlon Cruz
If you are planning to re-image a bootable volume then yes, you should
use a force parameter. I missed the discussion about this at the PTG.
What are the main use cases? This seems to me something that could be
leveraged with the current revert-to-snapshot API, which would be even
better. The flow would be:

1 - create a volume from an image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your use cases?
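
As a sketch of that flow with python-cinderclient (revert-to-snapshot needs
API microversion 3.40 or later; the image id is a placeholder):

    import os
    from cinderclient import client

    cinder = client.Client('3.40',
                           os.environ['OS_USERNAME'],
                           os.environ['OS_PASSWORD'],
                           os.environ['OS_PROJECT_NAME'],
                           auth_url=os.environ['OS_AUTH_URL'])

    vol = cinder.volumes.create(size=10, imageRef='<image-id>',
                                name='boot-vol')                    # step 1
    snap = cinder.volume_snapshots.create(vol.id, name='pristine')  # step 2
    # step 3 - do whatever you want with the volume ...
    cinder.volumes.revert_to_snapshot(vol, snap)                    # step 4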

On Mon, 8 Oct 2018 at 10:54, Sean McGinnis wrote:

> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> > In Denver, we agreed to add a new "re-image" API in cinder to support
> > volume-backed server rebuild with a new image.
> >
> > An initial blueprint has been drafted in [3]; reviews are welcome, thanks.
> > : )
> >
> > [snip]
> >
> > The "force" parameter idea comes from [4] and means that:
> > 1. we can re-image an "available" volume directly.
> > 2. we can't re-image an "in-use"/"reserved" volume directly.
> > 3. we can only re-image an "in-use"/"reserved" volume with the "force"
> > parameter.
> >
> > And it means nova would always need to call the re-image API with an extra
> > "force" parameter,
> > because the volume status is "in-use" or "reserved" when we rebuild the
> > server.
> >
> > *So, what's your idea? Do we really want to add this "force" parameter?*
> >
>
> I would prefer we have the "force" parameter, even if it is something that
> will
> always be defaulted to True from Nova.
>
> Having this exposed as a REST API means anyone could call it, not just Nova
> code. So as protection from someone doing something that they are not
> really
> clear on the full implications of, having a flag in there to guard volumes
> that
> are already attached or reserved for shelved instances is worth the very
> minor
> extra overhead.
>
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild
> L12
> > [3] https://review.openstack.org/#/c/605317
> > [4]
> >
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> >
> > Regards,
> > Yikun
> > 
> > Jiang Yikun(Kero)
> > Mail: yikunk...@gmail.com
>


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Sean McGinnis
On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> In Denver, we agreed to add a new "re-image" API in cinder to support
> volume-backed server rebuild with a new image.
> 
> An initial blueprint has been drafted in [3]; reviews are welcome, thanks.
> : )
> 
> [snip]
> 
> The "force" parameter idea comes from [4] and means that:
> 1. we can re-image an "available" volume directly.
> 2. we can't re-image an "in-use"/"reserved" volume directly.
> 3. we can only re-image an "in-use"/"reserved" volume with the "force"
> parameter.
> 
> And it means nova would always need to call the re-image API with an extra
> "force" parameter,
> because the volume status is "in-use" or "reserved" when we rebuild the
> server.
> 
> *So, what's your idea? Do we really want to add this "force" parameter?*
> 

I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.

> [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
> [3] https://review.openstack.org/#/c/605317
> [4]
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> 
> Regards,
> Yikun
> 
> Jiang Yikun(Kero)
> Mail: yikunk...@gmail.com



[openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Yikun Jiang
In Denver, we agreed to add a new "re-image" API in cinder to support
volume-backed server rebuild with a new image.

An initial blueprint has been drafted in [3]; reviews are welcome, thanks.
: )

The API is very simple, something like:

URL:

  POST /v3/{project_id}/volumes/{volume_id}/action

Request body:

  {
  'os-reimage': {
  'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90"
  }
  }

The question is: do we need a "force" parameter in the request body? Like:

  {
  'os-reimage': {
  'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90",
  'force': True
  }
  }

The "force" parameter idea comes from [4] and means that:
1. we can re-image an "available" volume directly.
2. we can't re-image an "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with the "force"
parameter.

And it means nova would always need to call the re-image API with an extra
"force" parameter,
because the volume status is "in-use" or "reserved" when we rebuild the
server.
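
For concreteness, a hedged sketch of what such a call could look like over
the REST API (the action name and body come from this draft and may change
before the blueprint merges; the endpoint, token, and ids are placeholders):

    import requests

    token = '<keystone-token>'
    url = ('http://controller:8776/v3/<project-id>'
           '/volumes/<volume-id>/action')
    body = {
        'os-reimage': {
            'image_id': '71543ced-a8af-45b6-a5c4-a46282108a90',
            # Only needed when the volume is "in-use"/"reserved",
            # per the proposal above.
            'force': True,
        }
    }
    resp = requests.post(url, json=body, headers={'X-Auth-Token': token})
    resp.raise_for_status()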

*So, what's your idea? Do we really want to add this "force" parameter?*

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4]
https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com


[openstack-dev] [cinder][nova][placement] Doodle Calendar Created for Placement Discussion

2018-09-06 Thread Jay S Bryant

All,

We discussed in our weekly meeting yesterday that it might be good to 
plan an additional meeting at the PTG to continue discussions regarding 
Cinder's use of the Placement Service.


I have looked at the room schedule [1] and there are quite a few open 
rooms on Monday.  Fewer rooms on Tuesday but there are still some 
options each day.


Please fill out the poll [2] ASAP if you are interested in attending, and 
I will reserve a room as soon as it looks like we have quorum.


Thank you!

Jay

[1] http://ptg.openstack.org/ptg.html

[2] https://doodle.com/poll/4twwhy46bxerrthx




[openstack-dev] [cinder][nova] - Barbican w/Live Migration in DevStack Multinode

2018-07-30 Thread Walsh, Helen
Hi OpenStack Community,

I am having some issues with key management in a multinode devstack (from 
master branch 27th July '18) environment where Barbican is the configured 
key_manager.  I have followed setup instructions from the following pages:

  *   https://docs.openstack.org/barbican/latest/contributor/devstack.html 
(manual configuration)
  *   
https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-encryption.html

So far:

  *   Unencrypted block volumes can be attached to instances on any compute node
  *   Instances with unencrypted volumes can also be live migrated to other 
compute node
  *   Encrypted bootable volumes created successfully
  *   Instances can be launched using these encrypted volumes when the instance 
is spawned on demo_machine1 (controller & compute node)
  *   Instances cannot be launched using encrypted volumes when the instance is 
spawned on demo_machine2 or demo_machine3 (compute only); the same failure can 
be seen in the nova logs from both compute nodes:

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG cinderclient.v3.client 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] GET call to 
cinderv3 for 
http://10.0.0.63/volume/v3/3f22a0262a7b4832a08c24ac0295cbd9/volumes/296148bf-edb8-4c9f-88c2-44464907f7e7/encryption
 used request id req-71fa7f20-c0bc-46c3-9f07-5866344d31a1 {{(pid=25686) request 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:844}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG os_brick.encryptors 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Using volume 
encryption metadata '{u'cipher': u'aes-xts-plain64', u'encryption_key_id': 
u'da7ee21c-67ff-4d74-95a0-18ee6c25d85a', u'provider': u'luks', u'key_size': 
256, u'control_location': u'front-end'}' for connection: {'status': 
u'attaching', 'detached_at': u'', u'volume_id': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'attach_mode': u'null', 
'driver_volume_type': u'iscsi', 'instance': 
u'e0dc6eac-09bb-4232-bea7-7b8b161cfa31', 'attached_at': 
u'2018-07-30T13:35:17.00', 'serial': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'data': {'device_path': 
'/dev/disk/by-id/scsi-SEMC_SYMMETRIX_900049_wy000', u'target_discovered': True, 
u'encrypted': True, u'qos_specs': None, u'target_iqn': 
u'iqn.1992-04.com.emc:69700bcbb7112504018f', u'target_portal': 
u'192.168.0.60:3260', u'volume_id': u'296148bf-edb8-4c9f-88c2-44464907f7e7', 
u'target_lun': 1, u'access_mode': u'rw'}} {{(pid=25686) get_encryption_metadata 
/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py:125}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: WARNING 
keystoneauth.identity.generic.base [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Failed to discover 
available identity versions when contacting http://localhost/identity/v3. 
Attempting to parse version from URL.: NotFound: Not Found (HTTP 404)

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: ERROR 
castellan.key_manager.barbican_key_manager [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Error creating Barbican 
client: Could not find versioned identity endpoints when attempting to 
authenticate. Please check that your auth_url is correct. Not Found (HTTP 404): 
DiscoveryFailure: Could not find versioned identity endpoints when attempting 
to authenticate. Please check that your auth_url is correct. Not Found (HTTP 
404)

All instance of Nova have [key_manager] configured as follows:
[key_manager]
backend = barbican
auth_url = http://10.0.0.63/identity/
### Tried with and without the below config options, same result
# auth_type = password
# password = devstack
# username = barbican
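
Note that the failure log above shows nova contacting
http://localhost/identity/v3 even though auth_url is set to
http://10.0.0.63/identity/, so some code path may not be picking up this
option. A quick way to check what a given auth_url actually serves is
keystoneauth1 version discovery (a minimal diagnostic sketch; keystoneauth1
is already a nova dependency):

    from keystoneauth1 import discover, session

    sess = session.Session()
    # The URL from [key_manager] above; compare with what the failing code
    # is actually hitting (http://localhost/identity/v3 per the log).
    d = discover.Discover(sess, 'http://10.0.0.63/identity/')
    print(d.version_data())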

Any assistance here would be greatly appreciated. I have spent a lot of time 
looking for additional information on using Barbican in multinode devstack 
environments or with live migration, but there is nothing out there; everything 
is for all-in-one environments, and I'm not having any issues when everything 
is on one node. I am wondering if at this point there is something I am missing 
in terms of services in a multinode devstack environment. Qualification of 
barbican in a multinode environment is outside the recommended test config, 
but following the docs it looks very straightforward.

Some information on the three nodes in my environment is below; if there is 
any other information I can provide, let me know. Thanks for the help!

Node & Service Breakdown
Node 1 (Controller & Compute)
stack@demo_machine1:~$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 43a1334c755c4c81969565097cc9c30c | cinder      | volume         |
| 52a8927c09154e33900f24c7c95a9f8b | cinderv2    | volumev2       |
| 5427a9dff3b6477197062e1747843c4d | nova_legacy | compute_legacy |
| 5b319b6d5063466199

Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-24 Thread Lee Yarwood
On 20-07-18 08:10:37, Erlon Cruz wrote:
> Nice, good to know. Thanks all for the feedback. We will fix that in our
> drivers.

FWIW Nova does not and AFAICT never has called os-force_detach.

We previously used os-terminate_connection with v2 where the connector
was optional. Even then we always provided one, even providing the
destination connector during an evacuation when the source connector
wasn't stashed in connection_info.
 
> @Walter, so, in this case, if Cinder has the connector, it should not need
> to call the driver passing a None object, right?

Yeah I don't think this is an issue with v3 given the connector is
stashed with the attachment, so all we require is a reference to the
attachment to cleanup the connection during evacuations etc.

Lee
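
Given the consensus in this thread (a None connector means "remove every
connection for the volume"), a driver-side gate could look roughly like the
following hypothetical sketch (_terminate_for_connector is an illustrative
helper, not a real cinder API):

    def terminate_connection(self, volume, connector, **kwargs):
        """Hypothetical driver sketch, not actual cinder code."""
        if connector is None:
            # Forced detach: drop every connection recorded for the
            # volume (covers multiattach and live-migration leftovers).
            for attachment in volume.volume_attachment:
                self._terminate_for_connector(volume, attachment.connector)
        else:
            self._terminate_for_connector(volume, connector)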
 
> Erlon
> 
> On Wed, 18 Jul 2018 at 12:56, Walter Boring wrote:
> 
> > The whole purpose of this test is to simulate the case where Nova doesn't
> > know where the vm is anymore, or the vm may simply no longer exist,
> > but we need to clean up the cinder side of things. That being said,
> > with the new attach API, the connector is saved in the cinder database
> > for each volume attachment.
> >
> > Walt
> >
> > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> > wrote:
> >
> >> On 17/07, Sean McGinnis wrote:
> >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> >> > > Hi Cinder and Nova folks,
> >> > >
> >> > > Working on some tests for our drivers, I stumbled upon this tempest
> >> > > test 'force_detach_volume'
> >> > > that is calling the Cinder API passing a 'None' connector. At the time
> >> > > this was added, several CIs
> >> > > went down, and people started discussing whether this
> >> > > (accepting/sending a None connector)
> >> > > would be the proper behavior for what a driver is expected to
> >> > > do[1]. So, some CIs started
> >> > > just skipping that test[2][3][4] and others implemented fixes that
> >> > > made the driver disconnect
> >> > > the volume from all hosts if a None connector was received[5][6][7].
> >> >
> >> > Right, it was determined the correct behavior for this was to
> >> disconnect the
> >> > volume from all hosts. The CIs that are skipping this test should stop
> >> doing so
> >> > (once their drivers are fixed of course).
> >> >
> >> > >
> >> > > While implementing this fix seems to be straightforward, I feel that
> >> just
> >> > > removing the volume
> >> > > from all hosts is not the correct thing to do mainly considering that
> >> we
> >> > > can have multi-attach.
> >> > >
> >> >
> >> > I don't think multiattach makes a difference here. Someone is forcibly
> >> > detaching the volume and not specifying an individual connection. So
> >> based on
> >> > that, Cinder should be removing any connections, whether that is to one
> >> or
> >> > several hosts.
> >> >
> >>
> >> Hi,
> >>
> >> I agree with Sean, drivers should remove all connections for the volume.
> >>
> >> Even without multiattach there are cases where you'll have multiple
> >> connections for the same volume, like in a Live Migration.
> >>
> >> It's also very useful when Nova and Cinder get out of sync and your
> >> volume has leftover connections. In this case if you try to delete the
> >> volume you get a "volume in use" error from some drivers.
> >>
> >> Cheers,
> >> Gorka.
> >>
> >>
> >> > > So, my questions are: What is the best way to fix this problem? Should
> >> > > the Cinder API continue to
> >> > > accept detachments with None connectors? If so, what would be the
> >> > > effects on other Nova
> >> > > attachments for the same volume? Is there any side effect if the
> >> > > volume is not multi-attached?
> >> > >
> >> > > Additionally to this thread here, I should bring this topic to
> >> tomorrow's
> >> > > Cinder's meeting,
> >> > > so please join if you have something to share.
> >> > >
> >> >
> >> > +1 - good plan.
> >> >
> >> >
> >> >

Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-20 Thread Erlon Cruz
Nice, good to know. Thanks all for the feedback. We will fix that in our
drivers.

@Walter, so, in this case, if Cinder has the connector, it should not need
to call the driver passing a None object, right?

Erlon

On Wed, 18 Jul 2018 at 12:56, Walter Boring wrote:

> The whole purpose of this test is to simulate the case where Nova doesn't
> know where the vm is anymore, or the vm may simply no longer exist,
> but we need to clean up the cinder side of things. That being said,
> with the new attach API, the connector is saved in the cinder database
> for each volume attachment.
>
> Walt
>
> On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> wrote:
>
>> On 17/07, Sean McGinnis wrote:
>> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
>> > > Hi Cinder and Nova folks,
>> > >
>> > > Working on some tests for our drivers, I stumbled upon this tempest
>> > > test 'force_detach_volume'
>> > > that is calling the Cinder API passing a 'None' connector. At the time
>> > > this was added, several CIs
>> > > went down, and people started discussing whether this
>> > > (accepting/sending a None connector)
>> > > would be the proper behavior for what a driver is expected to
>> > > do[1]. So, some CIs started
>> > > just skipping that test[2][3][4] and others implemented fixes that
>> > > made the driver disconnect
>> > > the volume from all hosts if a None connector was received[5][6][7].
>> >
>> > Right, it was determined the correct behavior for this was to
>> disconnect the
>> > volume from all hosts. The CIs that are skipping this test should stop
>> doing so
>> > (once their drivers are fixed of course).
>> >
>> > >
>> > > While implementing this fix seems to be straightforward, I feel that
>> just
>> > > removing the volume
>> > > from all hosts is not the correct thing to do mainly considering that
>> we
>> > > can have multi-attach.
>> > >
>> >
>> > I don't think multiattach makes a difference here. Someone is forcibly
>> > detaching the volume and not specifying an individual connection. So
>> based on
>> > that, Cinder should be removing any connections, whether that is to one
>> or
>> > several hosts.
>> >
>>
>> Hi,
>>
>> I agree with Sean, drivers should remove all connections for the volume.
>>
>> Even without multiattach there are cases where you'll have multiple
>> connections for the same volume, like in a Live Migration.
>>
>> It's also very useful when Nova and Cinder get out of sync and your
>> volume has leftover connections. In this case if you try to delete the
>> volume you get a "volume in use" error from some drivers.
>>
>> Cheers,
>> Gorka.
>>
>>
>> > > So, my questions are: What is the best way to fix this problem? Should
>> > > the Cinder API continue to
>> > > accept detachments with None connectors? If so, what would be the
>> > > effects on other Nova
>> > > attachments for the same volume? Is there any side effect if the
>> > > volume is not multi-attached?
>> > >
>> > > Additionally to this thread here, I should bring this topic to
>> tomorrow's
>> > > Cinder's meeting,
>> > > so please join if you have something to share.
>> > >
>> >
>> > +1 - good plan.
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Walter Boring
The whole purpose of this test is to simulate the case where Nova doesn't
know where the vm is anymore,
or may simply not exist, but we need to clean up the cinder side of
things.   That being said, with the new
attach API, the connector is being saved in the cinder database for each
volume attachment.

Walt
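
[Editor's note: a minimal sketch of how a force detach could use those
saved connectors; the attribute names and the driver call are illustrative
assumptions for this discussion, not actual Cinder internals.]

    # Hypothetical sketch: with the attachment API, each attachment row
    # stores the connector used at attach time, so a force detach called
    # with connector=None can fall back to the saved ones.
    def force_detach(volume, connector=None):
        if connector is not None:
            driver.terminate_connection(volume, connector, force=True)
            return
        # No connector given: remove every known connection for the volume.
        for attachment in volume.volume_attachment:
            saved = attachment.connector  # may be None for legacy attachments
            driver.terminate_connection(volume, saved, force=True)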

On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor  wrote:

> On 17/07, Sean McGinnis wrote:
> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > > Hi Cinder and Nova folks,
> > >
> > > Working on some tests for our drivers, I stumbled upon this tempest
> > > test 'force_detach_volume' that is calling the Cinder API passing a
> > > 'None' connector. At the time this was added, several CIs went down,
> > > and people started discussing whether this (accepting/sending a None
> > > connector) was the proper behavior to expect from a driver[1]. So,
> > > some CIs started just skipping that test[2][3][4] while others
> > > implemented fixes that made the driver disconnect the volume from all
> > > hosts if a None connector was received[5][6][7].
> >
> > Right, it was determined the correct behavior for this was to disconnect
> > the volume from all hosts. The CIs that are skipping this test should
> > stop doing so (once their drivers are fixed, of course).
> >
> > >
> > > While implementing this fix seems to be straightforward, I feel that
> > > just removing the volume from all hosts is not the correct thing to
> > > do, mainly considering that we can have multi-attach.
> > >
> >
> > I don't think multiattach makes a difference here. Someone is forcibly
> > detaching the volume and not specifying an individual connection. So
> > based on that, Cinder should be removing any connections, whether that
> > is to one or several hosts.
> >
>
> Hi,
>
> I agree with Sean, drivers should remove all connections for the volume.
>
> Even without multiattach there are cases where you'll have multiple
> connections for the same volume, like in a Live Migration.
>
> It's also very useful when Nova and Cinder get out of sync and your
> volume has leftover connections. In this case if you try to delete the
> volume you get a "volume in use" error from some drivers.
>
> Cheers,
> Gorka.
>
>
> > > So, my questions are: What is the best way to fix this problem?
> > > Should the Cinder API continue to accept detachments with None
> > > connectors? If so, what would be the effects on other Nova
> > > attachments for the same volume? Is there any side effect if the
> > > volume is not multi-attached?
> > >
> > > In addition to this thread, I will bring this topic to tomorrow's
> > > Cinder meeting, so please join if you have something to share.
> > >
> >
> > +1 - good plan.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Gorka Eguileor
On 17/07, Sean McGinnis wrote:
> On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > Hi Cinder and Nova folks,
> >
> > Working on some tests for our drivers, I stumbled upon this tempest test
> > 'force_detach_volume' that is calling the Cinder API passing a 'None'
> > connector. At the time this was added, several CIs went down, and people
> > started discussing whether this (accepting/sending a None connector) was
> > the proper behavior to expect from a driver[1]. So, some CIs started just
> > skipping that test[2][3][4] while others implemented fixes that made the
> > driver disconnect the volume from all hosts if a None connector was
> > received[5][6][7].
>
> Right, it was determined the correct behavior for this was to disconnect
> the volume from all hosts. The CIs that are skipping this test should stop
> doing so (once their drivers are fixed, of course).
>
> >
> > While implementing this fix seems to be straightforward, I feel that just
> > removing the volume from all hosts is not the correct thing to do, mainly
> > considering that we can have multi-attach.
> >
>
> I don't think multiattach makes a difference here. Someone is forcibly
> detaching the volume and not specifying an individual connection. So based
> on that, Cinder should be removing any connections, whether that is to one
> or several hosts.
>

Hi,

I agree with Sean, drivers should remove all connections for the volume.

Even without multiattach there are cases where you'll have multiple
connections for the same volume, like in a Live Migration.

It's also very useful when Nova and Cinder get out of sync and your
volume has leftover connections. In this case if you try to delete the
volume you get a "volume in use" error from some drivers.

Cheers,
Gorka.
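
[Editor's note: the driver-side pattern the fixes in [5][6][7] converged on
looks roughly like the sketch below; _get_attached_hosts() and
_unmap_from_host() are invented placeholder names for backend-specific
calls, not a real driver API.]

    # Illustrative driver-side sketch, not actual Cinder code.
    def terminate_connection(self, volume, connector, **kwargs):
        if connector is None:
            # Force detach with no connector: drop every known mapping,
            # including leftovers from live migration or stale attachments.
            for host in self._get_attached_hosts(volume):
                self._unmap_from_host(volume, host)
            return
        self._unmap_from_host(volume, connector['host'])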


> > So, my questions are: What is the best way to fix this problem? Should
> > the Cinder API continue to accept detachments with None connectors? If
> > so, what would be the effects on other Nova attachments for the same
> > volume? Is there any side effect if the volume is not multi-attached?
> >
> > In addition to this thread, I will bring this topic to tomorrow's Cinder
> > meeting, so please join if you have something to share.
> >
>
> +1 - good plan.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Sean McGinnis
On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> Hi Cinder and Nova folks,
> 
> Working on some tests for our drivers, I stumbled upon this tempest test
> 'force_detach_volume' that is calling the Cinder API passing a 'None'
> connector. At the time this was added, several CIs went down, and people
> started discussing whether this (accepting/sending a None connector) was
> the proper behavior to expect from a driver[1]. So, some CIs started just
> skipping that test[2][3][4] while others implemented fixes that made the
> driver disconnect the volume from all hosts if a None connector was
> received[5][6][7].

Right, it was determined the correct behavior for this was to disconnect the
volume from all hosts. The CIs that are skipping this test should stop doing so
(once their drivers are fixed, of course).

> 
> While implementing this fix seems to be straightforward, I feel that just
> removing the volume from all hosts is not the correct thing to do, mainly
> considering that we can have multi-attach.
> 

I don't think multiattach makes a difference here. Someone is forcibly
detaching the volume and not specifying an individual connection. So based on
that, Cinder should be removing any connections, whether that is to one or
several hosts.

> So, my questions are: What is the best way to fix this problem? Should the
> Cinder API continue to accept detachments with None connectors? If so,
> what would be the effects on other Nova attachments for the same volume?
> Is there any side effect if the volume is not multi-attached?
> 
> In addition to this thread, I will bring this topic to tomorrow's Cinder
> meeting, so please join if you have something to share.
> 

+1 - good plan.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Erlon Cruz
Hi Cinder and Nova folks,

Working on some tests for our drivers, I stumbled upon this tempest test
'force_detach_volume' that is calling the Cinder API passing a 'None'
connector. At the time this was added, several CIs went down, and people
started discussing whether this (accepting/sending a None connector) was the
proper behavior to expect from a driver[1]. So, some CIs started just
skipping that test[2][3][4] while others implemented fixes that made the
driver disconnect the volume from all hosts if a None connector was
received[5][6][7].

While implementing this fix seems to be straightforward, I feel that just
removing the volume from all hosts is not the correct thing to do, mainly
considering that we can have multi-attach.

So, my questions are: What is the best way to fix this problem? Should the
Cinder API continue to accept detachments with None connectors? If so, what
would be the effects on other Nova attachments for the same volume? Is there
any side effect if the volume is not multi-attached?

In addition to this thread, I will bring this topic to tomorrow's Cinder
meeting, so please join if you have something to share.

Erlon

___
[1] https://bugs.launchpad.net/cinder/+bug/1686278
[2]
https://openstack-ci-logs.aws.infinidat.com/14/578114/2/check/dsvm-tempest-infinibox-fc/14fa930/console.html
[3]
http://54.209.116.144/14/578114/2/check/kaminario-dsvm-tempest-full-iscsi/ce750c8/console.html
[4]
http://logs.openstack.netapp.com/logs/14/578114/2/upstream-check/cinder-cDOT-iSCSI/8e2c549/console.html#_2018-07-16_20_06_16_937286
[5]
https://review.openstack.org/#/c/551832/1/cinder/volume/drivers/dell_emc/vnx/adapter.py
[6]
https://review.openstack.org/#/c/550324/2/cinder/volume/drivers/hpe/hpe_3par_common.py
[7]
https://review.openstack.org/#/c/536778/2/cinder/volume/drivers/infinidat.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] RBD multi-attach

2018-04-13 Thread Eric Harney
On 04/12/2018 10:25 PM, 李俊波 wrote:
> Hello Nova, Cinder developers,
>
> I would like to ask you a question concerning a Cinder patch [1].
>
> In this patch, it is mentioned that RBD features were incompatible with
> multi-attach, which disabled multi-attach for RBD. I would like to know
> which RBD features are incompatible?
>
> In the bug [2], yao ning also raised this question, and in his environment
> they did not find any problems when enabling this feature.
>
> So, I would also like to know which features in Ceph will make this
> feature unsafe?
>
> [1] https://review.openstack.org/#/c/283695/
>
> [2] https://bugs.launchpad.net/cinder/+bug/1535815
>
> Best wishes and Regards
>
> junboli

Hi,

As noted in the comment in the code [1] -- the exclusive lock feature
must be disabled.  However, this feature is required for RBD mirroring
[2], which will be the basis of Cinder volume replication for RBD.

We are currently prioritizing completing support for replication over
multi-attach for this driver, since there is more demand for that
feature.  After that, we will look more at multi-attach and how to let
deployers choose to enable replication or multi-attach.
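
[Editor's note: for anyone who wants to check this on their own cluster, a
small sketch using the python-rados/python-rbd bindings; the pool and image
names are examples.]

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')  # example pool name
        try:
            # Cinder names RBD images 'volume-<uuid>'; example name here
            with rbd.Image(ioctx, 'volume-example') as image:
                excl = bool(image.features() & rbd.RBD_FEATURE_EXCLUSIVE_LOCK)
                print('exclusive-lock enabled (multi-attach unsafe):', excl)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()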

[1]
https://git.openstack.org/cgit/openstack/cinder/tree/cinder/volume/drivers/rbd.py?id=d1bae7462e3bc#n485

[2]
http://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-image-journaling-support

Thanks,
Eric

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] RBD multi-attach

2018-04-12 Thread 李俊波
Hello Nova, Cinder developers,

I would like to ask you a question concerning a Cinder patch [1].

In this patch, it is mentioned that RBD features were incompatible with
multi-attach, which disabled multi-attach for RBD. I would like to know
which RBD features are incompatible?

In the bug [2], yao ning also raised this question, and in his environment
they did not find any problems when enabling this feature.

So, I would also like to know which features in Ceph will make this feature
unsafe?

[1] https://review.openstack.org/#/c/283695/

[2] https://bugs.launchpad.net/cinder/+bug/1535815

Best wishes and Regards

junboli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-10 Thread Gorka Eguileor
On 09/04, Sean McGinnis wrote:
> On Mon, Apr 09, 2018 at 07:00:56PM +0100, Duncan Thomas wrote:
> > Hopefully this flow means we can rebuild the root filesystem from
> > snapshot/backup too? It seems rather artificially limiting to only do
> > restore-from-image. I'd expect restore-from-snap to be a more common
> > use case, personally.
> >
>
> That could get tricky. We only support reverting to the last snapshot if we
> reuse the same volume. Otherwise, we can create a volume from snapshot, but
> I don't think it's often that the first thing a user does is create a
> snapshot on initial creation of a boot image. If it was created from the
> image cache, and the backend creates those cached volumes by using a
> snapshot, then that might be an option.
>
> But these are a lot of ifs, so that seems like it would make the logic for
> this much more complicated.
>
> Maybe a phase II optimization we can look into?
>

From the Cinder side of things I think these two would be easier than
the re-image, because we would have even fewer steps, and the
functionality to do the copying is exactly what we have now, as it will
copy the data to the same volume, so we wouldn't need to fiddle with the
UUID fields etc.

Moreover I know customers who have asked about this functionality in the
past, mostly interested in restoring the root volume of an existing VM
from a backup to preserve the system ID and not break licenses.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Matt Riedemann

On 4/9/2018 1:00 PM, Duncan Thomas wrote:

Hopefully this flow means we can rebuild the root filesystem from
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.


Hmm, now you've got me thinking about image-defined block device 
mappings, which is something you'd have if you snapshot a volume-backed 
instance and then later use that image snapshot, which has metadata 
about the volume snapshot in it, to later create (or rebuild?) a server.


Tempest has a scenario test for the boot from volume case here:

https://review.openstack.org/#/c/555495/

I should note that even if you did snapshot a volume-backed server and 
then used that image to rebuild another non-volume-backed server, nova 
won't even look at the block_device_mapping_v2 metadata in the snapshot 
image during rebuild, it doesn't treat it like boot from volume does 
where nova uses the image-defined BDM to create a new volume-backed 
instance.


And now that I've said that, I wonder if people would expect the same 
semantics for rebuild as boot from volume with those types of 
images...it makes my head hurt. Maybe mdbooth would like to weigh in on 
this given he's present in this thread.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Sean McGinnis
On Mon, Apr 09, 2018 at 07:00:56PM +0100, Duncan Thomas wrote:
> Hopefully this flow means we can rebuild the root filesystem from
> snapshot/backup too? It seems rather artificially limiting to only do
> restore-from-image. I'd expect restore-from-snap to be a more common
> use case, personally.
> 

That could get tricky. We only support reverting to the last snapshot if we
reuse the same volume. Otherwise, we can create a volume from snapshot, but I
don't think it's often that the first thing a user does is create a snapshot on
initial creation of a boot image. If it was created from the image cache, and
the backend creates those cached volumes by using a snapshot, then that might
be an option.

But these are a lot of ifs, so that seems like it would make the logic for this
much more complicated.

Maybe a phase II optimization we can look into?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Duncan Thomas
Hopefully this flow means we can rebuild the root filesystem from
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.

On 9 April 2018 at 09:51, Gorka Eguileor  wrote:
> On 06/04, Matt Riedemann wrote:
>> On 4/6/2018 5:09 AM, Matthew Booth wrote:
>> > I think you're talking at cross purposes here: this won't require a
>> > swap volume. Apart from anything else, swap volume only works on an
>> > attached volume, and as previously discussed Nova will detach and
>> > re-attach.
>> >
>> > Gorka, the Nova api Matt is referring to is called volume update
>> > externally. It's the operation required for live migrating an attached
>> > volume between backends. It's called swap volume internally in Nova.
>>
>> Yeah I was hoping we were just having a misunderstanding of what 'swap
>> volume' in nova is, which is the blockRebase for an already attached volume
>> to the guest, called from cinder during a volume retype or migration.
>>
>> As for the re-image thing, nova would be detaching the volume from the guest
>> prior to calling the new cinder re-image API, and then re-attach to the
>> guest afterward - similar to how shelve and unshelve work, and for that
>> matter how rebuild works today with non-root volumes.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Hi,
>
> Thanks for the clarification.  When I was talking about "swapping" I was
> referring to the fact that Nova will have to not only detach the volume
> locally using OS-Brick, but it will also need to use new connection
> information to do the attach after the volume has been re-imaged.
>
> As I see it, the process would look something like this:
>
> - Nova detaches volume using OS-Brick
> - Nova calls Cinder re-image passing the node's info (like we do when
>   attaching a new volume)
> - Cinder would:
>   - Ensure only that node is connected to the volume
>   - Terminate connection to the original volume
>   - If we can do optimized volume creation:
> - If encrypted volume we create a copy of the encryption key on
>   Barbican or copy the ID field from the DB and ensure we don't
>   delete the Barbican key on the delete.
> - Create new volume from image
> - Swap DB fields to preserve the UUID
> - Delete original volume
>   - If it cannot do optimized volume creation:
> - Initialize+Attach volume to Cinder node
> - DD the new image into the volume
> - Detach+Terminate volume
>   - Initialize connection for the new volume to the Nova node
>   - Return connection information to the volume
> - Nova attaches volume with OS-Brick using returned connection
>   information.
>
> So I agree, it's not a blockRebase operation, just a change in the
> volume that is used.
>
> Regards,
> Gorka.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Matt Riedemann

On 4/9/2018 3:51 AM, Gorka Eguileor wrote:

As I see it, the process would look something like this:

- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
   attaching a new volume)
- Cinder would:
   - Ensure only that node is connected to the volume
   - Terminate connection to the original volume
   - If we can do optimized volume creation:
 - If encrypted volume we create a copy of the encryption key on
   Barbican or copy the ID field from the DB and ensure we don't
   delete the Barbican key on the delete.
 - Create new volume from image
 - Swap DB fields to preserve the UUID
 - Delete original volume
   - If it cannot do optimized volume creation:
 - Initialize+Attach volume to Cinder node
 - DD the new image into the volume
 - Detach+Terminate volume
   - Initialize connection for the new volume to the Nova node
   - Return connection information to the volume
- Nova attaches volume with OS-Brick using returned connection
   information.

So I agree, it's not a blockRebase operation, just a change in the
volume that is used.


Yeah we're on the same page with respect to the high level changes on 
the nova side.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Gorka Eguileor
On 06/04, Matt Riedemann wrote:
> On 4/6/2018 5:09 AM, Matthew Booth wrote:
> > I think you're talking at cross purposes here: this won't require a
> > swap volume. Apart from anything else, swap volume only works on an
> > attached volume, and as previously discussed Nova will detach and
> > re-attach.
> >
> > Gorka, the Nova api Matt is referring to is called volume update
> > externally. It's the operation required for live migrating an attached
> > volume between backends. It's called swap volume internally in Nova.
>
> Yeah I was hoping we were just having a misunderstanding of what 'swap
> volume' in nova is, which is the blockRebase for an already attached volume
> to the guest, called from cinder during a volume retype or migration.
>
> As for the re-image thing, nova would be detaching the volume from the guest
> prior to calling the new cinder re-image API, and then re-attach to the
> guest afterward - similar to how shelve and unshelve work, and for that
> matter how rebuild works today with non-root volumes.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

Thanks for the clarification.  When I was talking about "swapping" I was
referring to the fact that Nova will have to not only detach the volume
locally using OS-Brick, but it will also need to use new connection
information to do the attach after the volume has been re-imaged.

As I see it, the process would look something like this:

- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
  attaching a new volume)
- Cinder would:
  - Ensure only that node is connected to the volume
  - Terminate connection to the original volume
  - If we can do optimized volume creation:
- If encrypted volume we create a copy of the encryption key on
  Barbican or copy the ID field from the DB and ensure we don't
  delete the Barbican key on the delete.
- Create new volume from image
- Swap DB fields to preserve the UUID
- Delete original volume
  - If it cannot do optimized volume creation:
- Initialize+Attach volume to Cinder node
- DD the new image into the volume
- Detach+Terminate volume
  - Initialize connection for the new volume to the Nova node
  - Return connection information to the volume
- Nova attaches volume with OS-Brick using returned connection
  information.

So I agree, it's not a blockRebase operation, just a change in the
volume that is used.

Regards,
Gorka.
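
[Editor's note: condensing the flow above into code, purely as a hedged
sketch -- every helper name here is an illustrative placeholder, not an
existing Cinder internal.]

    def reimage(context, volume, image_meta, connector):
        ensure_single_connection(volume, connector['host'])
        terminate_connection(volume)          # detach the original volume
        if backend_supports_optimized_create(volume):
            if volume.encryption_key_id:
                copy_encryption_key(volume)   # keep the Barbican key valid
            new = create_volume_from_image(volume.volume_type, image_meta)
            swap_db_fields(volume, new)       # preserve the user-facing UUID
            delete_volume(new)                # row now holds the old backend volume
        else:
            attach_to_cinder_node(volume)
            dd_image_to_volume(volume, image_meta)   # slow generic path
            detach_from_cinder_node(volume)
        return initialize_connection(volume, connector)  # for Nova to attach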

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Matt Riedemann

On 4/6/2018 5:09 AM, Matthew Booth wrote:

I think you're talking at cross purposes here: this won't require a
swap volume. Apart from anything else, swap volume only works on an
attached volume, and as previously discussed Nova will detach and
re-attach.

Gorka, the Nova api Matt is referring to is called volume update
externally. It's the operation required for live migrating an attached
volume between backends. It's called swap volume internally in Nova.


Yeah I was hoping we were just having a misunderstanding of what 'swap 
volume' in nova is, which is the blockRebase for an already attached 
volume to the guest, called from cinder during a volume retype or migration.


As for the re-image thing, nova would be detaching the volume from the 
guest prior to calling the new cinder re-image API, and then re-attach 
to the guest afterward - similar to how shelve and unshelve work, and 
for that matter how rebuild works today with non-root volumes.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Matthew Booth
On 6 April 2018 at 09:31, Gorka Eguileor  wrote:
> On 05/04, Matt Riedemann wrote:
>> On 4/5/2018 3:15 AM, Gorka Eguileor wrote:
>> > But just to be clear, Nova will have to initialize the connection with
>> > the re-imaged volume and attach it again to the node, as in all cases
>> > (except when defaulting to downloading the image and dd-ing it to the
>> > volume) the result will be a new volume in the backend.
>>
>> Yeah I think I pointed this out earlier in this thread on what I thought the
>> steps would be on the nova side with respect to creating a new empty
>> attachment to keep the volume 'reserved' while we delete the old attachment,
>> re-image the volume, and then update the volume attachment for the new
>> connection. I think that would be similar to how shelve and unshelve works
>> in nova.
>>
>> Would this really require a swap volume call from Cinder? I'd hope not since
>> swap volume in itself is a pretty gross operation on the nova side.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>
> Hi Matt,
>
> Yes, it will require a volume swap, with the worst case scenario
> exception where we dd the image into the volume.

I think you're talking at cross purposes here: this won't require a
swap volume. Apart from anything else, swap volume only works on an
attached volume, and as previously discussed Nova will detach and
re-attach.

Gorka, the Nova api Matt is referring to is called volume update
externally. It's the operation required for live migrating an attached
volume between backends. It's called swap volume internally in Nova.

Matt

>
> In the same way that anyone would expect a re-imaging to preserve the
> volume id, one would also expect it to behave like creating a new volume
> from the same image: be as fast and take up only as much space on the
> backend.
>
> And to do so we have to use existing optimized mechanisms that will only
> work when creating a new volume.
>
> The alternative would be to have the worst case scenario as the default
> (attach and dd the image) and make *ALL* Cinder drivers implement the
> optimized mechanism where they can efficiently re-image a volume.  I
> can't speak for the Cinder team, but I for one would oppose this
> alternative.
>
> Cheers,
> Gorka.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Gorka Eguileor
On 05/04, Matt Riedemann wrote:
> On 4/5/2018 3:15 AM, Gorka Eguileor wrote:
> > But just to be clear, Nova will have to initialize the connection with
> > the re-imagined volume and attach it again to the node, as in all cases
> > (except when defaulting to downloading the image and dd-ing it to the
> > volume) the result will be a new volume in the backend.
>
> Yeah I think I pointed this out earlier in this thread on what I thought the
> steps would be on the nova side with respect to creating a new empty
> attachment to keep the volume 'reserved' while we delete the old attachment,
> re-image the volume, and then update the volume attachment for the new
> connection. I think that would be similar to how shelve and unshelve works
> in nova.
>
> Would this really require a swap volume call from Cinder? I'd hope not since
> swap volume in itself is a pretty gross operation on the nova side.
>
> --
>
> Thanks,
>
> Matt
>

Hi Matt,

Yes, it will require a volume swap, with the worst case scenario
exception where we dd the image into the volume.

In the same way that anyone would expect a re-imaging to preserve the
volume id, one would also expect it to behave like creating a new volume
from the same image: be as fast and take up only as much space on the
backend.

And to do so we have to use existing optimized mechanisms that will only
work when creating a new volume.

The alternative would be to have the worst case scenario as the default
(attach and dd the image) and make *ALL* Cinder drivers implement the
optimized mechanism where they can efficiently re-image a volume.  I
can't speak for the Cinder team, but I for one would oppose this
alternative.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-05 Thread Matt Riedemann

On 4/5/2018 3:15 AM, Gorka Eguileor wrote:

But just to be clear, Nova will have to initialize the connection with
the re-imaged volume and attach it again to the node, as in all cases
(except when defaulting to downloading the image and dd-ing it to the
volume) the result will be a new volume in the backend.


Yeah I think I pointed this out earlier in this thread on what I thought 
the steps would be on the nova side with respect to creating a new empty 
attachment to keep the volume 'reserved' while we delete the old 
attachment, re-image the volume, and then update the volume attachment 
for the new connection. I think that would be similar to how shelve and 
unshelve works in nova.


Would this really require a swap volume call from Cinder? I'd hope not 
since swap volume in itself is a pretty gross operation on the nova side.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-05 Thread Gorka Eguileor
On 04/04, Matt Riedemann wrote:
> On 4/2/2018 6:59 AM, Gorka Eguileor wrote:
> > I can only see one benefit from implementing this feature in Cinder
> > versus doing it in Nova, and that is that we can preserve the volume's
> > UUID, but I don't think this is even relevant for this use case, so why
> > is it better to implement this in Cinder than in Nova?
>
> With a new image, the volume_image_metadata in the volume would also be
> wrong, and I don't think nova should (or even can) update that information.
> So nova re-imaging the volume doesn't seem like a good fit to me given
> Cinder "owns" the volume along with any metadata about it.
>
> If Cinder isn't agreeable to this new re-image API, then I think we're stuck

Hi Matt,

I didn't mean to imply that the Cinder team is against this proposal, I
just want to make sure that Cinder is the right place to do it and that
we will actually get some benefits from doing it in Cinder, because
right now I don't see that many...



> with the original proposal of creating a new volume and swapping out the
> root disk, along with all of the problems that can arise from that (original
> volume type is gone, tenant goes over-quota, what do we do with the original
> volume (delete it?), etc).
>
> --
>
> Thanks,
>
> Matt
>

This is what I thought the Nova alternative was, so that's why I didn't
understand the image metadata issue.

For clarification, the original volume type cannot be gone, as the type
delete operation prevents used volume types from being deleted, and if for
some reason it were gone (though I don't see how) Cinder would find
itself with the exact same problem, so there's no difference here.

The flow you are describing is basically what the generic implementation
for that functionality would do in Cinder:

- Create a new volume from image using the same volume type
- Swap the volume information like we do in the live migration case
- Delete the original volume
- Nova will have to swap the root volume (request new connection
  information for that volume and attach it to the node).

Because the alternative is for Cinder to download the image and dd it
into the original volume, which breaks all the optimizations that Cinder
has for speed and storage saving in the backend (there would be no
cloning).

So reading your response I expand the benefits to 2 if done by Cinder:

- Preserve volume UUID
- Remove unlikely race condition of someone deleting the volume type
  between Nova deleting the original volume and creating the new one (in
  this order to avoid the quota issue) when there is no other volume
  using that volume type.

I guess the user-facing volume UUID preservation is a good enough reason
to have this API in Cinder, as one would assume re-imaging a volume
would never result in having a new volume ID.

But just to be clear, Nova will have to initialize the connection with
the re-imaged volume and attach it again to the node, as in all cases
(except when defaulting to downloading the image and dd-ing it to the
volume) the result will be a new volume in the backend.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-04 Thread Matt Riedemann

On 4/2/2018 6:59 AM, Gorka Eguileor wrote:

I can only see one benefit from implementing this feature in Cinder
versus doing it in Nova, and that is that we can preserve the volume's
UUID, but I don't think this is even relevant for this use case, so why
is it better to implement this in Cinder than in Nova?


With a new image, the volume_image_metadata in the volume would also be 
wrong, and I don't think nova should (or even can) update that 
information. So nova re-imaging the volume doesn't seem like a good fit 
to me given Cinder "owns" the volume along with any metadata about it.


If Cinder isn't agreeable to this new re-image API, then I think we're 
stuck with the original proposal of creating a new volume and swapping 
out the root disk, along with all of the problems that can arise from 
that (original volume type is gone, tenant goes over-quota, what do we 
do with the original volume (delete it?), etc).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-02 Thread Gorka Eguileor
On 29/03, Sean McGinnis wrote:
> >   This is the spec [0] about rebuilding a volume-backed server.
> > The question raised in the spec is about how to handle the root volume.
> > Finally, in the Nova team, we think the cleanest / best solution to this
> > is to add a volume action API to Cinder for re-imaging the volume. Once
> > that is available in a new Cinder v3 microversion, Nova can use it. The
> > reason I
> > ...
> >   So the Nova team wants Cinder to provide the re-image API. But I see a
> > spec about volume revert by snapshot[1], which is useful for the rebuild
> > operation. In short, I have two ideas: one is to change the volume revert
> > by snapshot spec into a re-image spec, so it can not only revert the
> > volume by snapshot but also re-image a volume whose image size is greater
> > than 0; the other idea is to add a re-image-only spec, which can only
> > re-image a volume whose image size is greater than 0.
> >
>
> I do not think changing the revert to snapshot implementation is appropriate
> here. There may be some cases where this can get the desired result, but there
> is no guarantee that there is a snapshot on the volume's base image state to
> revert to. It also would not make sense to overload this functionality to
> "revert to snapshot if you can, otherwise do all this other stuff instead."
>
> This would need to be a new API (microversioned) to add a reimage call. I
> wouldn't expect implementation to be too difficult as we already have that
> functionality for new volumes. We would just need to figure out the most
> appropriate way to take an already in-use volume, detach it, rewrite the 
> image,
> then reattach it.
>

Hi,

The implementation may be more complex that we think, as we have 4
create volume from image mechanisms we have to consider:

- When Glance is using Cinder as backend
- When using Glance image location to do cloning
- When using Cinder cache and we do cloning
- Basic case where we download the image, attach the volume, and copy
  the data.

The only simple, yet efficient, solution I can see is calling the
driver's delete volume method (without soft-deleting it from the DB),
clear the volume DB information of the image metadata, and then run the
create volume from image flow with the same volume information but the
new image metadata.
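
[Editor's note: the generic approach described above, as a short hedged
sketch; the helper names are illustrative placeholders, not actual Cinder
internals.]

    def reimage_generic(context, volume, new_image_meta):
        driver.delete_volume(volume)          # backend delete; keep the DB row
        clear_volume_image_metadata(volume)   # drop the stale Glance metadata
        # Re-run the normal create-volume-from-image flow, which already
        # covers all four creation mechanisms listed above.
        run_create_volume_from_image_flow(context, volume, new_image_meta)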

I can only see one benefit from implementing this feature in Cinder
versus doing it in Nova, and that is that we can preserve the volume's
UUID, but I don't think this is even relevant for this use case, so why
is it better to implement this in Cinder than in Nova?

Cheers,
Gorka.


> Ideally, from my perspective, Nova would take care of the detach/attach 
> portion
> and Cinder would only need to take care of imaging the volume.
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-30 Thread Jay S Bryant



On 3/29/2018 8:36 PM, Matt Riedemann wrote:

On 3/29/2018 6:50 PM, Sean McGinnis wrote:
Maybe we can add a "Reimaging" state to the volume? Then Nova could poll
for it to go from that back to Available?


That would be fine with me, and maybe similar to how 'extending' and 
'retyping' work for an attached volume?


Nova wouldn't wait for the volume to go to 'available', we don't want 
it to go to 'available', we'd just wait for it to go back to 
'reserved'. During a rebuild the instance still needs to keep the 
volume logically attached to it so another instance can't grab it.



This all sounds reasonable to me.

Thanks for hashing it out guys!

Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Matt Riedemann

On 3/29/2018 6:50 PM, Sean McGinnis wrote:

Maybe we can add a "Reimaging" state to the volume? Then Nova could poll for
it to go from that back to Available?


That would be fine with me, and maybe similar to how 'extending' and 
'retyping' work for an attached volume?


Nova wouldn't wait for the volume to go to 'available', we don't want it 
to go to 'available', we'd just wait for it to go back to 'reserved'. 
During a rebuild the instance still needs to keep the volume logically 
attached to it so another instance can't grab it.
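
[Editor's note: the Nova-side wait could look like the sketch below,
assuming a 'reimaging' status is added as discussed in this thread; it is
written against python-cinderclient, with everything else illustrative.]

    import time

    def wait_for_reimage(cinder, volume_id, timeout=300):
        deadline = time.time() + timeout
        while time.time() < deadline:
            vol = cinder.volumes.get(volume_id)
            if vol.status == 'error':
                raise RuntimeError('re-image failed for %s' % volume_id)
            if vol.status == 'reserved':  # empty attachment keeps it reserved
                return vol
            time.sleep(2)                 # still 'reimaging'; poll again
        raise RuntimeError('timed out waiting for volume %s' % volume_id)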


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Sean McGinnis
> 
> >
> > Ideally, from my perspective, Nova would take care of the detach/attach
> > portion and Cinder would only need to take care of imaging the volume.
> 
> Agree. :) And yeah, I pointed this out in the nova spec for volume-backed
> rebuild also. I think nova can basically handle this like it does for shelve
> today, and we'd do something like this:
> 
> 1. disconnect the volume from the host
> 2. create a new empty volume attachment for the volume and instance - this
> is needed so the volume stays 'reserved' while we re-image it
> 3. delete the old volume attachment
> 4. call the new cinder re-image API
> 5. once the volume is available (TODO: how would we know?)

Maybe we can add a "Reimaging" state to the volume? Then Nova could poll for it
to go from that back to Available? Since Nova is driving things, I would be
hesitant to expect and assume that Cinder is appropriately configured to call
back in to Nova.

Or a notification?

Or...?

> 6. re-attach the volume by updating the attachment with the host connector,
> connect on the host, and complete the attachment (marks the volume as in-use
> again)
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Matt Riedemann

On 3/29/2018 9:28 AM, Sean McGinnis wrote:

I do not think changing the revert to snapshot implementation is appropriate
here. There may be some cases where this can get the desired result, but there
is no guarantee that there is a snapshot on the volume's base image state to
revert to. It also would not make sense to overload this functionality to
"revert to snapshot if you can, otherwise do all this other stuff instead."



Agree.


This would need to be a new API (microversioned) to add a reimage call. I
wouldn't expect implementation to be too difficult as we already have that
functionality for new volumes. We would just need to figure out the most
appropriate way to take an already in-use volume, detach it, rewrite the image,
then reattach it.


Agree.



Ideally, from my perspective, Nova would take care of the detach/attach portion
and Cinder would only need to take care of imaging the volume.


Agree. :) And yeah, I pointed this out in the nova spec for 
volume-backed rebuild also. I think nova can basically handle this like 
it does for shelve today, and we'd do something like this:


1. disconnect the volume from the host
2. create a new empty volume attachment for the volume and instance - 
this is needed so the volume stays 'reserved' while we re-image it

3. delete the old volume attachment
4. call the new cinder re-image API
5. once the volume is available (TODO: how would we know?)
6. re-attach the volume by updating the attachment with the host 
connector, connect on the host, and complete the attachment (marks the 
volume as in-use again)
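
[Editor's note: steps 1-6 sketched with python-cinderclient; the
volumes.reimage() call is the *proposed* API, not something that existed
when this was written, and the surrounding names (bdm, connector,
instance_uuid) are illustrative.]

    # 1. (os-brick disconnect on the compute host happens before this)
    # 2. A new empty attachment (no connector) keeps the volume 'reserved'.
    new_att = cinder.attachments.create(volume_id, None, instance_uuid)
    # 3. Delete the old attachment.
    cinder.attachments.delete(bdm.attachment_id)
    # 4. Call the proposed re-image API (hypothetical at the time).
    cinder.volumes.reimage(volume_id, image_id)
    # 5. Wait for the volume to drop back to 'reserved' (see the polling
    #    sketch earlier in this thread).
    # 6. Re-attach: update the attachment with the host connector, connect
    #    on the host, then mark the attachment complete (volume -> in-use).
    cinder.attachments.update(new_att['id'], connector)
    cinder.attachments.complete(new_att['id'])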


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Sean McGinnis
>   This is the spec [0] about rebuilding a volume-backed server.
> The question raised in the spec is about how to handle the root volume.
> Finally, in the Nova team, we think the cleanest / best solution to this is
> to add a volume action API to Cinder for re-imaging the volume. Once that
> is available in a new Cinder v3 microversion, Nova can use it. The reason I
> ...
>   So the Nova team wants Cinder to provide the re-image API. But I see a
> spec about volume revert by snapshot[1], which is useful for the rebuild
> operation. In short, I have two ideas: one is to change the volume revert
> by snapshot spec into a re-image spec, so it can not only revert the volume
> by snapshot but also re-image a volume whose image size is greater than 0;
> the other idea is to add a re-image-only spec, which can only re-image a
> volume whose image size is greater than 0.
> 

I do not think changing the revert to snapshot implementation is appropriate
here. There may be some cases where this can get the desired result, but there
is no guarantee that there is a snapshot on the volume's base image state to
revert to. It also would not make sense to overload this functionality to
"revert to snapshot if you can, otherwise do all this other stuff instead."

This would need to be a new API (microversioned) to add a reimage call. I
wouldn't expect implementation to be too difficult as we already have that
functionality for new volumes. We would just need to figure out the most
appropriate way to take an already in-use volume, detach it, rewrite the image,
then reattach it.

Ideally, from my perspective, Nova would take care of the detach/attach portion
and Cinder would only need to take care of imaging the volume.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread 李杰
Hi,all


  This is the spec [0] about rebuilding a volume-backed server. The question
raised in the spec is about how to handle the root volume. Finally, in the
Nova team, we think the cleanest / best solution to this is to add a volume
action API to Cinder for re-imaging the volume. Once that is available in a
new Cinder v3 microversion, Nova can use it. The reasons I think this should
be done in Cinder, with the volume re-imaged there, are that (1) it's cleaner
from the Nova side and (2) Cinder is then in control of how that re-image
should happen, along with any details it needs to update; e.g., the volume's
"volume_image_metadata" information would need to be updated. We really
aren't suited to do the volume create/delete/swap orchestration thing since
that entails issues with the volume type being gone, going over quota, what
to do about deleting the old volume, etc.
  So the Nova team wants Cinder to provide the re-image API. But I see a
spec about volume revert by snapshot[1], which is useful for the rebuild
operation. In short, I have two ideas: one is to change the volume revert by
snapshot spec into a re-image spec, so it can not only revert the volume by
snapshot but also re-image a volume whose image size is greater than 0; the
other idea is to add a re-image-only spec, which can only re-image a volume
whose image size is greater than 0.
  What do you think of the two ideas? Any suggestions are welcome. Thank you!
  Note: the instance snapshot for an image-backed server has an image size
greater than 0, but a volume-backed server's image size is equal to 0.
  Re:
  [0] https://review.openstack.org/#/c/532407/
  [1] https://specs.openstack.org/openstack/cinder-specs/specs/pike/cinder-volume-revert-by-snapshot.html

Best Regards
Rambo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Matt Riedemann

On 2/27/2018 6:34 PM, John Griffith wrote:
​ So replication is set on create of the volume, you could have a rule 
that keeps the two features mutually exclusive, but I'm still not quite 
sure why that would be a requirement here.  ​


Yeah I didn't think of that either, the attachment record has the 
instance uuid in it right? So cinder could just iterate the list of 
attachments for the volume and send multiple requests to nova.
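
[Editor's note: a hedged sketch of that fan-out with python-novaclient; the
'volume-reconnected' event name is made up -- a new event type would have
to be added to the os-server-external-events API first.]

    # One external event per attachment; Nova then refreshes each guest's
    # connection. 'nova' is an authenticated novaclient instance.
    events = [{'server_uuid': att.instance_uuid,
               'name': 'volume-reconnected',   # hypothetical event type
               'tag': volume.id}
              for att in volume.volume_attachment]
    nova.server_external_events.create(events)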


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread John Griffith
On Tue, Feb 27, 2018 at 9:34 AM, Walter Boring  wrote:

> I think you might be able to get away with just calling os-brick's
> connect_volume again without the need to call disconnect_volume first.
>  calling disconnect_volume wouldn't be good for volumes that are being
> used, just to refresh the connection_info on that volume.
>
Hmm... but then you'd have an orphaned connection left hanging around for
the old connection, no?


>
> On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann 
> wrote:
>
>> On 2/27/2018 10:02 AM, Matthew Booth wrote:
>>
>>> Sounds like the work Nova will have to do is identical to volume update
>>> (swap volume). i.e. Change where a disk's backing store is without actually
>>> changing the disk.
>>>
>>
>> That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the
>> libvirt driver supports swap volume, but I assume all other virt drivers
>> could support this generically.
>>
>>
>>> Multi-attach! There might be more than 1 instance per volume, and we
>>> can't currently support volume update for multi-attached volumes.
>>>
>> ​Not sure I follow... why not?  It's just refreshing connections, only
difference is you might have to do this "n" times instead of once?​


>
>> Good point - cinder would likely need to reject a request to replicate an
>> in-use multiattach volume if the volume has more than one attachment.
>
> ​So replication is set on create of the volume, you could have a rule that
keeps the two features mutually exclusive, but I'm still not quite sure why
that would be a requirement here.  ​


>
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Walter Boring
I think you might be able to get away with just calling os-brick's
connect_volume again without the need to call disconnect_volume first.
 calling disconnect_volume wouldn't be good for volumes that are being
used, just to refresh the connection_info on that volume.

On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann  wrote:

> On 2/27/2018 10:02 AM, Matthew Booth wrote:
>
>> Sounds like the work Nova will have to do is identical to volume update
>> (swap volume). i.e. Change where a disk's backing store is without actually
>> changing the disk.
>>
>
> That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the
> libvirt driver supports swap volume, but I assume all other virt drivers
> could support this generically.
>
>
>> Multi-attach! There might be more than 1 instance per volume, and we
>> can't currently support volume update for multi-attached volumes.
>>
>
> Good point - cinder would likely need to reject a request to replicate an
> in-use multiattach volume if the volume has more than one attachment.
>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Matt Riedemann

On 2/27/2018 10:02 AM, Matthew Booth wrote:
Sounds like the work Nova will have to do is identical to volume update 
(swap volume). i.e. Change where a disk's backing store is without 
actually changing the disk.


That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the 
libvirt driver supports swap volume, but I assume all other virt drivers 
could support this generically.




Multi-attach! There might be more than 1 instance per volume, and we 
can't currently support volume update for multi-attached volumes.


Good point - cinder would likely need to reject a request to replicate 
an in-use multiattach volume if the volume has more than one attachment.
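
A rough sketch of what that rejection might look like on the Cinder side
(the attribute and exception names here are illustrative, not necessarily
the actual model fields):

    from cinder import exception

    def _check_refresh_allowed(volume):
        # Per the discussion above: refuse the refresh for in-use
        # multiattach volumes with more than one attachment.
        if volume.multiattach and len(volume.volume_attachment) > 1:
            raise exception.InvalidVolume(
                reason='cannot refresh a multiattach volume with more '
                       'than one attachment')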


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Matthew Booth
Couple of thoughts:

Sounds like the work Nova will have to do is identical to volume update
(swap volume). i.e. Change where a disk's backing store is without actually
changing the disk.

Multi-attach! There might be more than 1 instance per volume, and we can't
currently support volume update for multi-attached volumes.

Matt

On 27 February 2018 at 09:45, Matt Riedemann  wrote:

> On 2/26/2018 9:52 PM, John Griffith wrote:
>
>> ​Yeah, it seems like this would be pretty handy with what's there.  So
>> are folks good with that?  Wanted to make sure there's nothing contentious
>> there before I propose a spec on the Nova and Cinder sides. If you think it
>> seems at least worth proposing I'll work on it and get something ready as a
>> welcome home from Dublin gift for everyone :)
>>
>
> I'll put it on the nova/cinder PTG etherpad agenda for Thursday morning.
> This seems like simple plumbing on the nova side, so not any major problems
> from me.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Matt Riedemann

On 2/26/2018 9:52 PM, John Griffith wrote:
​Yeah, it seems like this would be pretty handy with what's there.  So 
are folks good with that?  Wanted to make sure there's nothing 
contentious there before I propose a spec on the Nova and Cinder sides. 
If you think it seems at least worth proposing I'll work on it and get 
something ready as a welcome home from Dublin gift for everyone :)


I'll put it on the nova/cinder PTG etherpad agenda for Thursday morning. 
This seems like simple plumbing on the nova side, so not any major 
problems from me.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:47 PM, Matt Riedemann  wrote:

> On 2/26/2018 9:28 PM, John Griffith wrote:
>
>> I'm also wondering how much of the extend actions we can leverage here,
>> but I haven't looked through all of that yet.​
>>
>
> The os-server-external-events API in nova is generic. We'd just add a new
> microversion to register a new tag for this event. Like the extend volume
> event, the volume ID would be provided as input to the API and nova would
> use that to identify the instance + volume to refresh on the compute host.
>
> We'd also register a new instance action / event record so that users
> could poll the os-instance-actions API for completion of the operation.

​Yeah, it seems like this would be pretty handy with what's there.  So are
folks good with that?  Wanted to make sure there's nothing contentious
there before I propose a spec on the Nova and Cinder sides.  If you think
it seems at least worth proposing I'll work on it and get something ready
as a welcome home from Dublin gift for everyone :)
​


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread Matt Riedemann

On 2/26/2018 9:28 PM, John Griffith wrote:
I'm also wondering how much of the extend actions we can leverage here, 
but I haven't looked through all of that yet.​


The os-server-external-events API in nova is generic. We'd just add a 
new microversion to register a new tag for this event. Like the extend 
volume event, the volume ID would be provided as input to the API and 
nova would use that to identify the instance + volume to refresh on the 
compute host.


We'd also register a new instance action / event record so that users 
could poll the os-instance-actions API for completion of the operation.
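
For illustration, the event Cinder would send might look something like
this ("volume-refresh-connection" is a hypothetical name for the new tag;
the UUIDs are examples):

    # Body of a POST to Nova's os-server-external-events API.
    instance_uuid = 'aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb'
    volume_id = 'cccccccc-4444-5555-6666-dddddddddddd'
    body = {
        'events': [{
            'name': 'volume-refresh-connection',  # hypothetical new tag
            'server_uuid': instance_uuid,
            'tag': volume_id,
        }]
    }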


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:13 PM, Matt Riedemann  wrote:

> On 2/26/2018 8:09 PM, John Griffith wrote:
>
>> I'm interested in looking at creating a mechanism to "refresh" all of the
>> existing/current attachments as part of the Cinder Failover process.
>>
>
> What would be involved on the nova side for the refresh? I'm guessing
> disconnect/connect the volume via os-brick (or whatever for non-libvirt
> drivers), resulting in a new host connector from os-brick that nova would
> use to update the existing volume attachment for the volume/server instance
> combo?

​Yep, that's pretty much exactly what I'm thinking about / looking at.  I'm
also wondering how much of the extend actions we can leverage here, but I
haven't looked through all of that yet.​


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread Matt Riedemann

On 2/26/2018 8:09 PM, John Griffith wrote:
I'm interested in looking at creating a mechanism to "refresh" all of 
the existing/current attachments as part of the Cinder Failover process.


What would be involved on the nova side for the refresh? I'm guessing 
disconnect/connect the volume via os-brick (or whatever for non-libvirt 
drivers), resulting in a new host connector from os-brick that nova 
would use to update the existing volume attachment for the volume/server 
instance combo?
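
For reference, getting a new host connector from os-brick looks roughly
like this (a sketch; the argument values are illustrative):

    from os_brick.initiator import connector

    # Gather this host's connector properties (iSCSI initiator name,
    # multipath support, etc.); nova would send these to cinder to
    # update the existing volume attachment.
    props = connector.get_connector_properties(
        root_helper='sudo', my_ip='192.168.0.10',
        multipath=True, enforce_multipath=False)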


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
Hey Everyone,

Something I've been looking at with Cinder's replication (sort of the next
step in the evolution if you will) is the ability to refresh/renew in-use
volumes that were part of a migration event.

We do something similar with extend-volume on the Nova side through the use
of Instance Actions I believe, and I'm wondering how folks would feel about
the same sort of thing being added upon failover/failback for replicated
Cinder volumes?

If you're not familiar, Cinder allows a volume to be replicated to multiple
physical backend devices, and in the case of a DR situation an Operator can
failover a backend device (or even a single volume).  This process results
in Cinder making some calls to the respective backend device, which does its
magic, and updating the Cinder volume model with new attachment info.

This works great, except for the case of users that have a bunch of in-use
volumes on that particular backend.  We don't currently do anything to
refresh/update them, so it's a manual process of running through a
detach/attach loop.

I'm interested in looking at creating a mechanism to "refresh" all of the
existing/current attachments as part of the Cinder Failover process.

Curious if anybody has any thoughts on this, or if anyone has already done
something related to this topic?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-11 Thread Duncan Thomas
I'm not sure there's a general agreement about removing the fixed key
manager code in the future. It serves several purposes, testing being the
most significant one, though it also covers some people's use case where,
from a security PoV, something better might not be worth the complexity
trade-off. If this work is a backdoor effort to remove the functionality,
rather than purely a code cleanup effort, then it should definitely be
clearly presented as such.

On 11 Oct 2017 9:50 pm, "Alan Bishop"  wrote:

> On Wed, Oct 11, 2017 at 1:17 PM, Dave McCowan (dmccowan)
>  wrote:
> > Hi Alan--
> > Since a fixed-key implementation is not secure, I would prefer not
> > adding it to Castellan.  Our desire is that Castellan can be a
> best-practice
> > project to encourage operators to use key management securely.
> > I'm all for consolidating code and providing good migration paths
> from
> > ConfKeyManager to Castellan.
> > Can we create a new oslo project to facilitate this?  Something like
> > oslo.fixed_key_manager.
> > I would rather keep a fixed_key implementation out of Castellan if
> > possible.
>
> Hi Dave,
>
> While I totally take your point about keeping the "deficient" (I'm being
> charitable) ConfKeyManager code out of Castellan, I view it as a short
> term tactical move. Everyone is looking forward to deprecating the code.
> The key (no pun intended) to getting there is providing a migration path
> for users (there are significant ones) that have existing deployments
> that use the fixed-key.
>
> Because of the circumstances, I feel there would be resistance to the
> idea of creating an entirely new oslo project that: a) consists of code
> that everyone knows to be deficient, and b) will be deprecated soon.
>
> I have another motive for temporarily moving the code into Castellan,
> and it pertains to providing a migration path to Barbican. With everything
> consolidated in Castellan, a wrapper class could provide a seamless way
> of handling KeyManager.get() requests for the all-zeros fixed-key ID,
> even when Barbican is the key manager. This would allow users to switch
> to Barbican, and still have any get() requests for the legacy fixed-key
> be resolved by the ConfKeyManager.
>
> All of this could be implemented wholly within Castellan, and be totally
> transparent to the user, Nova, Cinder, and the Barbican implementation
> in barbican_key_manager.py.
>
> As a final note, we could add all sorts of warnings to any code added
> to Castellan, perhaps even name the file insecure_key_manager.py ;-)
>
> Alan
>
>
> > --Dave
> >
> > There is an ongoing effort to deprecate the ConfKeyManager, but care
> > must be taken when migrating existing ConfKeyManager deployments to
> > Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
> > but the process of switching from one key manager to another will need
> > to be done smoothly to ensure encrypted volumes continue to function
> > during the migration period.
> >
> > One thing that will help the migration process is consolidating the
> > two ConfKeyManager implementations (one in Cinder and one in Nova).
> > The two are functionally identical, as dictated by the need to derive
> > the exact same secret from the fixed_key. While it may seem counter-
> > intuitive, adding a ConfKeyManager implementation to Castellan will
> > facilitate the process of deprecating them in Cinder and Nova.
> >
> > To that end, I identified a series of small steps to get us there:
> >
> > 1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
> > so they are identical (right now their help texts are slightly
> > different). This step avoids triggering a DuplicateOptError exception
> > in the next step.
> >
> > 2) Add a ConfKeyManager implementation to Castellan. This essentially
> > involves copying in one of the existing implementations (either Cinder's
> > or Nova's).
> >
> > 3) Replace Cinder's and Nova's implementations with references to the
> > one in Castellan. This can be done in a way that retains compatibility
> > with the key_manager "backend" (was "api_class") config options
> > currently used by Cinder and Nova. The code in
> > cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
> > will collapse down to this:
> >
> >   from castellan.key_manager import conf_key_manager
> >
> >   class ConfKeyManager(conf_key_manager.ConfKeyManager):
> >       pass
> >
> > Having a common ConfKeyManager implementation will make it much
> > easier to support migrating things to Barbican, and that's an important
> > step toward the goal of deprecating the ConfKeyManager entirely.
> >
> > Please let me know your thoughts, as I plan to begin proposing patches.
> >
> > Regards,
> >
> > Alan Bishop
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-11 Thread Alan Bishop
On Wed, Oct 11, 2017 at 1:17 PM, Dave McCowan (dmccowan)
 wrote:
> Hi Alan--
> Since a fixed-key implementation is not secure, I would prefer not
> adding it to Castellan.  Our desire is that Castellan can be a best-practice
> project to encourage operators to use key management securely.
> I'm all for consolidating code and providing good migration paths from
> ConfKeyManager to Castellan.
> Can we create a new oslo project to facilitate this?  Something like
> oslo.fixed_key_manager.
> I would rather keep a fixed_key implementation out of Castellan if
> possible.

Hi Dave,

While I totally take your point about keeping the "deficient" (I'm being
charitable) ConfKeyManager code out of Castellan, I view it as a short
term tactical move. Everyone is looking forward to deprecating the code.
The key (no pun intended) to getting there is providing a migration path
for users (there are significant ones) that have existing deployments
that use the fixed-key.

Because of the circumstances, I feel there would be resistance to the
idea of creating an entirely new oslo project that: a) consists of code
that everyone knows to be deficient, and b) will be deprecated soon.

I have another motive for temporarily moving the code into Castellan,
and it pertains to providing a migration path to Barbican. With everything
consolidated in Castellan, a wrapper class could provide a seamless way
of handling KeyManager.get() requests for the all-zeros fixed-key ID,
even when Barbican is the key manager. This would allow users to switch
to Barbican, and still have any get() requests for the legacy fixed-key
be resolved by the ConfKeyManager.

All of this could be implemented wholly within Castellan, and be totally
transparent to the user, Nova, Cinder, and the Barbican implementation
in barbican_key_manager.py.
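
Roughly, the wrapper idea could look like this (an untested sketch -- the
class name is illustrative, and castellan's conf_key_manager module is the
proposed addition, not something that exists today):

    from castellan.key_manager import barbican_key_manager
    from castellan.key_manager import conf_key_manager  # proposed module

    # ConfKeyManager's legacy fixed-key always uses the all-zeros ID.
    FIXED_KEY_ID = '00000000-0000-0000-0000-000000000000'

    class MigrationKeyManager(barbican_key_manager.BarbicanKeyManager):
        def __init__(self, configuration):
            super(MigrationKeyManager, self).__init__(configuration)
            self.conf_mgr = conf_key_manager.ConfKeyManager(configuration)

        def get(self, context, managed_object_id):
            # Route legacy fixed-key lookups to ConfKeyManager;
            # everything else goes to Barbican as usual.
            if managed_object_id == FIXED_KEY_ID:
                return self.conf_mgr.get(context, managed_object_id)
            return super(MigrationKeyManager, self).get(
                context, managed_object_id)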

As a final note, we could add all sorts of warnings to any code added
to Castellan, perhaps even name the file insecure_key_manager.py ;-)

Alan


> --Dave
>
> There is an ongoing effort to deprecate the ConfKeyManager, but care
> must be taken when migrating existing ConfKeyManager deployments to
> Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
> but the process of switching from one key manager to another will need
> to be done smoothly to ensure encrypted volumes continue to function
> during the migration period.
>
> One thing that will help the migration process is consolidating the
> two ConfKeyManager implementations (one in Cinder and one in Nova).
> The two are functionally identical, as dictated by the need to derive
> the exact same secret from the fixed_key. While it may seem counter-
> intuitive, adding a ConfKeyManager implementation to Castellan will
> facilitate the process of deprecating them in Cinder and Nova.
>
> To that end, I identified a series of small steps to get us there:
>
> 1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
> so they are identical (right now their help texts are slightly
> different). This step avoids triggering a DuplicateOptError exception
> in the next step.
>
> 2) Add a ConfKeyManager implementation to Castellan. This essentially
> involves copying in one of the existing implementations (either Cinder's
> or Nova's).
>
> 3) Replace Cinder's and Nova's implementations with references to the
> one in Castellan. This can be done in a way that retains compatibility
> with the key_manager "backend" (was "api_class") config options
> currently used by Cinder and Nova. The code in
> cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
> will collapse down to this:
>
>   from castellan.key_manager import conf_key_manager
>
>   class ConfKeyManager(conf_key_manager.ConfKeyManager):
>       pass
>
> Having a common ConfKeyManager implementation will make it much
> easier to support migrating things to Barbican, and that's an important
> step toward the goal of deprecating the ConfKeyManager entirely.
>
> Please let me know your thoughts, as I plan to begin proposing patches.
>
> Regards,
>
> Alan Bishop
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-11 Thread Dave McCowan (dmccowan)
Hi Alan--
Since a fixed-key implementation is not secure, I would prefer not adding 
it to Castellan.  Our desire is that Castellan can be a best-practice project 
to encourage operators to use key management securely.
I'm all for consolidating code and providing good migration paths from 
ConfKeyManager to Castellan.
Can we create a new oslo project to facilitate this?  Something like 
oslo.fixed_key_manager.
I would rather keep a fixed_key implementation out of Castellan if possible.
--Dave

There is an ongoing effort to deprecate the ConfKeyManager, but care
must be taken when migrating existing ConfKeyManager deployments to
Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
but the process of switching from one key manager to another will need
to be done smoothly to ensure encrypted volumes continue to function
during the migration period.

One thing that will help the migration process is consolidating the
two ConfKeyManager implementations (one in Cinder and one in Nova).
The two are functionally identical, as dictated by the need to derive
the exact same secret from the fixed_key. While it may seem counter-
intuitive, adding a ConfKeyManager implementation to Castellan will
facilitate the process of deprecating them in Cinder and Nova.

To that end, I identified a series of small steps to get us there:

1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
so they are identical (right now their help texts are slightly
different). This step avoids triggering a DuplicateOptError exception
in the next step.

2) Add a ConfKeyManager implementation to Castellan. This essentially
involves copying in one of the existing implementations (either Cinder's
or Nova's).

3) Replace Cinder's and Nova's implementations with references to the
one in Castellan. This can be done in a way that retains compatibility
with the key_manager "backend" (was "api_class") config options
currently used by Cinder and Nova. The code in
cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
will collapse down to this:

  from castellan.key_manager import conf_key_manager

  class ConfKeyManager(conf_key_manager.ConfKeyManager):
      pass

Having a common ConfKeyManager implementation will make it much
easier to support migrating things to Barbican, and that's an important
step toward the goal of deprecating the ConfKeyManager entirely.

Please let me know your thoughts, as I plan to begin proposing patches.

Regards,

Alan Bishop

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-10 Thread Kendall Nelson
Seems really well thought out. Super impressed by the level of detail here.
Consolidation of duplicated code is pretty much always a good idea in my
book. Thanks for getting this rolling.

I am happy to review as you push patches. Might also be good to get Kaitlin
Farr involved too since she knows a lot in this area.

-Kendall (diablo_rojo)

On Tue, Oct 10, 2017 at 9:40 AM Alan Bishop  wrote:

> There is an ongoing effort to deprecate the ConfKeyManager, but care
> must be taken when migrating existing ConfKeyManager deployments to
> Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
> but the process of switching from one key manager to another will need
> to be done smoothly to ensure encrypted volumes continue to function
> during the migration period.
>
> One thing that will help the migration process is consolidating the
> two ConfKeyManager implementations (one in Cinder and one in Nova).
> The two are functionally identical, as dictated by the need to derive
> the exact same secret from the fixed_key. While it may seem counter-
> intuitive, adding a ConfKeyManager implementation to Castellan will
> facilitate the process of deprecating them in Cinder and Nova.
>
> To that end, I identified a series of small steps to get us there:
>
> 1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
> so they are identical (right now their help texts are slightly
> different). This step avoids triggering a DuplicateOptError exception
> in the next step.
>
> 2) Add a ConfKeyManager implementation to Castellan. This essentially
> involves copying in one of the existing implementations (either Cinder's
> or Nova's).
>
> 3) Replace Cinder's and Nova's implementations with references to the
> one in Castellan. This can be done in a way that retains compatibility
> with the key_manager "backend" (was "api_class") config options
> currently used by Cinder and Nova. The code in
> cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
> will collapse down to this:
>
>   from castellan.key_manager import conf_key_manager
>
>   class ConfKeyManager(conf_key_manager.ConfKeyManager):
>       pass
>
> Having a common ConfKeyManager implementation will make it much
> easier to support migrating things to Barbican, and that's an important
> step toward the goal of deprecating the ConfKeyManager entirely.
>
> Please let me know your thoughts, as I plan to begin proposing patches.
>
> Regards,
>
> Alan Bishop
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-10 Thread Sean McGinnis
> 
> To that end, I identified a series of small steps to get us there:
> 
> 1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
> so they are identical (right now their help texts are slightly
> different). This step avoids triggering a DuplicateOptError exception
> in the next step.
> 
> 2) Add a ConfKeyManager implementation to Castellan. This essentially
> involves copying in one of the existing implementations (either Cinder's
> or Nova's).
> 
> 3) Replace Cinder's and Nova's implementations with references to the
> one in Castellan. This can be done in a way that retains compatibility
> with the key_manager "backend" (was "api_class") config options
> currently used by Cinder and Nova. The code in
> cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
> will collapse down to this:
> 
>   from castellan.key_manager import conf_key_manager
> 
>   class ConfKeyManager(conf_key_manager.ConfKeyManager):
>       pass
> 
> Having a common ConfKeyManager implementation will make it much
> easier to support migrating things to Barbican, and that's an important
> step toward the goal of deprecating the ConfKeyManager entirely.
> 
> Please let me know your thoughts, as I plan to begin proposing patches.
> 
> Regards,
> 
> Alan Bishop

Makes sense to me Alan. Thanks for looking into this.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova][castellan] Toward deprecating ConfKeyManager

2017-10-10 Thread Alan Bishop
There is an ongoing effort to deprecate the ConfKeyManager, but care
must be taken when migrating existing ConfKeyManager deployments to
Barbican. The ConfKeyManager's fixed_key secret can be added to Barbican,
but the process of switching from one key manager to another will need
to be done smoothly to ensure encrypted volumes continue to function
during the migration period.

One thing that will help the migration process is consolidating the
two ConfKeyManager implementations (one in Cinder and one in Nova).
The two are functionally identical, as dictated by the need to derive
the exact same secret from the fixed_key. While it may seem counter-
intuitive, adding a ConfKeyManager implementation to Castellan will
facilitate the process of deprecating them in Cinder and Nova.

To that end, I identified a series of small steps to get us there:

1) Unify the "fixed_key" oslo_config definitions in Cinder and Nova
so they are identical (right now their help texts are slightly
different). This step avoids triggering a DuplicateOptError exception
in the next step.

2) Add a ConfKeyManager implementation to Castellan. This essentially
involves copying in one of the existing implementations (either Cinder's
or Nova's).

3) Replace Cinder's and Nova's implementations with references to the
one in Castellan. This can be done in a way that retains compatibility
with the key_manager "backend" (was "api_class") config options
currently used by Cinder and Nova. The code in
cinder/keymgr/conf_key_manager.py and nova/keymgr/conf_key_manager.py
will collapse down to this:

  from castellan.key_manager import conf_key_manager

  class ConfKeyManager(conf_key_manager.ConfKeyManager):
      pass
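
For step 1, the unified option definition might look something like this
(a sketch -- the exact group name, flags, and help text would need to
match what Cinder and Nova actually register):

    from oslo_config import cfg

    key_manager_opts = [
        cfg.StrOpt('fixed_key',
                   secret=True,
                   help='Fixed key returned by the key manager, '
                        'specified in hex.'),
    ]

    def register_opts(conf):
        conf.register_opts(key_manager_opts, group='key_manager')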

Having a common ConfKeyManager implementation will make it much
easier to support migrating things to Barbican, and that's an important
step toward the goal of deprecating the ConfKeyManager entirely.

Please let me know your thoughts, as I plan to begin proposing patches.

Regards,

Alan Bishop
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]May I run iscsiadm --op show & update 100 times?

2017-10-02 Thread Gorka Eguileor
On 02/10, Rikimaru Honjo wrote:
> Hello,
>
> I'd like to discuss the following bug in os-brick.
>
> * os-brick's iscsi initiator unexpectedly reverts node.startup from 
> "automatic" to "manual".
>   https://bugs.launchpad.net/os-brick/+bug/1670237
>
> The important point of this bug is:
>
> When os-brick initializes iscsi connections:
> 1. os-brick will run "iscsiadm -m discovery" command if we use iscsi 
> multipath.

This only happens with a small number of cinder drivers, since most
drivers try to avoid the discovery path due to the number of
disadvantages it presents for a reliable deployment.  The most notorious
issue is that if the path to the discovery portal on the attaching node
is down, you cannot attach the volume no matter how many of the other
paths are up.



> 2. os-brick will update node.startup values to "automatic" if we use iscsi.
> 3. "iscsiadm -m discovery" command will recreate iscsi node repositories.[1]
>As a result, node.startup values of already attached volumes will revert
>to the default (=manual).
>
> Gorka Eguileor and I discussed how to fix this bug[2].
> Our idea is this:
>
> 1. Confirm node.startup values of all the iscsi targets before running 
> discovery.
> 2. Re-update node.startup values of all the iscsi targets after running 
> discovery.
>
> But I'm afraid that this operation will take a long time.
> As a test, I ran the show & update of node.startup values 100 times.
> As a result, it took about 4 seconds.
> When I ran it 200 times, it took about 8 seconds.
> I think this is a little long.
>
> If we use multipath and attach 25 volumes, 100 targets will be created.
> I think that updating 100 times is a possible use case.
>
> What do you think about it?
> Can I implement the above idea?
>

The approach I proposed on the review is valid; the flaw is in the
specific implementation: you are doing 100 requests where 4 would
suffice.

You don't need to do a request for each target-portal tuple, you only
need to do 1 request per portal, which reduces the number of calls to
iscsiadm from 100 to 4 in the case you mention.

You can check all targets for an IP with:
  iscsiadm -m node -p IP

This means that the performance hit from having 100 or 200 targets
should be negligible.
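
Roughly, the per-portal query could look like this (a sketch -- the
output parsing is illustrative):

    import subprocess

    def targets_by_portal(portals):
        # One iscsiadm call per portal (e.g. 4 calls) instead of one
        # call per target/portal tuple (e.g. 100).
        state = {}
        for portal in portals:
            out = subprocess.check_output(
                ['iscsiadm', '-m', 'node', '-p', portal]).decode()
            # Each line looks roughly like "IP:PORT,TPGT IQN".
            state[portal] = [line.split()[-1]
                             for line in out.splitlines() if line.strip()]
        return state

    # After discovery, node.startup then only needs re-updating for the
    # targets captured above, e.g.:
    #   iscsiadm -m node -T <IQN> -p <portal> --op update \
    #            -n node.startup -v automatic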

Cheers,
Gorka.



> [1]This is correct behavior of iscsiadm.
>https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315
> [2]https://bugs.launchpad.net/os-brick/+bug/1670237
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> Rikimaru Honjo
> E-mail:honjo.rikim...@po.ntt-tx.co.jp
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova]May I run iscsiadm --op show & update 100 times?

2017-10-01 Thread Rikimaru Honjo

Hello,

I'd like to discuss the following bug in os-brick.

* os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to 
"manual".
  https://bugs.launchpad.net/os-brick/+bug/1670237

The important point of this bug is:

When os-brick initializes iscsi connections:
1. os-brick will run "iscsiadm -m discovery" command if we use iscsi multipath.
2. os-brick will update node.startup values to "automatic" if we use iscsi.
3. "iscsiadm -m discovery" command will recreate iscsi node repositories.[1]
   As a result, node.startup values of already attached volumes will revert
   to the default (=manual).

Gorka Eguileor and I discussed how to fix this bug[2].
Our idea is this:

1. Confirm node.startup values of all the iscsi targets before running 
discovery.
2. Re-update node.startup values of all the iscsi targets after running 
discovery.

But I'm afraid that this operation will take a long time.
As a test, I ran the show & update of node.startup values 100 times.
As a result, it took about 4 seconds.
When I ran it 200 times, it took about 8 seconds.
I think this is a little long.

If we use multipath and attach 25 volumes, 100 targets will be created.
I think that updating 100 times is a possible use case.

What do you think about it?
Can I implement the above idea?

[1]This is correct behavior of iscsiadm.
   https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315
[2]https://bugs.launchpad.net/os-brick/+bug/1670237
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Tony Breeds
On Mon, Jul 31, 2017 at 09:09:52PM -0500, Matt Riedemann wrote:
> On 7/31/2017 5:21 PM, Tony Breeds wrote:
> > We need a +1 from the release team (are they okay to accept a late
> > release of glance_store); and a +1 from glance (are they okay to do said
> > release)
> 
> Glance doesn't actually need this minimum version bump for os-brick, the fix
> is for some attached volume extend stuff, which isn't related to Glance, so
> does having the minimum bump in glance* matter?

Maybe it doesn't. I can't think of a scenario where someone will end up
with 1.15.1 if they mix glance and nova.

If glance_store doesn't take the os-brick bump before we release
pike, then they're going to be faced with breaking the "we don't bump
minimums on stable branches" rule or opting out of requirements management
for that branch, aren't they?

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Walter Boring
Do it +1

On Mon, Jul 31, 2017 at 7:37 AM, Sean McGinnis 
wrote:

> I am requesting a library release of os-brick during the feature freeze
> in order to fix an issue with the recently landed online volume extend
> feature across Nova and Cinder.
>
> Patches have landed in both projects to add this feature. It wasn't until
> later that Matt was able to get tempest tests in that found an issue with
> some of the logic in the os-brick library. That has now been fixed in the
> stable/pike branch in os-brick with this patch:
>
> https://review.openstack.org/#/c/489227/
>
> We can get a new library release out as soon as the freeze is over, but
> due to the fact that we do not raise global requirements for stable
> branches after release, there could be some deployments that would still
> use the old ("broken") lib. We would need to get this release out before
> the final pike branching of Cinder and Nova to be able to raise G-R to
> make sure the new release is used with this fix.
>
> I see this change as a low risk for other regression, and it would allow
> us to not ship a broken feature.
>
> Thanks,
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Matthew Thode
On 17-07-31 21:09:52, Matt Riedemann wrote:
> On 7/31/2017 5:21 PM, Tony Breeds wrote:
> > We need a +1 from the release team (are they okay to accept a late
> > release of glance_store); and a +1 from glance (are they okay to do said
> > release)
> 
> Glance doesn't actually need this minimum version bump for os-brick, the 
> fix is for some attached volume extend stuff, which isn't related to 
> Glance, so does having the minimum bump in glance* matter?
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

For co-installability between projects it'd be good to be in sync.  The
same could be said of many of the bumps that go through the requirements
project.  One of the things we've been working on is divergent
requirements, where the goal is to keep making sure all projects test
against one set of upper-constraints, but allow each project to manage
their requirements outside of that.

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Matt Riedemann

On 7/31/2017 5:21 PM, Tony Breeds wrote:

We need a +1 from the release team (are they okay to accept a late
release of glance_store); and a +1 from glance (are they okay to do said
release)


Glance doesn't actually need this minimum version bump for os-brick, the 
fix is for some attached volume extend stuff, which isn't related to 
Glance, so does having the minimum bump in glance* matter?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Tony Breeds
On Mon, Jul 31, 2017 at 03:00:16PM -0500, Sean McGinnis wrote:
> > I am requesting a library release of os-brick during the feature freeze
> > in order to fix an issue with the recently landed online volume extend
> > feature across Nova and Cinder.
> > 
> 
> New os-brick 1.15.2 release has been requested here:
> 
> https://review.openstack.org/489370

From a requirements POV I'm fine with that.  It affects:

Package  : os-brick [os-brick>=1.15.1] (used by 8 projects)
Also affects : 8 projects
openstack/cinder  [tc:approved-release]
openstack/compute-hyperv  []
openstack/freezer []
openstack/fuxi[]
openstack/glance_store[]
openstack/nova[tc:approved-release]
openstack/nova-lxd[]
openstack/python-brick-cinderclient-ext   []

The one that is *most* problematic is glance_store.  We already have an
FFE for glance_store, so it's probably ok.

We need a +1 from the release team (are they okay to accept a late
release of glance_store); and a +1 from glance (are they okay to do said
release)

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Sean McGinnis
> I am requesting a library release of os-brick during the feature freeze
> in order to fix an issue with the recently landed online volume extend
> feature across Nova and Cinder.
> 

New os-brick 1.15.2 release has been requested here:

https://review.openstack.org/489370

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Ivan Kolodyazhny
Sounds reasonable for me too.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Jul 31, 2017 at 5:40 PM, Davanum Srinivas  wrote:

> I'd support this Sean. +1
>
> Thanks,
> Dims
>
> On Mon, Jul 31, 2017 at 10:37 AM, Sean McGinnis 
> wrote:
> > I am requesting a library release of os-brick during the feature freeze
> > in order to fix an issue with the recently landed online volume extend
> > feature across Nova and Cinder.
> >
> > Patches have landed in both projects to add this feature. It wasn't until
> > later that Matt was able to get tempest tests in that found an issue with
> > some of the logic in the os-brick library. That has now been fixed in the
> > stable/pike branch in os-brick with this patch:
> >
> > https://review.openstack.org/#/c/489227/
> >
> > We can get a new library release out as soon as the freeze is over, but
> > due to the fact that we do not raise global requirements for stable
> > branches after release, there could be some deployments that would still
> > use the old ("broken") lib. We would need to get this release out before
> > the final pike branching of Cinder and Nova to be able to raise G-R to
> > make sure the new release is used with this fix.
> >
> > I see this change as a low risk for other regression, and it would allow
> > us to not ship a broken feature.
> >
> > Thanks,
> > Sean
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Matthew Thode
Yep, please submit the review referencing this thread, lgtm.

On 17-07-31 10:40:10, Davanum Srinivas wrote:
> I'd support this Sean. +1
> 
> Thanks,
> Dims
> 
> On Mon, Jul 31, 2017 at 10:37 AM, Sean McGinnis  wrote:
> > I am requesting a library release of os-brick during the feature freeze
> > in order to fix an issue with the recently landed online volume extend
> > feature across Nova and Cinder.
> >
> > Patches have landed in both projects to add this feature. It wasn't until
> > later that Matt was able to get tempest tests in that found an issue with
> > some of the logic in the os-brick library. That has now been fixed in the
> > stable/pike branch in os-brick with this patch:
> >
> > https://review.openstack.org/#/c/489227/
> >
> > We can get a new library release out as soon as the freeze is over, but
> > due to the fact that we do not raise global requirements for stable
> > branches after release, there could be some deployments that would still
> > use the old ("broken") lib. We would need to get this release out before
> > the final pike branching of Cinder and Nova to be able to raise G-R to
> > make sure the new release is used with this fix.
> >
> > I see this change as a low risk for other regression, and it would allow
> > us to not ship a broken feature.
> >
> > Thanks,
> > Sean
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Davanum Srinivas
I'd support this Sean. +1

Thanks,
Dims

On Mon, Jul 31, 2017 at 10:37 AM, Sean McGinnis  wrote:
> I am requesting a library release of os-brick during the feature freeze
> in order to fix an issue with the recently landed online volume extend
> feature across Nova and Cinder.
>
> Patches have landed in both projects to add this feature. It wasn't until
> later that Matt was able to get tempest tests in that found an issue with
> some of the logic in the os-brick library. That has now been fixed in the
> stable/pike branch in os-brick with this patch:
>
> https://review.openstack.org/#/c/489227/
>
> We can get a new library release out as soon as the freeze is over, but
> due to the fact that we do not raise global requirements for stable
> branches after release, there could be some deployments that would still
> use the old ("broken") lib. We would need to get this release out before
> the final pike branching of Cinder and Nova to be able to raise G-R to
> make sure the new release is used with this fix.
>
> I see this change as a low risk for other regression, and it would allow
> us to not ship a broken feature.
>
> Thanks,
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Sean McGinnis
I am requesting a library release of os-brick during the feature freeze
in order to fix an issue with the recently landed online volume extend
feature across Nova and Cinder.

Patches have landed in both projects to add this feature. It wasn't until
later that Matt was able to get tempest tests in that found an issue with
some of the logic in the os-brick library. That has now been fixed in the
stable/pike branch in os-brick with this patch:

https://review.openstack.org/#/c/489227/

We can get a new library release out as soon as the freeze is over, but
due to the fact that we do not raise global requirements for stable
branches after release, there could be some deployments that would still
use the old ("broken") lib. We would need to get this release out before
the final pike branching of Cinder and Nova to be able to raise G-R to
make sure the new release is used with this fix.

I see this change as a low risk for other regression, and it would allow
us to not ship a broken feature.

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-06-07 Thread Jiri Suchomel
V Wed, 31 May 2017 11:34:20 -0400
Eric Harney  napsáno:

> On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> > Hi,
> > it seems to me that the way of adding extra NFS options to the
> > cinder backend is somewhat confusing.
> > 
> > 1. There is  nfs_mount_options in cinder config file [1]
> > 
> > 2. Then I can put my options to the nfs_shares_config file - that
> > it could contain additional options mentiones [2] or the
> > commit message that adds the feature [3]
> > 
> > Now, when I put my options to both of these places, cinder-volume
> > actually uses them twice and executes the command like this
> > 
> > mount -t nfs -o nfsvers=3 -o nfsvers=3
> > 192.168.241.10:/srv/nfs/vi7/cinder 
> > /var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce
> > 
> > BTW, the options coming from nfs_shares_config are called 'flags' by
> > cinder/volume/drivers/nfs ([4]).
> > 
> > Now, to make it more fun, when I actually want to attach a volume to a
> > running instance, nova uses a different way of determining which NFS
> > options to use:
> > 
> > - It reads them from the _nova_ config option
> > libvirt.nfs_mount_options [5]
> > - or it uses those it gets from cinder when creating the cinder
> > connection [6]. But these are only the options defined in the
> > nfs_shares_config file, NOT the nfs_mount_options specified in the
> > cinder config file.
> > 
> > 
> > So. If I put my options in both places, the nfs_shares_config file and
> > nfs_mount_options, it actually works how I want it to work, as
> > the current mount does not complain that the option was provided twice.
> > 
> > But it looks ugly. And I'm wondering - am I doing it wrong, or
> > is there a problem with either cinder or nova (or both)?
> >   
> 
> This has gotten a bit more confusing than is necessary in Cinder due
> to how the configuration for the NFS and related drivers has been
> tweaked over time.
> 
> The method of putting a list of shares in the nfs_shares_config file
> is effectively deprecated, but still works for now.
> 
> The preferred method now is to set the following options:
>nas_host:  server address
>nas_share_path:  export path
>nas_mount_options:  options for mounting the export
> 
> So whereas before the nfs_shares_config file would have:
>127.0.0.1:/srv/nfs1 -o nfsvers=3
> 
> This would now translate to:
>nas_host=127.0.0.1
>nas_share_path=/srv/nfs1
>nas_mount_options = -o nfsvers=3
> 
> I believe if you try configuring the driver this way, you will get the
> desired result.

Nope, this does not work.
If I provide nas_mount_options, the mount command tries to use them twice.

os_brick.remotefs.remotefs, _do_mount gets them in the form of "flags"
as well as "options" [1]:

- mount_flags are coming from the shares structure before
  cinder.volume.nfs calls _remotefsclient.mount [2]. They were written 
  there by cinder/volume/drivers/remotefs when the suggested nas_*
  configuration options were found[3]

- mount_options are created in the constructor of RemoteFsClient from
nfs_mount_options [4] which are passed from cinder.volume.drivers.nfs
[5][6] 

So this means that the mount command gets the options twice. 
But not only that! os-brick/remotefs.py's _do_mount adds an extra -o
option to the list of options (apparently because nas_mount_options
and nfs_mount_options have different syntax!), which makes the mount
command fail for sure.


So, to summarize:

- nfs_mount_options in the cinder conf is ignored when the cinder connection
  is passed to nova
- nas_mount_options seems to be broken with the NFS backend (as described
  above)
- options provided in the nfs_shares_config file are passed to the mount
  command twice, but this does not necessarily hurt, so this solution
  actually works (and has the additional bonus of allowing mount options
  to be defined per mount point)

Jiri

- 

[1]
https://github.com/openstack/os-brick/blob/stable/newton/os_brick/remotefs/remotefs.py#L114
[2]
https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
[3]
https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/remotefs.py#L463
[4] 
https://github.com/openstack/os-brick/blob/stable/newton/os_brick/remotefs/remotefs.py#L60
[5]
https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L99
[6]
https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L103



-- 
Jiri Suchomel

SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
18600 Praha 8

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-06-01 Thread Gorka Eguileor
On 31/05, Matt Riedemann wrote:
> On 5/31/2017 6:58 AM, Gorka Eguileor wrote:
> > Hi,
> >
> > As some of you may know I've been working on improving iSCSI connections
> > on OpenStack to make them more robust and prevent them from leaving
> > leftovers on attach/detach operations.
> >
> > There are a couple of posts [1][2] going in more detail, but a good
> > summary would be that to fix this issue we require a considerable rework
> > in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.
> >
> > Relevant changes for those projects are:
> >
> > - Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
> >case, so a new feature was added to disable automatic scans that added
> >unintended devices to the systems.  Done and merged [3][4], it will be
> >available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7
> >
> > - OS-Brick: rework iSCSI to make it robust on unreliable networks, to
> >add a `force` detach option that prioritizes leaving a clean system
> >over possible data loss, and to support the new Open iSCSI feature.
> >Done and pending review [5][6][7]
> >
> > - Cinder: Handle some attach/detach errors a little better and add
> >support to the force detach option for some operations where data loss
> >on error is acceptable, ie: create volume from image, restore backup,
> >etc. Done and pending review [8][9]
> >
> > - Nova: I haven't looked into the code here, but I'm sure there will be
> >cases where using the force detach operation will be useful.
> >
> > - Tests: While we do have tempest tests that verify that attach/detach
> >operations work both in Nova and in cinder volume creation operations,
> >they are not meant to test the robustness of the system, so new tests
> >will be required to validate the code.  Done [10]
> >
> > Proposed tests are simplified versions of the ones I used to validate
> > the code; but hey, at least these are somewhat readable ;-)
> > Unfortunately they are not in line with the tempest mission since they
> > are not meant to be run in a production environment due to their
> > disruptive nature while injecting errors.  They need to be run
> > sequentially and without any other operations running on the deployment.
> > They also run sudo commands via local bash or SSH for the verification
> > and error generation bits.
> >
> > We are testing create volume from image and attaching a volume to an
> > instance under the following networking error scenarios:
> >
> >   - No errors
> >   - All paths have 10% incoming packets dropped
> >   - All paths have 20% incoming packets dropped
> >   - All paths have 100% incoming packets dropped
> >   - Half the paths have 20% incoming packets dropped
> >   - The other half of the paths have 20% incoming packets dropped
> >   - Half the paths have 100% incoming packets dropped
> >   - The other half of the paths have 100% incoming packets dropped
> >
> > There are single execution versions as well as 10 consecutive operations
> > variants.
> >
> > Since these are big changes I'm sure we would all feel a lot more
> > confident to merge them if storage vendors would run the new tests to
> > confirm that there are no issues with their backends.
> >
> > Unfortunately to fully test the solution you may need to build the
> > latest Open-iSCSI package and install it in the system, then you can
> > just use an all-in-one DevStack with a couple of changes in the local.conf:
> >
> > enable_service tempest
> >
> > CINDER_REPO=https://review.openstack.org/p/openstack/cinder
> > CINDER_BRANCH=refs/changes/45/469445/1
> >
> > LIBS_FROM_GIT=os-brick
> >
> > OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
> > OS_BRICK_BRANCH=refs/changes/94/455394/11
> >
> > [[post-config|$CINDER_CONF]]
> > [multipath-backend]
> > use_multipath_for_image_xfer=true
> >
> > [[post-config|$NOVA_CONF]]
> > [libvirt]
> > volume_use_multipath = True
> >
> > [[post-config|$KEYSTONE_CONF]]
> > [token]
> > expiration = 14400
> >
> > [[test-config|$TEMPEST_CONFIG]]
> > [volume-feature-enabled]
> > multipath = True
> > [volume]
> > build_interval = 10
> > multipath_type = $MULTIPATH_VOLUME_TYPE
> > backend_protocol_tcp_port = 3260
> > multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2
> >
> > Multinode configurations are also supported using SSH with user/password or
> > private key to introduce the errors or check that the systems didn't leave any
> > leftovers; the tests can also run a cleanup command between tests, etc., but
> > that's beyond the scope of this email.
> >
> > Then you can run them all from /opt/stack/tempest with:
> >
> >   $ cd /opt/stack/tempest
> >   $ OS_TEST_TIMEOUT=7200 ostestr -r 
> > cinder.tests.tempest.scenario.test_multipath.*
> >
> > But I would recommend first running the simplest one without errors and
> > manually checking that the multipath is being created.

Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jay S Bryant



On 5/31/2017 12:02 PM, Jiri Suchomel wrote:

On Wed, 31 May 2017 11:34:20 -0400, Eric Harney wrote:


On 05/25/2017 05:51 AM, Jiri Suchomel wrote:

Hi,
it seems to me that the way of adding extra NFS options to the
cinder backend is somewhat confusing.

...

This has gotten a bit more confusing than is necessary in Cinder due
to how the configuration for the NFS and related drivers has been
tweaked over time.

The method of putting a list of shares in the nfs_shares_config file
is effectively deprecated, but still works for now.
...

Thanks for the answer!
I should definitely try with those specific host/path/options.

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report:

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri


Jiri,

Thanks.  It does seem, at a minimum, that the documentation needs to be 
updated and that there may be some other bugs here.  We will work this 
through the bug.


Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Jiri Suchomel
On Wed, 31 May 2017 11:34:20 -0400, Eric Harney wrote:

> On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> > Hi,
> > it seems to me that the way of adding extra NFS options to the
> > cinder backend is somewhat confusing.
> > 
> > ...

> This has gotten a bit more confusing than is necessary in Cinder due
> to how the configuration for the NFS and related drivers has been
> tweaked over time.
> 
> The method of putting a list of shares in the nfs_shares_config file
> is effectively deprecated, but still works for now.
> ...

Thanks for the answer!
I should definitely try with those specific host/path/options.

However, it seems that the (supposedly) deprecated way does not
really work how it should, so I filed a bug report: 

https://bugs.launchpad.net/cinder/+bug/1694758

Jiri

-- 
Jiri Suchomel

SUSE LINUX, s.r.o.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-31 Thread Eric Harney
On 05/25/2017 05:51 AM, Jiri Suchomel wrote:
> Hi,
> it seems to me that the way of adding extra NFS options to the cinder
> backend is somewhat confusing.
> 
> 1. There is  nfs_mount_options in cinder config file [1]
> 
> 2. Then I can put my options in the nfs_shares_config file - that it
> could contain additional options is mentioned in [2] and in the
> commit message that adds the feature [3]
> 
> Now, when I put my options in both of these places, cinder-volume
> actually uses them twice and executes the command like this
> 
> mount -t nfs -o nfsvers=3 -o nfsvers=3
> 192.168.241.10:/srv/nfs/vi7/cinder 
> /var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce
> 
> BTW, the options coming from nfs_shares_config are called 'flags' by
> cinder/volume/drivers/nfs ([4]).
> 
> Now, to make it more fun, when I actually want to attach a volume to a
> running instance, nova uses a different way of determining which NFS
> options to use:
> 
> - It reads them from the _nova_ config option libvirt.nfs_mount_options
> [5]
> - or it uses those it gets from cinder when creating the cinder
> connection [6]. But these are only the options defined in the
> nfs_shares_config file, NOT the nfs_mount_options specified in the cinder
> config file.
> 
> 
> So. If I put my options in both places, the nfs_shares_config file and
> nfs_mount_options, it actually works how I want it to work, as the
> current mount command does not complain that the option was provided twice.
> 
> But it looks ugly. And I'm wondering - am I doing it wrong, or
> is there a problem with either cinder or nova (or both)?
> 

This has gotten a bit more confusing than is necessary in Cinder due to
how the configuration for the NFS and related drivers has been tweaked
over time.

The method of putting a list of shares in the nfs_shares_config file is
effectively deprecated, but still works for now.

The preferred method now is to set the following options:
   nas_host:  server address
   nas_share_path:  export path
   nas_mount_options:  options for mounting the export

So whereas before the nfs_shares_config file would have:
   127.0.0.1:/srv/nfs1 -o nfsvers=3

This would now translate to:
   nas_host=127.0.0.1
   nas_share_path=/srv/nfs1
   nas_mount_options = -o nfsvers=3

I believe if you try configuring the driver this way, you will get the
desired result.
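
For illustration, a complete backend section built from these options
might look like this in cinder.conf (the backend/section name is just an
example):

   [nfs-1]
   volume_driver = cinder.volume.drivers.nfs.NfsDriver
   volume_backend_name = nfs-1
   nas_host = 127.0.0.1
   nas_share_path = /srv/nfs1
   nas_mount_options = -o nfsvers=3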

The goal was to remove the nfs_shares_config config method, but this
hasn't happened yet -- I/we need to revisit this area and see about
doing this.

Eric

> 
> Jiri
> 
> 
> [1] https://docs.openstack.org/admin-guide/blockstorage-nfs-backend.html
> [2]
> https://docs.openstack.org/newton/config-reference/block-storage/drivers/nfs-volume-driver.html
> [3]
> https://github.com/openstack/cinder/commit/553e0d92c40c73aa1680743c4287f31770131c97
> [4]
> https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
> [5]
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L87
> [6] 
> https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L89
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Matt Riedemann

On 5/31/2017 6:58 AM, Gorka Eguileor wrote:

Hi,

As some of you may know I've been working on improving iSCSI connections
on OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going into more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
   case, so a new feature was added to disable automatic scans that added
   unintended devices to the systems.  Done and merged [3][4], it will be
   available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
   add a `force` detach option that prioritizes leaving a clean system
   over possible data loss, and to support the new Open iSCSI feature.
   Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
   support to the force detach option for some operations where data loss
   on error is acceptable, ie: create volume from image, restore backup,
   etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
   cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
   operations work both in Nova and in cinder volume creation operations,
   they are not meant to test the robustness of the system, so new tests
   will be required to validate the code.  Done [10]

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

  - No errors
  - All paths have 10% incoming packets dropped
  - All paths have 20% incoming packets dropped
  - All paths have 100% incoming packets dropped
  - Half the paths have 20% incoming packets dropped
  - The other half of the paths have 20% incoming packets dropped
  - Half the paths have 100% incoming packets dropped
  - The other half of the paths have 100% incoming packets dropped

There are single execution versions as well as 10 consecutive operations
variants.

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

enable_service tempest

CINDER_REPO=https://review.openstack.org/p/openstack/cinder
CINDER_BRANCH=refs/changes/45/469445/1

LIBS_FROM_GIT=os-brick

OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
OS_BRICK_BRANCH=refs/changes/94/455394/11

[[post-config|$CINDER_CONF]]
[multipath-backend]
use_multipath_for_image_xfer=true

[[post-config|$NOVA_CONF]]
[libvirt]
volume_use_multipath = True

[[post-config|$KEYSTONE_CONF]]
[token]
expiration = 14400

[[test-config|$TEMPEST_CONFIG]]
[volume-feature-enabled]
multipath = True
[volume]
build_interval = 10
multipath_type = $MULTIPATH_VOLUME_TYPE
backend_protocol_tcp_port = 3260
multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported using SSH with user/password or
private key to introduce the errors or check that the systems didn't leave any
leftovers, the tests can also run a cleanup command between tests, etc., but
that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

  $ cd /opt/stack/tempest
  $ OS_TEST_TIMEOUT=7200 ostestr -r 
cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one without errors and
manually checking that the multipath is being created.

  $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then doing the same with one with errors and verifying the presence of the
filters in iptables and that the packet drop for those filters is non-zero:

  $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
  $ sudo iptables -nvL INPUT

Then doing the same with a Nova test just to verify that it is correctly
configured to use multipathing:

[openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Gorka Eguileor
Hi,

As some of you may know I've been working on improving iSCSI connections
on OpenStack to make them more robust and prevent them from leaving
leftovers on attach/detach operations.

There are a couple of posts [1][2] going into more detail, but a good
summary would be that to fix this issue we require a considerable rework
in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.

Relevant changes for those projects are:

- Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
  case, so a new feature was added to disable automatic scans that added
  unintended devices to the systems.  Done and merged [3][4], it will be
  available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7 (see the
  iscsid.conf sketch after this list)

- OS-Brick: rework iSCSI to make it robust on unreliable networks, to
  add a `force` detach option that prioritizes leaving a clean system
  over possible data loss, and to support the new Open iSCSI feature.
  Done and pending review [5][6][7]

- Cinder: Handle some attach/detach errors a little better and add
  support to the force detach option for some operations where data loss
  on error is acceptable, ie: create volume from image, restore backup,
  etc. Done and pending review [8][9]

- Nova: I haven't looked into the code here, but I'm sure there will be
  cases where using the force detach operation will be useful.

- Tests: While we do have tempest tests that verify that attach/detach
  operations work both in Nova and in cinder volume creation operations,
  they are not meant to test the robustness of the system, so new tests
  will be required to validate the code.  Done [10]
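
For reference, the new Open iSCSI behavior from the first item above is
controlled through iscsid.conf; a minimal sketch (assuming the option
name that shipped with the feature):

  # /etc/iscsi/iscsid.conf
  # Only scan for LUNs when explicitly requested, so logins and AENs
  # don't add unintended devices to the system.
  node.session.scan = manual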

Proposed tests are simplified versions of the ones I used to validate
the code; but hey, at least these are somewhat readable ;-)
Unfortunately they are not in line with the tempest mission since they
are not meant to be run in a production environment due to their
disruptive nature while injecting errors.  They need to be run
sequentially and without any other operations running on the deployment.
They also run sudo commands via local bash or SSH for the verification
and error generation bits.

We are testing create volume from image and attaching a volume to an
instance under the following networking error scenarios:

 - No errors
 - All paths have 10% incoming packets dropped
 - All paths have 20% incoming packets dropped
 - All paths have 100% incoming packets dropped
 - Half the paths have 20% incoming packets dropped
 - The other half of the paths have 20% incoming packets dropped
 - Half the paths have 100% incoming packets dropped
 - The other half of the paths have 100% incoming packets dropped

There are single execution versions as well as 10 consecutive operations
variants.
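
As an illustration, this kind of packet-drop injection can be done with
the iptables statistic module; a sketch (the tests' actual commands may
differ):

  # Drop 10% of incoming packets coming from one storage path
  $ sudo iptables -I INPUT -s $STORAGE_BACKEND_IP1 \
        -m statistic --mode random --probability 0.10 -j DROP

  # Drop all incoming packets coming from the other path
  $ sudo iptables -I INPUT -s $STORAGE_BACKEND_IP2 -j DROP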

Since these are big changes I'm sure we would all feel a lot more
confident to merge them if storage vendors would run the new tests to
confirm that there are no issues with their backends.

Unfortunately to fully test the solution you may need to build the
latest Open-iSCSI package and install it in the system, then you can
just use an all-in-one DevStack with a couple of changes in the local.conf:

   enable_service tempest

   CINDER_REPO=https://review.openstack.org/p/openstack/cinder
   CINDER_BRANCH=refs/changes/45/469445/1

   LIBS_FROM_GIT=os-brick

   OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
   OS_BRICK_BRANCH=refs/changes/94/455394/11

   [[post-config|$CINDER_CONF]]
   [multipath-backend]
   use_multipath_for_image_xfer=true

   [[post-config|$NOVA_CONF]]
   [libvirt]
   volume_use_multipath = True

   [[post-config|$KEYSTONE_CONF]]
   [token]
   expiration = 14400

   [[test-config|$TEMPEST_CONFIG]]
   [volume-feature-enabled]
   multipath = True
   [volume]
   build_interval = 10
   multipath_type = $MULTIPATH_VOLUME_TYPE
   backend_protocol_tcp_port = 3260
   multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2

Multinode configurations are also supported using SSH with user/password or
private key to introduce the errors or check that the systems didn't leave any
leftovers, the tests can also run a cleanup command between tests, etc., but
that's beyond the scope of this email.

Then you can run them all from /opt/stack/tempest with:

 $ cd /opt/stack/tempest
 $ OS_TEST_TIMEOUT=7200 ostestr -r 
cinder.tests.tempest.scenario.test_multipath.*

But I would recommend first running the simplest one without errors and
manually checking that the multipath is being created.

 $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_1

Then doing the same with one with errors and verifying the presence of the
filters in iptables and that the packet drop for those filters is non-zero:

 $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_create_volume_with_errors_2
 $ sudo iptables -nvL INPUT

Then doing the same with a Nova test just to verify that it is correctly
configured to use multipathing:

 $ ostestr -n 
cinde

[openstack-dev] [cinder] [nova] How to provide additional options to NFS backend?

2017-05-25 Thread Jiri Suchomel
Hi,
it seems to me that the way of adding extra NFS options to the cinder
backend is somewhat confusing.

1. There is  nfs_mount_options in cinder config file [1]

2. Then I can put my options in the nfs_shares_config file - that it
could contain additional options is mentioned in [2] and in the
commit message that adds the feature [3]

Now, when I put my options in both of these places, cinder-volume
actually uses them twice and executes the command like this

mount -t nfs -o nfsvers=3 -o nfsvers=3
192.168.241.10:/srv/nfs/vi7/cinder 
/var/lib/cinder/mnt/f5689da9ea41a66eff2ce0ef89b37bce

BTW, the options coming from nfs_shares_config are called 'flags' by
cinder/volume/drivers/nfs ([4]).

Now, to make it more fun, when I actually want to attach a volume to a
running instance, nova uses a different way of determining which NFS options to use:

- It reads them from the _nova_ config option libvirt.nfs_mount_options
[5]
- or it uses those it gets from cinder when creating the cinder
connection [6]. But these are only the options defined in the
nfs_shares_config file, NOT the nfs_mount_options specified in the cinder
config file.


So. If I put my options in both places, the nfs_shares_config file and
nfs_mount_options, it actually works how I want it to work, as the
current mount command does not complain that the option was provided twice.

But it looks ugly. And I'm wondering - am I doing it wrong, or
is there a problem with either cinder or nova (or both)?


Jiri


[1] https://docs.openstack.org/admin-guide/blockstorage-nfs-backend.html
[2]
https://docs.openstack.org/newton/config-reference/block-storage/drivers/nfs-volume-driver.html
[3]
https://github.com/openstack/cinder/commit/553e0d92c40c73aa1680743c4287f31770131c97
[4]
https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163
[5]
https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L87
[6] 
https://github.com/openstack/nova/blob/stable/newton/nova/virt/libvirt/volume/nfs.py#L89

-- 
Jiri Suchomel

SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
18600 Praha 8

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]nova client code

2017-04-27 Thread Gyorgy Szombathelyi
Hi!

As we're trying to use the InstanceLocalityFilter in cinder, I encountered some 
strange issues.
I've opened a bug report already:

https://bugs.launchpad.net/cinder/+bug/1686616

But looking further at the novaclient code in Cinder, cinder/nova.py smells a
bit more. It seems the latest modifications forgot about the case where the
user context is used. The problems (some are mentioned in the bug report):
- It takes great effort to get the Nova URL from the service catalog. Then it
passes this URL to the constructor of the Keystone Password plugin (which
needs the keystone endpoint). This can be mitigated by setting
nova_endpoint_template to the Keystone endpoint (uhh). The plain nova
endpoint is not required anywhere.
- It tries to create a Password plugin even when the user context is
requested, but it doesn't have a password. Creating the Password plugin like
this: password=context.auth_token is very broken.

What I suggest is to:
- introduce a [nova] section, and use the keystone auth and session loader if a
privileged user is requested, like in other components (see the sketch after
this list).
- use the Token plugin when the user context is used for the authentication.
- get rid of the service catalog reading code; [nova] should contain auth_url
in all cases.
- (adopt the lightweight novaclient glue code from neutron, or any other
component).
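
A rough sketch of what this could look like, assuming keystoneauth-style
option loading (all names and values here are illustrative, not the
current Cinder code):

   [nova]
   auth_url = http://keystone.example.com:5000/v3
   auth_type = password
   username = cinder
   password = secret
   user_domain_name = Default
   project_name = service
   project_domain_name = Default

and, for the user-context path, roughly this with keystoneauth1:

   from keystoneauth1.identity import v3
   from keystoneauth1 import session

   # Re-use the caller's token instead of pretending it is a password.
   auth = v3.Token(auth_url=auth_url,
                   token=context.auth_token,
                   project_id=context.project_id)
   sess = session.Session(auth=auth)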

What do you think?

Br,
György


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova] Scheduling issue for the Summit

2017-04-21 Thread Sean McGinnis
On Thu, Apr 20, 2017 at 06:55:30PM -0500, Jay S Bryant wrote:
> Sean,
> 
> In the case that all the conflicts cannot be resolved I would be happy to
> cover the Onboarding session if you can keep me in the loop/take items for
> the Cinder Ephemeral session.
> 
> Let me know,
> 
> Jay
> 

Thanks Jay, that could work out well if it's too difficult to move things
around. Thanks!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova] Scheduling issue for the Summit

2017-04-20 Thread Jay S Bryant

Sean,

In the case that all the conflicts cannot be resolved I would be happy 
to cover the Onboarding session if you can keep me in the loop/take 
items for the Cinder Ephemeral session.


Let me know,

Jay



On 4/20/2017 9:55 AM, Sean McGinnis wrote:

Unfortunately I am way late at noticing this, but bringing it up in case
there's anything that can still be done about it.

Tuesday the 9th, from 11:15am-11:55am, is going to be a challenge for me.
The Track Chairs Recap, Using Cinder for Nova Ephemeral Storage, and the
Cinder - Project Onboarding sessions are all at this slot.

While onboarding is something I feel is really important, out of these as
it is now I think I would have to go with the Cinder Nova discussion. But
that really would be a shame to have to miss that. I also would really
like to be part of the Track Chair discussion, but if I have to rank
these, that one will have to be last. I'm guessing there's really no
good time for that session that is not going to cause a conflict for
somebody.

So my question for the scheduling powers that be - is there any chance
we can move either the Cinder Onboarding or the Cinder-Nova sessions?

Thanks,
Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova] Scheduling issue for the Summit

2017-04-20 Thread Sean McGinnis
Will do, thanks!

On Thu, Apr 20, 2017 at 09:59:39AM -0500, Jimmy McArthur wrote:
> Sean,
> 
> Can you send this request to speakersupp...@openstack.org and we'll see if
> we can move it around.
> 
> Thanks,
> Jimmy
> 
> >Sean McGinnis 
> >April 20, 2017 at 9:55 AM
> >Unfortunately I am way late at noticing this, but bringing it up in case
> >there's anything that can still be done about it.
> >
> >Tuesday the 9th, from 11:15am-11:55am, is going to be a challenge for me.
> >The Track Chairs Recap, Using Cinder for Nova Ephemeral Storage, and the
> >Cinder - Project Onboarding sessions are all at this slot.
> >
> >While onboarding is something I feel is really important, out of these as
> >it is now I think I would have to go with the Cinder Nova discussion. But
> >that really would be a shame to have to miss that. I also would really
> >like to be part of the Track Chair discussion, but if I have to rank
> >these, that one will have to be last. I'm guessing there's really no
> >good time for that session that is not going to cause a conflict for
> >somebody.
> >
> >So my question for the scheduling powers that be - is there any chance
> >we can move either the Cinder Onboarding or the Cinder-Nova sessions?
> >
> >Thanks,
> >Sean (smcginnis)
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova] Scheduling issue for the Summit

2017-04-20 Thread Jimmy McArthur

Sean,

Can you send this request to speakersupp...@openstack.org and we'll
see if we can move it around.


Thanks,
Jimmy


Sean McGinnis 
April 20, 2017 at 9:55 AM
Unfortunately I am way late at noticing this, but bringing it up in case
there's anything that can still be done about it.

Tuesday the 9th, from 11:15am-11:55am, is going to be a challenge for me.
The Track Chairs Recap, Using Cinder for Nova Ephemeral Storage, and the
Cinder - Project Onboarding sessions are all at this slot.

While onboarding is something I feel is really important, out of these as
it is now I think I would have to go with the Cinder Nova discussion. But
that really would be a shame to have to miss that. I also would really
like to be part of the Track Chair discussion, but if I have to rank
these, that one will have to be last. I'm guessing there's really no
good time for that session that is not going to cause a conflict for
somebody.

So my question for the scheduling powers that be - is there any chance
we can move either the Cinder Onboarding or the Cinder-Nova sessions?

Thanks,
Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Nova] Scheduling issue for the Summit

2017-04-20 Thread Sean McGinnis
Unfortunately I am way late at noticing this, but bringing it up in case
there's anything that can still be done about it.

Tuesday the 9th, from 11:15am-11:55am, is going to be a challenge for me.
The Track Chairs Recap, Using Cinder for Nova Ephemeral Storage, and the
Cinder - Project Onboarding sessions are all at this slot.

While onboarding is something I feel is really important, out of these as
it is now I think I would have to go with the Cinder Nova discussion. But
that really would be a shame to have to miss that. I also would really
like to be part of the Track Chair discussion, but if I have to rank
these, that one will have to be last. I'm guessing there's really no
good time for that session that is not going to cause a conflict for
somebody.

So my question for the scheduling powers that be - is there any chance
we can move either the Cinder Onboarding or the Cinder-Nova sessions?

Thanks,
Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Cinder-Nova API meeting time slot change

2017-04-06 Thread Ildiko Vancsa
Hi All,

As of __today__ the Cinder-Nova API interactions meeting has a new time slot, 
__1600 UTC__.

The meeting channel is the same: __#openstack-meeting-cp__.

The patch [1] to change the slot officially is still under review with no 
conflicts.

See you soon!

Thanks and Best Regards,
Ildikó

[1] https://review.openstack.org/#/c/453199/ 
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread TommyLike Hu
Hey Mathieu,

Thanks for bringing us the solution. Jay and I are willing to help if you
are looking for someone to coordinate on the Cinder side.

TommyLike.Hu

On 2017-04-04 08:50, Mathieu Gagné wrote:

Hi,

On Mon, Apr 3, 2017 at 8:40 PM, Jay S Bryant  wrote:
>
> Thank you for sharing this.  Nice to see you have a solution that looks
> agreeable to Matt.  Do you think you can get a spec pushed up and propose
> your code?
>

I will go ahead, write the spec and contribute the implementation.

Thanks

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Jay S Bryant



On 4/3/2017 11:27 AM, Walter Boring wrote:

Actually, this is incorrect.

The sticking point of this all was doing the coordination and 
initiation of workflow from Nova.   Cinder already has the ability to 
call the driver to do the resize of the volume. Cinder just prevents 
this now, because there is work that has to be done on the attached 
side to make the new size actually show up.


What needs to happen is:
 A new Nova API needs to be created to initiate and coordinate this 
effort.   The API would call Cinder to extend the size, then get the 
connection information from Cinder for that volume, then call os-brick 
to extend the size, then update the domain xml to tell libvirt to 
extend the size. The end user inside the VM would have to issue the 
same SCSI bus rescan and refresh that happens inside of os-brick, to 
make the kernel and filesystem in the VM recognize the new size.


os-brick does all of the heavy lifting already on the host side of things.
The Connector API entry point:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/initiator_connector.py#L153

iSCSI example:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L370

os-brick's code works for single path and multipath attached volumes. 
  multipath has a bunch of added complexity with resize that should 
already be taken care of here:

https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L375


Walt

On Sat, Apr 1, 2017 at 10:17 AM, Jay Bryant wrote:


Matt,

I think discussion on this goes all the way back to Tokyo. There
was work on the Cinder side to send the notification to Nova which
I believe all the pieces were in place for. The missing part
(sticking point) was doing a rescan of the SCSI bus in the node
that had the extended volume attached.

Has doing that been solved since Tokyo?

Jay


On 4/1/2017 10:34 AM, Matt Riedemann wrote:

On 3/31/2017 8:55 PM, TommyLike Hu wrote:

There was a time when this feature had been both proposed in Cinder [1]
and Nova [2], but unfortunately no one (correct me if I am wrong) is
going to handle this feature during Pike. We do think extending an
online volume is a beneficial and mostly vendor-supported feature.
We really don't want this feature missed from OpenStack and would like
to continue on. So could anyone share your knowledge of how much work
is left and where I should start?

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/272524/
[2] https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The nova blueprint description does not contain much in the way of
details, but from what is there it sounds a lot like the existing
volume swap operation which is triggered from Cinder by a volume
migration or retype operation. How do those existing operations not
already solve this use case?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Walt,

Sorry for getting the info wrong.  Thank you for getting the right 
details out there!


Jay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Mathieu Gagné
Hi,

On Mon, Apr 3, 2017 at 8:40 PM, Jay S Bryant  wrote:
>
> Thank you for sharing this.  Nice to see you have a solution that looks
> agreeable to Matt.  Do you think you can get a spec pushed up and propose
> your code?
>

I will go ahead, write the spec and contribute the implementation.

Thanks

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Jay S Bryant

Mathieu,

Thank you for sharing this.  Nice to see you have a solution that looks 
agreeable to Matt.  Do you think you can get a spec pushed up and 
propose your code?


Jay



On 4/3/2017 2:21 PM, Mathieu Gagné wrote:

On Mon, Apr 3, 2017 at 12:27 PM, Walter Boring  wrote:

Actually, this is incorrect.

The sticking point of this all was doing the coordination and initiation of
workflow from Nova.   Cinder already has the ability to call the driver to
do the resize of the volume.  Cinder just prevents this now, because there
is work that has to be done on the attached side to make the new size
actually show up.

What needs to happen is:
  A new Nova API needs to be created to initiate and coordinate this effort.
The API would call Cinder to extend the size, then get the connection
information from Cinder for that volume, then call os-brick to extend the
size, then update the domain xml to tell libvirt to extend the size.   The
end user inside the VM would have to issue the same SCSI bus rescan and
refresh that happens inside of os-brick, to make the kernel and filesystem
in the VM recognize the new size.

os-brick does all of the heavy lifting already on the host side of things.
The Connector API entry point:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/initiator_connector.py#L153

iSCSI example:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L370

os-brick's code works for single path and multipath attached volumes.
multipath has a bunch of added complexity with resize that should already be
taken care of here:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L375


I would like to share our private implementation (based on Mitaka):
https://gist.github.com/mgagne/9402089c11f8c80f6d6cd49f3db76512

The implementation makes it so Cinder leverages the existing Nova
external-events endpoint to trigger the BDM update and iSCSI rescan on
the host.

As always, the guest needs to update the partition table/filesystem if
it wants to benefit from the new free space.

Let me know if this is an implementation you want me to contribute upstream.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Matt Riedemann

On 4/3/2017 5:30 PM, Matt Riedemann wrote:

On 4/3/2017 2:21 PM, Mathieu Gagné wrote:


I would like to share our private implementation (based on Mitaka):
https://gist.github.com/mgagne/9402089c11f8c80f6d6cd49f3db76512

The implementation makes it so Cinder leverages the existing Nova
external-events endpoint to trigger the BDM update and iSCSI rescan on
the host.


I like this a lot better than adding a new REST API to Nova to handle
orchestrating this from the start.



As always, the guest needs to update the partition table/filesystem if
it wants to benefit from the new free space.

Let me know if this is an implementation you want me to contribute
upstream.



I didn't read the patch in detail, but if you're interested in
contributing this upstream we could use a simple spec to start the
process. Note that nova spec freeze for Pike is April 13.



One thing we'd have to consider with this is whether nova is new enough to 
handle the external event. I think we'd bump the microversion in the 
compute API, and then on the cinder side it would ask the compute API for 
its available versions: if the available version is new enough to handle 
the event, you move forward, else you fail on the cinder API side since 
you know nova can't handle the external event.
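
For illustration, the cinder-side version check could look roughly like
this (a sketch only; the microversion, the event name, and the helper
names here are hypothetical, since none of this exists yet):

   import requests

   # Ask the compute API root document for its supported microversions.
   resp = requests.get(nova_endpoint + '/v2.1/').json()
   max_version = resp['version']['version']  # e.g. "2.53"

   # Hypothetical microversion that would add the new external event.
   major, minor = (int(x) for x in max_version.split('.'))
   if (major, minor) >= (2, 60):
       send_volume_extended_event(server_uuid, volume_id)
   else:
       raise Exception("nova is too old to handle the extend event")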


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Matt Riedemann

On 4/3/2017 2:21 PM, Mathieu Gagné wrote:


I would like to share our private implementation (based on Mitaka):
https://gist.github.com/mgagne/9402089c11f8c80f6d6cd49f3db76512

The implementation makes it so Cinder leverages the existing Nova
external-events endpoint to trigger the BDM update and iSCSI rescan on
the host.


I like this a lot better than adding a new REST API to Nova to handle 
orchestrating this from the start.




As always, the guest needs to update the partition table/filesystem if
it wants to benefit from the new free space.

Let me know if this is an implementation you want me to contribute upstream.



I didn't read the patch in detail, but if you're interested in 
contributing this upstream we could use a simple spec to start the 
process. Note that nova spec freeze for Pike is April 13.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Mathieu Gagné
On Mon, Apr 3, 2017 at 12:27 PM, Walter Boring  wrote:
> Actually, this is incorrect.
>
> The sticking point of this all was doing the coordination and initiation of
> workflow from Nova.   Cinder already has the ability to call the driver to
> do the resize of the volume.  Cinder just prevents this now, because there
> is work that has to be done on the attached side to make the new size
> actually show up.
>
> What needs to happen is:
>  A new Nova API needs to be created to initiate and coordinate this effort.
> The API would call Cinder to extend the size, then get the connection
> information from Cinder for that volume, then call os-brick to extend the
> size, then update the domain xml to tell libvirt to extend the size.   The
> end user inside the VM would have to issue the same SCSI bus rescan and
> refresh that happens inside of os-brick, to make the kernel and filesystem
> in the VM recognize the new size.
>
> os-brick does all of the heavy lifting already on the host side of things.
> The Connector API entry point:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/initiator_connector.py#L153
>
> iSCSI example:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L370
>
> os-brick's code works for single path and multipath attached volumes.
> multipath has a bunch of added complexity with resize that should already be
> taken care of here:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L375
>

I would like to share our private implementation (based on Mitaka):
https://gist.github.com/mgagne/9402089c11f8c80f6d6cd49f3db76512

The implementation makes it so Cinder leverages the existing Nova
external-events endpoint to trigger the BDM update and iSCSI rescan on
the host.

As always, the guest needs to update the partition table/filesystem if
it wants to benefit from the new free space.
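
For illustration, the notification boils down to a POST against Nova's
existing os-server-external-events API, roughly like this (the event name
is an assumption for illustration, and nova_endpoint, token, server_uuid
and volume_id are placeholders):

   import requests

   payload = {
       "events": [{
           "name": "volume-extended",   # hypothetical event name
           "server_uuid": server_uuid,  # instance the volume is attached to
           "tag": volume_id,            # tells Nova which BDM to refresh
       }]
   }
   requests.post(nova_endpoint + '/v2.1/os-server-external-events',
                 json=payload, headers={'X-Auth-Token': token})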

Let me know if this is an implementation you want me to contribute upstream.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Walter Boring
Actually, this is incorrect.

The sticking point of this all was doing the coordination and initiation of
workflow from Nova.   Cinder already has the ability to call the driver to
do the resize of the volume.  Cinder just prevents this now, because there
is work that has to be done on the attached side to make the new size
actually show up.

What needs to happen is:
 A new Nova API needs to be created to initiate and coordinate this effort.
  The API would call Cinder to extend the size, then get the connection
information from Cinder for that volume, then call os-brick to extend the
size, then update the domain xml to tell libvirt to extend the size.   The
end user inside the VM would have to issue the same SCSI bus rescan and
refresh that happens inside of os-brick, to make the kernel and filesystem
in the VM recognize the new size.
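
Inside the guest that would typically look something like this (a sketch,
assuming a Linux guest with an ext4 filesystem directly on /dev/sda):

   # Rescan the SCSI device so the guest kernel sees the new size
   $ echo 1 | sudo tee /sys/class/block/sda/device/rescan

   # Grow the filesystem into the new space
   $ sudo resize2fs /dev/sda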

os-brick does all of the heavy lifting already on the host side of things.
The Connector API entry point:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/initiator_connector.py#L153

iSCSI example:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L370

os-brick's code works for single path and multipath attached volumes.
multipath has a bunch of added complexity with resize that should already
be taken care of here:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L375
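
On the host side, the entry point above boils down to something like this
(a sketch; connection_properties is the dict Cinder returns from
initialize_connection for the volume):

   from os_brick.initiator import connector

   # Build an iSCSI connector; use_multipath should mirror Nova's
   # volume_use_multipath setting.
   conn = connector.InitiatorConnector.factory(
       'ISCSI', root_helper='sudo', use_multipath=True)

   # Rescans the device (and multipath map, if any) and returns the
   # new size in bytes.
   new_size = conn.extend_volume(connection_properties)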


Walt

On Sat, Apr 1, 2017 at 10:17 AM, Jay Bryant  wrote:

> Matt,
>
> I think discussion on this goes all the way back to Tokyo.  There was work
> on the Cinder side to send the notification to Nova which I believe all the
> pieces were in place for.  The missing part (sticking point) was doing a
> rescan of the SCSI bus in the node that had the extended volume attached.
>
> Has doing that been solved since Tokyo?
>
> Jay
>
>
> On 4/1/2017 10:34 AM, Matt Riedemann wrote:
>
>> On 3/31/2017 8:55 PM, TommyLike Hu wrote:
>>
>>> There was a time when this feature had been both proposed in Cinder [1]
>>> and Nova [2], but unfortunately no one (correct me if I am wrong) is
>>> going to handle this feature during Pike. We do think extending an
>>> online volume is a beneficial and mostly vendor-supported feature.
>>> We really don't want this feature missed from OpenStack and would like
>>> to continue on. So could anyone share your knowledge of how much work
>>> is left and where I should start?
>>>
>>> Thanks
>>> TommyLike.Hu
>>>
>>> [1] https://review.openstack.org/#/c/272524/
>>> [2]
>>> https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> The nova blueprint description does not contain much in the way of details,
>> but from what is there it sounds a lot like the existing volume swap
>> operation which is triggered from Cinder by a volume migration or retype
>> operation. How do those existing operations not already solve this use case?
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Jay Bryant


On 4/1/2017 4:07 PM, Matt Riedemann wrote:

On 4/1/2017 12:17 PM, Jay Bryant wrote:

Matt,

I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova which I believe
all the pieces were in place for.  The missing part (sticking point) was
doing a rescan of the SCSI bus in the node that had the extended volume
attached.

Has doing that been solved since Tokyo?

Jay



I wasn't in Tokyo so this is all news to me. I don't remember hearing 
about anything like this though.


Ok, I am pretty sure I have notes on this somewhere.  I just need to 
find them.  I will work on doing that as a starting point.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Matt Riedemann

On 4/1/2017 12:17 PM, Jay Bryant wrote:

Matt,

I think discussion on this goes all the way back to Tokyo.  There was
work on the Cinder side to send the notification to Nova which I believe
all the pieces were in place for.  The missing part (sticking point) was
doing a rescan of the SCSI bus in the node that had the extended volume
attached.

Has doing that been solved since Tokyo?

Jay



I wasn't in Tokyo so this is all news to me. I don't remember hearing 
about anything like this though.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

