Re: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-12 Thread Jay S Bryant

Ivan,

Yeah, I saw that was the case, but it seems like there is no point in
time where there isn't a conflict.  We need to get some food at some point,
so anyone who wants to join can, and then we can head to the party if
people want.


Jay


On 11/10/2018 8:07 AM, Ivan Kolodyazhny wrote:

Thanks for organizing this, Jay,

Just in case you missed it, the Matrix Party hosted by Trilio + Red Hat
will be on Tuesday too.



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant wrote:


All,

I am working on scheduling a dinner for the Cinder team (and our
extended family that work on and around Cinder) during the Summit
in Berlin.  I have created an etherpad for people to RSVP for
dinner [1].

It seemed like Tuesday night after the Marketplace Mixer was the
best time for most people.

So, it will be a little later dinner ... 8 pm.  Here is the place:

Location: http://www.dicke-wirtin.de/
Address: Carmerstraße 9, 10623 Berlin, Germany

It looks like the kind of place that will fit for our usual group.

If planning to attend please add your name to the etherpad and I
will get a reservation in over the weekend.

Hope to see you all on Tuesday!

Jay

[1] https://etherpad.openstack.org/p/BER-cinder-outing-planning

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-10 Thread Ivan Kolodyazhny
Thanks for organizing this, Jay,

Just in case you missed it, the Matrix Party hosted by Trilio + Red Hat will
be on Tuesday too.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant  wrote:

> All,
>
> I am working on scheduling a dinner for the Cinder team (and our extended
> family that work on and around Cinder) during the Summit in Berlin.  I have
> created an etherpad for people to RSVP for dinner [1].
>
> It seemed like Tuesday night after the Marketplace Mixer was the best time
> for most people.
>
> So, it will be a little later dinner ... 8 pm.  Here is the place:
> Location:  http://www.dicke-wirtin.de/
> Address:  Carmerstraße 9, 10623 Berlin, Germany
>
> It looks like the kind of place that will fit for our usual group.
>
> If planning to attend please add your name to the etherpad and I will get
> a reservation in over the weekend.
>
> Hope to see you all on Tuesday!
>
> Jay
>
> [1]  https://etherpad.openstack.org/p/BER-cinder-outing-planning
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-04 Thread Rambo
Sorry, I mean using the NFS driver as the cinder-backup driver. I see the
remotefs code implements create_volume_from_snapshot [1]; in this function the
snapshot.status must be 'available'. But before this, in the API part, the
snapshot.status was changed to 'backing_up' [2]. Is there something
wrong? Can you tell me more about this? Thank you very much.




[1]https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/remotefs.py#L1259
[2]https://github.com/openstack/cinder/blob/master/cinder/backup/api.py#L292
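
To illustrate what I mean, here is a simplified sketch based on the two links
above (not the literal cinder code):

    # backup API side (cinder/backup/api.py, roughly): the snapshot status
    # is flipped before the backup service is invoked
    snapshot.status = 'backing_up'          # see [2]
    snapshot.save()

    # remotefs driver side (cinder/volume/drivers/remotefs.py, roughly): the
    # clone path behind create_volume_from_snapshot insists on 'available'
    if snapshot.status != 'available':      # see [1]
        raise exception.InvalidSnapshot(...)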
-- Original --
From:  "Eric Harney";
Date:  Fri, Nov 2, 2018 10:00 PM
To:  "jsbryant"; "OpenStack 
Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about use nfs driver to backup the 
volume snapshot

 
On 11/1/18 4:44 PM, Jay Bryant wrote:
> On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:
> 
>> Hi,all
>>
>>   Recently, I use the nfs driver as the cinder-backup backend, when I
>> use it to backup the volume snapshot, the result is return the
>> NotImplementedError[1].And the nfs.py doesn't has the
>> create_volume_from_snapshot function. Does the community plan to achieve
>> it which is as nfs as the cinder-backup backend?Can you tell me about
>> this?Thank you very much!
>>
>> Rambo,
> 
> The NFS driver doesn't have full snapshot support. I am not sure if that
> function missing was an oversight or not. I would reach out to Eric Harney
> as he implemented that code.
> 
> Jay
> 

create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.

But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?


>>
>>
>> [1]
>> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>>
>>
>>
>>
>>
>>
>>
>>
>> Best Regards
>> Rambo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-02 Thread Eric Harney

On 11/1/18 4:44 PM, Jay Bryant wrote:

On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:


Hi,all

  Recently, I use the nfs driver as the cinder-backup backend, when I
use it to backup the volume snapshot, the result is return the
NotImplementedError[1].And the nfs.py doesn't has the
create_volume_from_snapshot function. Does the community plan to achieve
it which is as nfs as the cinder-backup backend?Can you tell me about
this?Thank you very much!

Rambo,


The NFS driver doesn't have full snapshot support. I am not sure if that
function missing was an oversight or not. I would reach out to Eric Harney
as he implemented that code.

Jay



create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.
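
Roughly, the relationship looks like this (class names from memory, so treat
it as an approximation rather than the exact cinder code):

    # cinder/volume/drivers/nfs.py (approximate)
    class NfsDriver(remotefs.RemoteFSSnapDriverDistributed):
        # create_volume_from_snapshot is inherited from the RemoteFS base
        # class, which is why nfs.py itself does not define it even though
        # the driver supports the operation.
        ...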


But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?






[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142








Best Regards
Rambo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-01 Thread Jay Bryant
On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:

> Hi,all
>
>  Recently, I use the nfs driver as the cinder-backup backend, when I
> use it to backup the volume snapshot, the result is return the
> NotImplementedError[1].And the nfs.py doesn't has the
> create_volume_from_snapshot function. Does the community plan to achieve
> it which is as nfs as the cinder-backup backend?Can you tell me about
> this?Thank you very much!
>
> Rambo,

The NFS driver doesn't have full snapshot support. I am not sure if that
missing function was an oversight or not. I would reach out to Eric Harney
as he implemented that code.

Jay

>
>
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>
>
>
>
>
>
>
>
> Best Regards
> Rambo
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-25 Thread Boxiang Zhu


Great, Jon. Thanks for your reply. I am looking forward to your report.


Cheers,
Boxiang
On 10/23/2018 22:01, Jon Bernard wrote:
* melanie witt  wrote:
On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

--
Jon


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621

Cheers,
-melanie






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread melanie witt

On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?


Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.


OK, thanks for this info, Jon. I'll be interested in your findings.

Cheers,
-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread Jay S. Bryant



On 10/23/2018 9:01 AM, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

Jon,

Thanks for the explanation and investigation!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > 
> > The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
> > is only for available volume migration between two pools from the same
> > ceph cluster.
> > If the volume is in-use status[2], it will call the generic migration
> > function. So that as you
> > describe it, on the nova side, it raises NotImplementedError(_("Swap
> > only supports host devices").
> > The get_config of net volume[3] has not source_path.
> 
> Ah, OK, so you're trying to migrate a volume across two separate ceph
> clusters, and that is not supported.
> 
> > So does anyone try to succeed to migrate volume(in-use) with ceph
> > backend or is anyone doing something of it?
> 
> Hopefully someone can share their experience with trying to migrate volumes
> across separate ceph clusters. I unfortunately don't know anything about it.

If this is the case, then Cinder cannot request a storage-specific
migration which is typically more efficient.  The migration will require
a complete copy of each allocated block.  Whether the volume is attached
or not will determine who (cinder or nova) will perform the operation.

-- 
Jon

> 
> Best,
> -melanie
> 
> > [1] https://review.openstack.org/#/c/296150
> > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
> > [3] 
> > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > I created a new vm and a new volume with type 'ceph'[So that the volume
> > will be created on one of two hosts. I assume that the volume created on
> > host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
> > vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
> > host dev@rbd-2#ceph, but it failed with the exception
> > 'NotImplementedError(_("Swap only supports host devices")'.
> > 
> > So that, my real problem is that is there any work to migrate
> > volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
> > in the same ceph cluster?
> > The difference between the spec[2] with my scope is only one is
> > *available*(the spec) and another is *in-use*(my scope).
> > 
> > 
> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > [2] https://review.openstack.org/#/c/296150
> 
> Ah, I think I understand now, thank you for providing all of those details.
> And I think you explained it in your first email, that cinder supports
> migration of ceph volumes if they are 'available' but not if they are
> 'in-use'. Apologies that I didn't get your meaning the first time.
> 
> I see now the code you were referring to is this [3]:
> 
> if volume.status not in ('available', 'retyping', 'maintenance'):
>     LOG.debug('Only available volumes can be migrated using backend '
>               'assisted migration. Falling back to generic migration.')
>     return refuse_to_migrate
> 
> So because your volume is not 'available', 'retyping', or 'maintenance',
> it's falling back to generic migration, which will end up with an error in
> nova because the source_path is not set in the volume config.
> 
> Can anyone from the cinder team chime in about whether the ceph volume
> migration could be expanded to allow migration of 'in-use' volumes? Is there
> a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

-- 
Jon

> 
> [3] 
> https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621
> 
> Cheers,
> -melanie
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-22 Thread melanie witt

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph'[So that the volume 
will be created on one of two hosts. I assume that the volume created on 
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the 
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to 
host dev@rbd-2#ceph, but it failed with the exception 
'NotImplementedError(_("Swap only supports host devices")'.


So that, my real problem is that is there any work to migrate 
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool) 
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is 
*available*(the spec) and another is *in-use*(my scope).



[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those 
details. And I think you explained it in your first email, that cinder 
supports migration of ceph volumes if they are 'available' but not if 
they are 'in-use'. Apologies that I didn't get your meaning the first time.


I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance', 
it's falling back to generic migration, which will end up with an error 
in nova because the source_path is not set in the volume config.


Can anyone from the cinder team chime in about whether the ceph volume 
migration could be expanded to allow migration of 'in-use' volumes? Is 
there a reason not to allow migration of 'in-use' volumes?


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621


Cheers,
-melanie






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Boxiang Zhu


Jay and Melanie, it's my fault for letting you misunderstand the problem; I
should have described it more clearly. My problem is not migrating volumes
between two ceph clusters.


I have two clusters: one is an openstack cluster (all-in-one env, hostname is
dev) and the other is a ceph cluster. I omit the integration configuration for
openstack and ceph [1]. The relevant part of cinder.conf is as follows:


[DEFAULT]
enabled_backends = rbd-1,rbd-2
..
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81
[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81


There will be two hosts named dev@rbd-1#ceph and dev@rbd-2#ceph.
Then I create a volume type named 'ceph' with the command 'cinder type-create 
ceph' and add extra_spec 'volume_backend_name=ceph' for it with the command 
'cinder type-key  set volume_backend_name=ceph'. 


I created a new vm and a new volume with type 'ceph' [so the volume will be
created on one of the two hosts; I assume the volume was created on host
dev@rbd-1#ceph this time]. The next step is to attach the volume to the vm. At
last I want to migrate the volume from host dev@rbd-1#ceph to host
dev@rbd-2#ceph, but it failed with the exception 'NotImplementedError(_("Swap
only supports host devices"))'.


So my real question is: is there any work to migrate an in-use (ceph rbd)
volume from one host (pool) to another host (pool) in the same ceph cluster?
The only difference between the spec [2] and my scope is that one is
'available' (the spec) and the other is 'in-use' (my scope).




[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Cheers,
Boxiang
On 10/21/2018 23:19, Jay S. Bryant wrote:

Boxiang,

I have not heard any discussion of extending this functionality for Ceph to work 
between different Ceph Clusters.  I wasn't aware, however, that the existing 
spec was limited to one Ceph cluster.  So, that is good to know.

I would recommend reaching out to Jon Bernard or Eric Harney for guidance on 
how to proceed.  They work closely with the Ceph driver and could provide 
insight.

Jay




On 10/19/2018 10:21 AM, Boxiang Zhu wrote:



Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec[1] 
is only for available volume migration between two pools from the same ceph 
cluster.
If the volume is in-use status[2], it will call the generic migration function. 
So that as you 
describe it, on the nova side, it raises NotImplementedError(_("Swap only 
supports host devices"). 
The get_config of net volume[3] has not source_path.


So does anyone try to succeed to migrate volume(in-use) with ceph backend or is 
anyone doing something of it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [cinder]ceph rbd replication group support

2018-10-21 Thread Jay S. Bryant
I would reach out to Lisa Li (lixiaoy1) on Cinder to see if this is 
something they may pick back up.  She has been more active in the 
community lately and may be able to look at this again or at least have 
good guidance for you.


Thanks!

Jay



On 10/19/2018 1:14 AM, 王俊 wrote:


Hi:

I have a question about rbd replication group support: I want to know the plan
or roadmap for it. Is anybody working on it?


Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support


Thanks



Confidential: This message is intended only for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or distribute it in any way, and please notify the sender of the misdelivery. Thank you!





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Jay S. Bryant

Boxiang,

I have not heard any discussion of extending this functionality for Ceph 
to work between different Ceph Clusters.  I wasn't aware, however, that 
the existing spec was limited to one Ceph cluster. So, that is good to know.


I would recommend reaching out to Jon Bernard or Eric Harney for 
guidance on how to proceed.  They work closely with the Ceph driver and 
could provide insight.


Jay


On 10/19/2018 10:21 AM, Boxiang Zhu wrote:


Hi melanie, thanks for your reply.

The version of my cinder and nova is Rocky. The scope of the cinder 
spec[1]
is only for available volume migration between two pools from the same 
ceph cluster.
If the volume is in-use status[2], it will call the generic migration 
function. So that as you
describe it, on the nova side, it raises NotImplementedError(_("Swap 
only supports host devices").

The get_config of net volume[3] has not source_path.

So does anyone try to succeed to migrate volume(in-use) with ceph 
backend or is anyone doing something of it?


[1] https://review.openstack.org/#/c/296150
[2] 
https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101



Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:


On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:

When I use the LVM backend to create the volume, then attach
it to a vm.
I can migrate the volume(in-use) from one host to another. The
nova
libvirt will call the 'rebase' to finish it. But if using ceph
backend,
it raises exception 'Swap only supports host devices'. So now
it does
not support to migrate volume(in-use). Does anyone do this
work now? Or
Is there any way to let me migrate volume(in-use) with ceph
backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:


https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be
something
additional missing that we would need to do to make the migration
work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
is only for available volume migration between two pools from the same 
ceph cluster.
If the volume is in-use status[2], it will call the generic migration 
function. So that as you
describe it, on the nova side, it raises NotImplementedError(_("Swap 
only supports host devices").

The get_config of net volume[3] has not source_path.


Ah, OK, so you're trying to migrate a volume across two separate ceph 
clusters, and that is not supported.


So does anyone try to succeed to migrate volume(in-use) with ceph 
backend or is anyone doing something of it?


Hopefully someone can share their experience with trying to migrate 
volumes across separate ceph clusters. I unfortunately don't know 
anything about it.


Best,
-melanie


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread Boxiang Zhu


Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec [1]
is only available-volume migration between two pools in the same ceph
cluster.
If the volume is in 'in-use' status [2], it will call the generic migration
function, so that, as you describe, on the nova side it raises
NotImplementedError(_("Swap only supports host devices")).
The get_config of the net volume [3] has no source_path.


So has anyone succeeded in migrating an in-use volume with the ceph backend, or
is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm. 
I can migrate the volume(in-use) from one host to another. The nova 
libvirt will call the 'rebase' to finish it. But if using ceph backend, 
it raises exception 'Swap only supports host devices'. So now it does 
not support to migrate volume(in-use). Does anyone do this work now? Or 
Is there any way to let me migrate volume(in-use) with ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to 
enable migration of in-use volumes with ceph semi-recently (Queens).


On the nova side, the code looks for the source_path in the volume 
config, and if there is not one present, it raises 
NotImplementedError(_("Swap only supports host devices"). So in your 
environment, the volume configs must be missing a source_path.
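
Roughly, the nova-side check looks like this (simplified from the libvirt
driver's swap_volume path, not the exact code):

    # nova/virt/libvirt/driver.py (approximate): swapping relies on the
    # backing file path from the new volume's libvirt config; a network
    # disk such as rbd has no source_path, so the swap is refused.
    conf = self._get_volume_config(new_connection_info, disk_info)
    if not conf.source_path:
        raise NotImplementedError(_("Swap only supports host devices"))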


If you are using at least Queens version, then there must be something 
additional missing that we would need to do to make the migration work.


[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-09 Thread Erlon Cruz
Hi Ghanshyam,


Though I have concern over running those tests by default(making config
> options True by default), because it is not confirmed all cinder backends
> implements this functionality and it only works for nova libvirt driver. We
> need to keep config options default as False and Devstack/CI can make it
> True to run the tests.
>
>
The discussion at the PTG was about whether we should run this on the gate,
which will actually break some CIs. Once that happens, vendors will have 3 options:

    #1: fix their drivers by properly implementing volume_extend and run
the positive tests
    #2: fix their drivers by reporting that they do not support volume_extend
and run the negative tests
    #3: disable the volume extend tests altogether (not recommended), but this
still gives us a hint on whether the vendor supports this or not


> If this feature becomes mandatory functionality (or cinder say standard
> feature i think) to implement for every backends and it work with all nova
> driver also(in term of instance action events) then, we can enable this
> feature tests by default. But until then, we should keep them disable by
> default in Tempest but we can enable them on gate via Devstack (patch you
> mentioned) and test them daily on integrated-gate.
>

It's not mandatory that the driver implement online_extend, but if the
driver does not support it, the driver should report so.


> Overall, I am ok with Devstack change to make these tests enable for every
> Cinder backends but we need to keep the config options false in Tempest.
>

So, the outcome from the PTG was that we would first merge the tempest test
and give vendors time to get their drivers fixed. Then we would change it
in devstack to push vendors to fix their drivers in case they hadn't done
that yet.

Erlon



>
> I will review those patch and leave comments on gerrit (i saw those patch
> introduce new config option than using the existing one)
>
> -gmann
>
>  > Please let us know if you have any questions or concerns about it.
>  > Kind regards,
>  > Erlon
>  > [1] https://review.openstack.org/#/c/572188/
>  > [2] https://review.openstack.org/#/c/578463/
> __
>  > OpenStack Development Mailing List (not for usage questions)
>  > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  >
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Jay S Bryant



On 10/8/2018 8:54 AM, Sean McGinnis wrote:

On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:

In Denver, we agree to add a new "re-image" API in cinder to support
volume-backed server rebuild with a new image.

An initial blueprint has been drafted in [3], welcome to review it, thanks.
: )

[snip]

The "force" parameter idea comes from [4], means that
1. we can re-image an "available" volume directly.
2. we can't re-image "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with "force"
parameter.

And it means nova need to always call re-image API with an extra "force"
parameter,
because the volume status is "in-use" or "reserve" when we rebuild the
server.

*So, what's you idea? Do we really want to add this "force" parameter?*


I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.
I concur with Sean's assessment.  I think putting a safety switch in 
place in this design is important to ensure that people using the API 
directly are less likely to do something that they may not actually want 
to do.


Jay

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4]
https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Matt Riedemann

On 10/9/2018 8:04 AM, Erlon Cruz wrote:
If you are planning to re-image an image on a bootable volume then yes 
you should use a force parameter. I have lost the discussion about this 
on PTG. What is the main use cases? This seems to me something that 
could be leveraged with the current revert-to-snapshot API, which would 
be even better. The flow would be:


1 - create a volume from image
2 - create an snapshot
3 - do whatever you wan't
4 - revert the snapshot

Would that help in your the use cases?


As the spec mentions, this is for enabling re-imaging the root volume on 
a server when nova rebuilds the server. That is not allowed today 
because the compute service can't re-image the root volume. We don't 
want to jump through a bunch of gross alternative hoops to create a new 
root volume with the new image and swap them out (the reasons why are in 
the spec, and have been discussed previously in the ML). So nova is 
asking cinder to provide an API to change the image in a volume which 
the nova rebuild operation will use to re-image the root volume on a 
volume-backed server. I don't know if revert-to-snapshot solves that use 
case, but it doesn't sound like it. With the nova rebuild API, the user 
provides an image reference and that is used to re-image the root disk 
on the server. So it might not be a snapshot, it could be something new.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Erlon Cruz
If you are planning to re-image a bootable volume then yes, you should use a
force parameter. I missed the discussion about this at the PTG. What are the
main use cases? This seems to me like something that could be leveraged with
the current revert-to-snapshot API, which would be even better. The flow would
be:

1 - create a volume from image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your the use cases?

Em seg, 8 de out de 2018 às 10:54, Sean McGinnis 
escreveu:

> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> > In Denver, we agree to add a new "re-image" API in cinder to support
> > volume-backed server rebuild with a new image.
> >
> > An initial blueprint has been drafted in [3], welcome to review it,
> thanks.
> > : )
> >
> > [snip]
> >
> > The "force" parameter idea comes from [4], means that
> > 1. we can re-image an "available" volume directly.
> > 2. we can't re-image "in-use"/"reserved" volume directly.
> > 3. we can only re-image an "in-use"/"reserved" volume with "force"
> > parameter.
> >
> > And it means nova need to always call re-image API with an extra "force"
> > parameter,
> > because the volume status is "in-use" or "reserve" when we rebuild the
> > server.
> >
> > *So, what's you idea? Do we really want to add this "force" parameter?*
> >
>
> I would prefer we have the "force" parameter, even if it is something that
> will
> always be defaulted to True from Nova.
>
> Having this exposed as a REST API means anyone could call it, not just Nova
> code. So as protection from someone doing something that they are not
> really
> clear on the full implications of, having a flag in there to guard volumes
> that
> are already attached or reserved for shelved instances is worth the very
> minor
> extra overhead.
>
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild
> L12
> > [3] https://review.openstack.org/#/c/605317
> > [4]
> >
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> >
> > Regards,
> > Yikun
> > 
> > Jiang Yikun(Kero)
> > Mail: yikunk...@gmail.com
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Sean McGinnis
On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> In Denver, we agree to add a new "re-image" API in cinder to support
> volume-backed server rebuild with a new image.
> 
> An initial blueprint has been drafted in [3], welcome to review it, thanks.
> : )
> 
> [snip]
> 
> The "force" parameter idea comes from [4], means that
> 1. we can re-image an "available" volume directly.
> 2. we can't re-image "in-use"/"reserved" volume directly.
> 3. we can only re-image an "in-use"/"reserved" volume with "force"
> parameter.
> 
> And it means nova need to always call re-image API with an extra "force"
> parameter,
> because the volume status is "in-use" or "reserve" when we rebuild the
> server.
> 
> *So, what's you idea? Do we really want to add this "force" parameter?*
> 

I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.
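
To restate the proposed semantics from the spec as a quick sketch (purely
illustrative, a hypothetical helper rather than anything the spec defines):

    def reimage_allowed(volume_status, force):
        if volume_status == 'available':
            return True        # rule 1: re-image an available volume directly
        if volume_status in ('in-use', 'reserved'):
            return force       # rules 2/3: only allowed when force is passed
        return False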

> [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
> [3] https://review.openstack.org/#/c/605317
> [4]
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> 
> Regards,
> Yikun
> 
> Jiang Yikun(Kero)
> Mail: yikunk...@gmail.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-07 Thread Ghanshyam Mann
  On Sat, 06 Oct 2018 01:42:11 +0900 Erlon Cruz  wrote 
 
 > Hey folks,
 > Following up on the discussions that we had on the Denver PTG, the Cinder
 > team is planning to enable online volume_extend tests [1] to be run by
 > default. Currently, those tests are only run by some CI systems and infra
 > jobs that explicitly set it to be so.
 > We are also adding a negative test and an associated option in tempest [2]
 > to allow vendor drivers that do not support online extending to be tested.
 > This patch will be merged first and, after a reasonable time for people to
 > check whether their backends support it or not, we will proceed and merge
 > the devstack patch [1], triggering the tests in all CIs and infra jobs.

Thanks Erlon. +1 on running those tests on gate.  

Though I have a concern over running those tests by default (making the config
option True by default), because it is not confirmed that all cinder backends
implement this functionality, and it only works for the nova libvirt driver. We
need to keep the config option's default as False, and Devstack/CI can set it
to True to run the tests.
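
For reference, the existing toggle lives in Tempest's volume-feature-enabled
group, which Devstack/CI can flip like this (illustrative tempest.conf snippet;
the default stays False):

    [volume-feature-enabled]
    # run tests that extend a volume while it is attached to an instance
    extend_attached_volume = True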

If this feature becomes mandatory functionality to implement for every backend
(or what cinder calls a standard feature, I think) and it works with all nova
drivers as well (in terms of instance action events), then we can enable these
feature tests by default. But until then, we should keep them disabled by
default in Tempest; we can enable them on the gate via Devstack (the patch you
mentioned) and test them daily on the integrated gate.

Overall, I am OK with the Devstack change to enable these tests for every
Cinder backend, but we need to keep the config options false in Tempest.

I will review those patches and leave comments on gerrit (I saw those patches
introduce a new config option rather than using the existing one).

-gmann

 > Please let us know if you have any questions or concerns about it.
 > Kind regards,
 > Erlon
 > [1] https://review.openstack.org/#/c/572188/
 > [2] https://review.openstack.org/#/c/578463/
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Matt Riedemann

On 10/3/2018 9:45 AM, Jay S. Bryant wrote:

Team,

We had discussed the possibility of adding Gorka to the stable core team 
during the PTG.  He does review a number of our backport patches and is 
active in that area.


If there are no objections in the next week I will add him to the list.

Thanks!

Jay (jungleboyj)


+1 from me in the stable-maint-core peanut gallery.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Sean McGinnis
On Wed, Oct 03, 2018 at 09:45:25AM -0500, Jay S. Bryant wrote:
> Team,
> 
> We had discussed the possibility of adding Gorka to the stable core team
> during the PTG.  He does review a number of our backport patches and is
> active in that area.
> 
> If there are no objections in the next week I will add him to the list.
> 
> Thanks!
> 
> Jay (jungleboyj)
> 

+1 from me. Gorka has shown to understand the stable policies and I think his
coming from a company that has a vested interest in stable backports would make
him a good candidate for stable core.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Ivan Kolodyazhny
+1 from me to Gorka!



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Wed, Oct 3, 2018 at 5:47 PM Jay S. Bryant  wrote:

> Team,
>
> We had discussed the possibility of adding Gorka to the stable core team
> during the PTG.  He does review a number of our backport patches and is
> active in that area.
>
> If there are no objections in the next week I will add him to the list.
>
> Thanks!
>
> Jay (jungleboyj)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-28 Thread Tobias Urdin

Thanks Sean!

I did a quick sanity check on the backup part in the puppet-cinder module and
there is no opinionated default value there which needs to be changed.

Best regards

On 09/27/2018 08:37 PM, Sean McGinnis wrote:

This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated the use of specifying the module for configuring
backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
the backwards compatibility handling for configs that still used the old way.

Looking through a quick search, it appears there may be some tools that are
still defaulting to setting the backup driver name using the old module path. If your
project does not specify the full driver class path, please update these to do
so now.
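For example, the change amounts to switching from the module-only value to the
full class path in cinder.conf; for the Swift backup driver that would be roughly
(the class name here is quoted from memory, so double-check it against the backup
driver docs for your release):

    # old, module-only form (no longer accepted):
    backup_driver = cinder.backup.drivers.swift
    # new, full class path:
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver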

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Mohammed Naser
Thanks for the email Sean.

https://review.openstack.org/605846 Fix Cinder backup to use full paths

I think this should cover us, please let me know if we did things right.

FYI: the docs all still seem to point at the old paths..

https://docs.openstack.org/cinder/latest/configuration/block-storage/backup-drivers.html
On Thu, Sep 27, 2018 at 2:33 PM Sean McGinnis  wrote:
>
> This probably applies to all deployment tools, so hopefully this reaches the
> right folks.
>
> In Havana, Cinder deprecated the use of specifying the module for configuring
> backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
> the backwards compatibility handling for configs that still used the old way.
>
> Looking through a quick search, it appears there may be some tools that are
> still defaulting to setting the backup driver name using the old module path. If your
> project does not specify the full driver class path, please update these to do
> so now.
>
> Any questions, please reach out here or in the #openstack-cinder channel.
>
> Thanks!
> Sean
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Lance Bragstad
For those who may be following along and are not familiar with what we mean
by federated auto-provisioning [0].

[0]
https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning
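
In short, those mapping rules let a federated login create the user, project and
role assignments on the fly. A rule looks roughly like the snippet below
(paraphrased from memory, so treat the exact keys as an assumption and use the
link above as the authoritative reference):

    {
        "rules": [
            {
                "local": [
                    {"user": {"name": "{0}"}},
                    {"projects": [
                        {"name": "edge-project",
                         "roles": [{"name": "member"}]}
                    ]}
                ],
                "remote": [{"type": "REMOTE_USER"}]
            }
        ]
    }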

On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg 
wrote:

> This discussion was also not about user assigned IDs, but predictable IDs
> with the auto provisioning. We still want it to be something keystone
> controls (locally). It might be a hash of the domain ID and a value from the assertion
> (similar to the LDAP user ID generator). As long as, within an environment,
> the IDs are predictable when auto provisioning via federation, we should be
> good. And the problem of the totally unknown ID until provisioning could be
> made less of an issue for someone working within a massively federated edge
> environment.
>
> I don't want user/explicit admin set IDs.
>
> On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:
>
>> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
>> > Thanks for the summary, Ildiko. I have some questions inline.
>> >
>> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>> >
>> > 
>> >
>> >>
>> >> We agreed to prefer federation for Keystone and came up with two work
>> >> items to cover missing functionality:
>> >>
>> >> * Keystone to trust a token from an ID Provider master and when the
>> auth
>> >> method is called, perform an idempotent creation of the user, project
>> >> and role assignments according to the assertions made in the token
>> >
>> > This sounds like it is based on the customizations done at Oath, which
>> to my recollection did not use the actual federation implementation in
>> keystone due to its reliance on Athenz (I think?) as an identity manager.
>> Something similar can be accomplished in standard keystone with the mapping
>> API in keystone which can cause dynamic generation of a shadow user,
>> project and role assignments.
>> >
>> >> * Keystone should support the creation of users and projects with
>> >> predictable UUIDs (eg.: hash of the name of the users and projects).
>> >> This greatly simplifies Image federation and telemetry gathering
>> >
>> > I was in and out of the room and don't recall this discussion exactly.
>> We have historically pushed back hard against allowing setting a project ID
>> via the API, though I can see predictable-but-not-settable as less
>> problematic. One of the use cases from the past was being able to use the
>> same token in different regions, which is problematic from a security
>> perspective. Is that that idea here? Or could someone provide more details
>> on why this is needed?
>>
>> Hi Colleen,
>>
>> I wasn't in the room for this conversation either, but I believe the
>> "use case" wanted here is mostly a convenience one. If the edge
>> deployment is composed of hundreds of small Keystone installations and
>> you have a user (e.g. an NFV MANO user) which should have visibility
>> across all of those Keystone installations, it becomes a hassle to need
>> to remember (or in the case of headless users, store some lookup of) all
>> the different tenant and user UUIDs for what is essentially the same
>> user across all of those Keystone installations.
>>
>> I'd argue that as long as it's possible to create a Keystone tenant and
>> user with a unique name within a deployment, and as long as it's
>> possible to authenticate using the tenant and user *name* (i.e. not the
>> UUID), then this isn't too big of a problem. However, I do know that a
>> bunch of scripts and external tools rely on setting the tenant and/or
>> user via the UUID values and not the names, so that might be where this
>> feature request is coming from.
>>
>> Hope that makes sense?
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread James Penick
Hey Colleen,

>This sounds like it is based on the customizations done at Oath, which to
my recollection did not use the actual federation implementation in
keystone due to its reliance on Athenz (I think?) as an identity manager.
Something similar can be accomplished in standard keystone with the mapping
API in keystone which can cause dynamic generation of a shadow user,
project and role assignments.

You're correct, this was more about the general design of asymmetrical
token based authentication rather that our exact implementation with
Athenz. We didn't use the shadow users because Athenz authentication in our
implementation is done via an 'ntoken'  which is Athenz' older method for
identification, so it was it more straightforward for us to resurrect the
PKI driver. The new way is via mTLS, where the user can identify themselves
via a client cert. I imagine we'll need to move our implementation to use
shadow users as a part of that change.

>We have historically pushed back hard against allowing setting a project
ID via the API, though I can see predictable-but-not-settable as less
problematic.

Yup, predictable-but-not-settable is what we need. Basically as long as the
uuid is a hash of the string, we're good. I definitely don't want to be
able to set a user ID or project ID via API, because of the security and
operability problems that could arise. In my mind this would just be a
config setting.

>One of the use cases from the past was being able to use the same token in
different regions, which is problematic from a security perspective. Is
that that idea here? Or could someone provide more details on why this is
needed?

Well, sorta. As far as we're concerned you can authenticate to keystone
in each region independently using your credential from the IdP. Our use
cases are more about simplifying federation of other systems, like Glance.
Say I create an image and a member list for that image. I'd like to be able
to copy that image *and* all of its metadata straight across to another
cluster and have things Just Work without needing to look up and resolve
the new UUIDs on the new cluster.

However, for deployers who wish to use Keystone as their IdP, then in that
case they'll need to use that keystone credential to establish a credential
in the keystone cluster in that region.

-James

On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy  wrote:

> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
> 
>
> >
> > We agreed to prefer federation for Keystone and came up with two work
> > items to cover missing functionality:
> >
> > * Keystone to trust a token from an ID Provider master and when the auth
> > method is called, perform an idempotent creation of the user, project
> > and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
>
> > * Keystone should support the creation of users and projects with
> > predictable UUIDs (eg.: hash of the name of the users and projects).
> > This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that that idea here? Or could someone provide more details
> on why this is needed?
>
> Were there any volunteers to help write up specs and work on the
> implementations in keystone?
>
> 
>
> Colleen (cmurphy)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Giulio Fidente
hi,

thanks for sharing this!

At TripleO we're looking at implementing in Stein deployment of at least
1 regional DC and N edge zones. More comments below.

On 9/25/18 11:21 AM, Ildiko Vancsa wrote:
> Hi,
>
> Hereby I would like to give you a short summary on the discussions
that happened at the PTG in the area of edge.
>
> The Edge Computing Group sessions took place on Tuesday where our main
activity was to draw an overall architecture diagram to capture the
basic setup and requirements of edge towards a set of OpenStack
services. Our main and initial focus was around Keystone and Glance, but
discussion with other project teams such as Nova, Ironic and Cinder also
happened later during the week.
>
> The edge architecture diagrams we drew are part of a so called Minimum
Viable Product (MVP) which refers to the minimalist nature of the setup
where we didn’t try to cover all aspects but rather define a minimum set
of services and requirements to get to a functional system. This
architecture will evolve further as we collect more use cases and
requirements.
>
> To describe edge use cases on a higher level with Mobile Edge as a use
case in the background we identified three main building blocks:
>
> * Main or Regional Datacenter (DC)
> * Edge Sites
> * Far Edge Sites or Cloudlets
>
> We examined the architecture diagram with the following user stories
in mind:
>
> * As a deployer of OpenStack I want to minimize the number of control
planes I need to manage across a large geographical region.
> * As a user of OpenStack I expect instance autoscale continues to
function in an edge site if connectivity is lost to the main datacenter.
> * As a deployer of OpenStack I want disk images to be pulled to a
cluster on demand, without needing to sync every disk image everywhere.
> * As a user of OpenStack I want to manage all of my instances in a
region (from regional DC to far edge cloudlets) via a single API endpoint.
>
> We concluded to talk about service requirements in two major categories:
>
> 1. The Edge sites are fully operational in case of a connection loss
between the Regional DC and the Edge site which requires control plane
services running on the Edge site
> 2. Having full control on the Edge site is not critical in case of a
connection loss between the Regional DC and an Edge site which can be
satisfied by having the control plane services running only in the
Regional DC
>
> In the first case the orchestration of the services becomes harder and
is not necessarily solved yet, while in the second case you have
centralized control but lose functionality on the Edge sites in the
event of a connection loss.
>
> We did not discuss things such as HA at the PTG and we did not go into
details on networking during the architectural discussion either.

while TripleO used to rely on pacemaker to manage cinder-volume A/P in
the controlplane, we'd like to push for cinder-volume A/A in the edge
zone and avoid the deployment of pacemaker in the edge zones

the safety of cinder-volume A/A seems to depend mostly on the backend
driver and for RBD we should be good

> We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:
>
> * Keystone to trust a token from an ID Provider master and when the
auth method is called, perform an idempotent creation of the user,
project and role assignments according to the assertions made in the token
> * Keystone should support the creation of users and projects with
predictable UUIDs (eg.: hash of the name of the users and projects).
This greatly simplifies Image federation and telemetry gathering
>
> For Glance we explored image caching and spent some time discussing
the option to also cache metadata so a user can boot new instances at
the edge in case of a network connection loss which would result in
being disconnected from the registry:
>
> * I as a user of Glance, want to upload an image in the main
datacenter and boot that image in an edge datacenter. Fetch the image to
the edge datacenter with its metadata
>
> We are still in the progress of documenting the discussions and draw
the architecture diagrams and flows for Keystone and Glance.

for glance we'd like to deploy only one glance-api in the regional dc
and configure glance/cache in each edge zone ... pointing all instances
to a shared database
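
for reference, enabling the cache on an edge glance-api is mostly a config
change along these lines (option and flavor names quoted from memory, so please
double-check them against the glance docs before relying on them):

    # glance-api.conf on the edge node
    [DEFAULT]
    image_cache_dir = /var/lib/glance/image-cache

    [paste_deploy]
    flavor = keystone+cachemanagement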

this should solve the metadata problem and also provide for storage
"locality" into every edge zone

> In addition to the above we went through Dublin PTG wiki
(https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG)
capturing requirements:
>
> * we agreed to consider the list of requirements on the wiki finalized
for now
> * agreed to move there the additional requirements listed on the Use
Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases)
wiki page
>
> For the details on the discussions with related OpenStack projects you
can check the following etherpads for notes:
>
> * Cinder:
https://etherpad.opens

Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Morgan Fainberg
This discussion was also not about user assigned IDs, but predictable IDs
with the auto provisioning. We still want it to be something keystone
controls (locally). It might be a hash of the domain ID and a value from the assertion
(similar to the LDAP user ID generator). As long as, within an environment,
the IDs are predictable when auto provisioning via federation, we should be
good. And the problem of the totally unknown ID until provisioning could be
made less of an issue for someone working within a massively federated edge
environment.

I don't want user/explicit admin set IDs.
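
To make that concrete, here is a minimal sketch of this kind of deterministic
generation (just an illustration using Python's uuid5, not what keystone
actually implements; the namespace and the exact inputs hashed are assumptions):

    import uuid

    # One fixed namespace so every keystone in the deployment derives the same IDs.
    # The deployment-wide namespace value used here is an assumption for the example.
    NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example-edge-deployment")

    def predictable_id(domain_id, name):
        # The same (domain_id, name) pair always yields the same UUID, so an
        # auto-provisioned user or project gets an identical ID in every keystone.
        return uuid.uuid5(NAMESPACE, "%s:%s" % (domain_id, name)).hex

    print(predictable_id("default", "nfv-mano-user"))

Run that against any keystone in the environment and the same (domain, name)
pair produces the same ID, which is the property we are after.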

On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:

> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> > Thanks for the summary, Ildiko. I have some questions inline.
> >
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
> >
> > 
> >
> >>
> >> We agreed to prefer federation for Keystone and came up with two work
> >> items to cover missing functionality:
> >>
> >> * Keystone to trust a token from an ID Provider master and when the auth
> >> method is called, perform an idempotent creation of the user, project
> >> and role assignments according to the assertions made in the token
> >
> > This sounds like it is based on the customizations done at Oath, which
> to my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
> >
> >> * Keystone should support the creation of users and projects with
> >> predictable UUIDs (eg.: hash of the name of the users and projects).
> >> This greatly simplifies Image federation and telemetry gathering
> >
> > I was in and out of the room and don't recall this discussion exactly.
> We have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that that idea here? Or could someone provide more details
> on why this is needed?
>
> Hi Colleen,
>
> I wasn't in the room for this conversation either, but I believe the
> "use case" wanted here is mostly a convenience one. If the edge
> deployment is composed of hundreds of small Keystone installations and
> you have a user (e.g. an NFV MANO user) which should have visibility
> across all of those Keystone installations, it becomes a hassle to need
> to remember (or in the case of headless users, store some lookup of) all
> the different tenant and user UUIDs for what is essentially the same
> user across all of those Keystone installations.
>
> I'd argue that as long as it's possible to create a Keystone tenant and
> user with a unique name within a deployment, and as long as it's
> possible to authenticate using the tenant and user *name* (i.e. not the
> UUID), then this isn't too big of a problem. However, I do know that a
> bunch of scripts and external tools rely on setting the tenant and/or
> user via the UUID values and not the names, so that might be where this
> feature request is coming from.
>
> Hope that makes sense?
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Jay Pipes

On 09/26/2018 05:10 AM, Colleen Murphy wrote:

Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:





We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth
method is called, perform an idempotent creation of the user, project
and role assignments according to the assertions made in the token


This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.


* Keystone should support the creation of users and projects with
predictable UUIDs (eg.: hash of the name of the users and projects).
This greatly simplifies Image federation and telemetry gathering


I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that that idea 
here? Or could someone provide more details on why this is needed?


Hi Colleen,

I wasn't in the room for this conversation either, but I believe the 
"use case" wanted here is mostly a convenience one. If the edge 
deployment is composed of hundreds of small Keystone installations and 
you have a user (e.g. an NFV MANO user) which should have visibility 
across all of those Keystone installations, it becomes a hassle to need 
to remember (or in the case of headless users, store some lookup of) all 
the different tenant and user UUIDs for what is essentially the same 
user across all of those Keystone installations.


I'd argue that as long as it's possible to create a Keystone tenant and 
user with a unique name within a deployment, and as long as it's 
possible to authenticate using the tenant and user *name* (i.e. not the 
UUID), then this isn't too big of a problem. However, I do know that a 
bunch of scripts and external tools rely on setting the tenant and/or 
user via the UUID values and not the names, so that might be where this 
feature request is coming from.


Hope that makes sense?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Colleen Murphy
Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:



> 
> We agreed to prefer federation for Keystone and came up with two work 
> items to cover missing functionality:
> 
> * Keystone to trust a token from an ID Provider master and when the auth 
> method is called, perform an idempotent creation of the user, project 
> and role assignments according to the assertions made in the token

This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.

> * Keystone should support the creation of users and projects with 
> predictable UUIDs (eg.: hash of the name of the users and projects). 
> This greatly simplifies Image federation and telemetry gathering

I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that the idea
here? Or could someone provide more details on why this is needed?

Were there any volunteers to help write up specs and work on the 
implementations in keystone?



Colleen (cmurphy)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread Jay S Bryant



On 9/21/2018 12:06 PM, John Griffith wrote:




On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis > wrote:


On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
>
> In the last year we have had some changes to Core team
participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based
on that
> discussion I have reached out to John Griffith and Winston D
(Huang Zhiteng)
> and asked if they felt they could continue to be a part of the
Core Team.
> Both agreed that it was time to relinquish their titles.
>
> So, I am proposing to remove John Griffith and Winston D from
Cinder Core.
> If I hear no concerns with this plan in the next week I will
remove them.
>
> It is hard to remove people who have been so instrumental to the
early days
> of Cinder.  Your past contributions are greatly appreciated and
the team
> would be happy to have you back if circumstances ever change.
>
> Sincerely,
> Jay Bryant
>

Really sad to see Winston go as he's been a long time member, but
I think over
the last several releases it's been obvious he's had other
priorities to
compete with. It would be great if that were to change some day.
He's made a
lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've
spoken
briefly. He definitely is off to other things now, but with how
deeply he has
been involved up until recently with things like the multiattach
implementation, replication, and other significant things, I would
much rather
have him around but less active than completely gone. Having a few
good reviews
is worth a lot.



I would propose we hold off on changing John's status for at least
a cycle. He
has indicated to me he would be willing to devote a little time to
still doing
reviews as his time allows, and I would hate to lose out on his
expertise on
changes to some things. Maybe we can give it a little more time
and see if his
other demands keep him too busy to participate and reevaluate later?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Everyone,

Now that I'm settling in on my other things I think I can still 
contribute a bit to Cinder on my own time.  I'm still pretty fond of 
OpenStack and Cinder so would love the opportunity to give it a cycle 
to see if I can balance things and still be helpful.


Thanks,
John

Sean,

Thank you for your input on this and for following up with John.

John,

Glad that you are settling into your new position and think some time 
will free up for Cinder again.  I would be happy to have your continued 
input.


I am removing you from consideration for removal.

Jay
(jungleboyj)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread John Griffith
On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis 
wrote:

> On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> > All,
> >
> > In the last year we have had some changes to Core team participation.
> This
> > was a topic of discussion at the PTG in Denver last week.  Based on that
> > discussion I have reached out to John Griffith and Winston D (Huang
> Zhiteng)
> > and asked if they felt they could continue to be a part of the Core
> Team.
> > Both agreed that it was time to relinquish their titles.
> >
> > So, I am proposing to remove John Griffith and Winston D from Cinder
> Core.
> > If I hear no concerns with this plan in the next week I will remove them.
> >
> > It is hard to remove people who have been so instrumental to the early
> days
> > of Cinder.  Your past contributions are greatly appreciated and the team
> > would be happy to have you back if circumstances every change.
> >
> > Sincerely,
> > Jay Bryant
> >
>
> Really sad to see Winston go as he's been a long time member, but I think
> over
> the last several releases it's been obvious he's had other priorities to
> compete with. It would be great if that were to change some day. He's made
> a
> lot of great contributions to Cinder over the years.
>
> I'm a little reluctant to make any changes with John though. We've spoken
> briefly. He definitely is off to other things now, but with how deeply he
> has
> been involved up until recently with things like the multiattach
> implementation, replication, and other significant things, I would much
> rather
> have him around but less active than completely gone. Having a few good
> reviews
> is worth a lot.
>


> I would propose we hold off on changing John's status for at least a
> cycle. He
> has indicated to me he would be willing to devote a little time to still
> doing
> reviews as his time allows, and I would hate to lose out on his expertise
> on
> changes to some things. Maybe we can give it a little more time and see if
> his
> other demands keep him too busy to participate and reevaluate later?
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Everyone,

Now that I'm settling in on my other things I think I can still contribute
a bit to Cinder on my own time.  I'm still pretty fond of OpenStack and
Cinder so would love the opportunity to give it a cycle to see if I can
balance things and still be helpful.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread Sean McGinnis
On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
> 
> In the last year we have had some changes to Core team participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based on that
> discussion I have reached out to John Griffith and Winston D (Huang Zhiteng)
> and asked if they felt they could continue to be a part of the Core Team. 
> Both agreed that it was time to relinquish their titles.
> 
> So, I am proposing to remove John Griffith and Winston D from Cinder Core. 
> If I hear no concerns with this plan in the next week I will remove them.
> 
> It is hard to remove people who have been so instrumental to the early days
> of Cinder.  Your past contributions are greatly appreciated and the team
> would be happy to have you back if circumstances ever change.
> 
> Sincerely,
> Jay Bryant
> 

Really sad to see Winston go as he's been a long time member, but I think over
the last several releases it's been obvious he's had other priorities to
compete with. It would be great if that were to change some day. He's made a
lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've spoken
briefly. He definitely is off to other things now, but with how deeply he has
been involved up until recently with things like the multiattach
implementation, replication, and other significant things, I would much rather
have him around but less active than completely gone. Having a few good reviews
is worth a lot.

I would propose we hold off on changing John's status for at least a cycle. He
has indicated to me he would be willing to devote a little time to still doing
reviews as his time allows, and I would hate to lose out on his expertise on
changes to some things. Maybe we can give it a little more time and see if his
other demands keep him too busy to participate and reevaluate later?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 19/09, Jay S Bryant wrote:
> Gorka,
>
> Oh man!  Sorry for the duplication.  I will update the link on the Forum
> page if you are able to move your content over.  Think it will confused
> people less if we use the page I most recently sent out.  Does that make
> sense?
>
Hi Jay,

Yup, it makes sense.

I moved the contents and updated the wiki to point to your etherpad.

> Thanks for catching this mistake!
>

It was my mistake for not mentioning the existing etherpad during the
PTG... XD

Cheers,
Gorka.


> Jay
>
>
> On 9/19/2018 4:42 AM, Gorka Eguileor wrote:
> > On 18/09, Jay S Bryant wrote:
> > > Team,
> > >
> > > I have created an etherpad for our Forum Topic Planning:
> > > https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
> > >
> > > Please add your ideas to the etherpad.  Thank you!
> > >
> > > Jay
> > >
> > Hi Jay,
> >
> > After our last IRC meeting, a couple of weeks ago, I created an etherpad
> > [1] and added it to the Forum wiki [2] (though I failed to mention it).
> >
> > I had added a possible topic to this etherpad [1], but I can move it to
> > yours and update the wiki if you like.
> >
> > Cheers,
> > Gorka.
> >
> >
> > [1]: https://etherpad.openstack.org/p/cinder-forum-stein
> > [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Jay S Bryant

Gorka,

Oh man!  Sorry for the duplication.  I will update the link on the Forum 
page if you are able to move your content over.  I think it will confuse
people less if we use the page I most recently sent out.  Does that make 
sense?


Thanks for catching this mistake!

Jay


On 9/19/2018 4:42 AM, Gorka Eguileor wrote:

On 18/09, Jay S Bryant wrote:

Team,

I have created an etherpad for our Forum Topic Planning:
https://etherpad.openstack.org/p/cinder-berlin-forum-proposals

Please add your ideas to the etherpad.  Thank you!

Jay


Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 18/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for our Forum Topic Planning:
> https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
>
> Please add your ideas to the etherpad.  Thank you!
>
> Jay
>

Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Clark Boylan
On Mon, Sep 17, 2018, at 8:53 AM, Jay S Bryant wrote:
> 
> 
> On 9/17/2018 10:46 AM, Sean McGinnis wrote:
> >>> Plan
> >>> 
> >>> We would now like to have the driverfixes/ocata branch deleted so there 
> >>> is no
> >>> confusion about where backports should go and we don't accidentally get 
> >>> these
> >>> out of sync again.
> >>>
> >>> Infra team, please delete this branch or let me know if there is a process
> >>> somewhere I should follow to have this removed.
> >> The first step is to make sure that all changes on the branch are in a non 
> >> open state (merged or abandoned). 
> >> https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
> >>  shows that there are no open changes.
> >>
> >> Next you will want to make sure that the commits on this branch are 
> >> preserved somehow. Git garbage collection will delete and cleanup commits 
> >> if they are not discoverable when working backward from some ref. This is 
> >> why our old stable branch deletion process required we tag the stable 
> >> branch as $release-eol first. Looking at `git log origin/driverfixes/ocata 
> >> ^origin/stable/ocata --no-merges --oneline` there are quite a few commits 
> >> on the driverfixes branch that are not on the stable branch, but that 
> >> appears to be due to cherry pick writing new commits. You have indicated 
> >> above that you believe the two branches are in sync at this point. A quick 
> >> sampling of commits seems to confirm this as well.
> >>
> >> If you can go ahead and confirm that you are ready to delete the 
> >> driverfixes/ocata branch I will go ahead and remove it.
> >>
> >> Clark
> >>
> > I did another spot check too to make sure I hadn't missed anything, but it 
> > does
> > appear to be as you stated that the cherry pick resulted in new commits and
> > they actually are in sync for our purposes.
> >
> > I believe we are ready to proceed.
> Sean,
> 
> Thank you for following up on this.  I agree it is a good idea to remove
> the old driverfixes/ocata branch to avoid possible confusion in the future.
> 
> Clark,
> 
> Sean, myself and the team worked to carefully cherry-pick everything 
> that was needed in stable/ocata so I am confident that we are ready to 
> remove driverfixes/ocata.
> 

I have removed the openstack/cinder driverfixes/ocata branch; its HEAD was
a37cc259f197e1a515cf82deb342739a125b65c6.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Jay S Bryant



On 9/17/2018 10:46 AM, Sean McGinnis wrote:

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.

The first step is to make sure that all changes on the branch are in a non open 
state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark


I did another spot check too to make sure I hadn't missed anything, but it does
appear to be as you stated that the cherry pick resulted in new commits and
they actually are in sync for our purposes.

I believe we are ready to proceed.

Sean,

Thank you for following up on this.  I agree it is a good idea to remove
the old driverfixes/ocata branch to avoid possible confusion in the future.


Clark,

Sean, myself and the team worked to carefully cherry-pick everything 
that was needed in stable/ocata so I am confident that we are ready to 
remove driverfixes/ocata.


Thanks!
Jay



Thanks for your help.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Sean McGinnis
> > 
> > Plan
> > 
> > We would now like to have the driverfixes/ocata branch deleted so there is 
> > no
> > confusion about where backports should go and we don't accidentally get 
> > these
> > out of sync again.
> > 
> > Infra team, please delete this branch or let me know if there is a process
> > somewhere I should follow to have this removed.
> 
> The first step is to make sure that all changes on the branch are in a non 
> open state (merged or abandoned). 
> https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
>  shows that there are no open changes.
> 
> Next you will want to make sure that the commits on this branch are preserved 
> somehow. Git garbage collection will delete and cleanup commits if they are 
> not discoverable when working backward from some ref. This is why our old 
> stable branch deletion process required we tag the stable branch as 
> $release-eol first. Looking at `git log origin/driverfixes/ocata 
> ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on 
> the driverfixes branch that are not on the stable branch, but that appears to 
> be due to cherry pick writing new commits. You have indicated above that you 
> believe the two branches are in sync at this point. A quick sampling of 
> commits seems to confirm this as well.
> 
> If you can go ahead and confirm that you are ready to delete the 
> driverfixes/ocata branch I will go ahead and remove it.
> 
> Clark
> 

I did another spot check too to make sure I hadn't missed anything, but it does
appear to be as you stated that the cherry pick resulted in new commits and
they actually are in sync for our purposes.

I believe we are ready to proceed.

Thanks for your help.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Clark Boylan
On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote:
> Hello Cinder and Infra teams. Cinder needs some help from infra or some
> pointers on how to proceed.
> 
> tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
> fixes that no longer met the more restrictive phase II stable policy criteria.
> Extended maintenance has changed that and we want to delete driverfixes/ocata
> to make sure patches are going to the right place.
> 
> Background
> --
> Before the extended maintenance changes, the Cinder team found a lot of 
> vendors
> were maintaining their own forks to keep backported driver fixes that we were
> not allowing upstream due to the stable policy being more restrictive for 
> older
> (or deleted) branches. We created the driverfixes/* branches as a central 
> place
> for these to go so distros would have one place to grab these fixes, if they
> chose to do so.
> 
> This has worked great IMO, and we do occasionally still have things that need
> to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot 
> of
> fixes to driverfixes/ocata, but with the changes to stable policy with 
> extended
> maintenance, that is no longer needed.
> 
> Extended Maintenance Changes
> 
> With things being somewhat relaxed with the extended maintenance changes, we
> are now able to backport bug fixes to stable/ocata that we couldn't before and
> we don't have to worry as much about that branch being deleted.
> 
> I had gone through and identified all patches backported to driverfixes/ocata
> but not stable/ocata and cherry-picked them over to get the two branches in
> sync. The stable/ocata should now be identical or ahead of driverfixes/ocata
> and we want to make sure nothing more gets accidentally merged to
> driverfixes/ocata instead of the official stable branch.
> 
> Plan
> 
> We would now like to have the driverfixes/ocata branch deleted so there is no
> confusion about where backports should go and we don't accidentally get these
> out of sync again.
> 
> Infra team, please delete this branch or let me know if there is a process
> somewhere I should follow to have this removed.

The first step is to make sure that all changes on the branch are in a non open 
state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.
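
As a side note for anyone repeating this kind of check: because cherry-picks
rewrite commit SHAs, a patch-id based comparison can be handy, e.g.

    git cherry -v origin/stable/ocata origin/driverfixes/ocata

Commits marked '-' already have an equivalent change on stable/ocata, so only
the '+' entries need a closer look.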

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][ptg] Topics scheduled for next week ...

2018-09-11 Thread Gorka Eguileor
On 07/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for each of the days of the PTG and split out the
> proposed topics from the planning etherpad into the individual days for
> discussion: [1] [2] [3]
>
> If you want to add an additional topic please add it to Friday or find some
> time on one of the other days.
>
> I look forward to discussing all these topics with you all next week.
>
> Thanks!
>
> Jay

Thanks Jay.

I have added to the Cinder general etherpad the shared_target discussion
topic, as I believe we should be discussing it in the Cinder room first
before Thursday's meeting with Nova.

I saw that on Wednesday the 2:30 to 3:00 privsep topic is a duplicate of
the 12:00 to 12:30 slot, so I have taken the liberty of replacing it
with the shared_targets one.  I hope that's alright.

Cheers,
Gorka.

>
> [1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday
>
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday
>
> [3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] How to mount NFS volume?

2018-08-17 Thread Ivan Kolodyazhny
Hi Clay,

Unfortunately, local-attach doesn't support NFS-based volumes for security
reasons. We don't have a good solution for multi-tenant environments at the
moment.
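
If the node is trusted and single-tenant, a manual workaround (outside of
cinder, so please treat it as a sketch rather than a supported path) is to mount
the NFS export yourself and expose the volume file as a block device; the NFS
driver stores each volume as a file named volume-<id> on the share:

    mount -t nfs <nfs-server>:/<export> /mnt/cinder-nfs
    losetup --find --show /mnt/cinder-nfs/volume-3f66c360-e2e1-471e-aa36-57db3fcf3bdb

losetup prints the loop device (e.g. /dev/loop0) that you can then mount; this
assumes a raw volume file, a qcow2 one would need qemu-nbd instead.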

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Aug 17, 2018 at 12:03 PM, Chang, Clay (HPS OE-Linux TDC) <
cl...@hpe.com> wrote:

> Hi,
>
>
>
> I have Cinder configured with NFS backend. On one bare metal node, I can
> use ‘cinder create’ to create the volume with specified size – I saw a
> volume file created on the NFS server, so I suppose the NFS was configured
> correctly.
>
>
>
> My question is, how could I mount the NFS volume on the bare metal node?
>
>
>
> I tried:
>
>
>
> cinder local-attach 3f66c360-e2e1-471e-aa36-57db3fcf3bdb --mountpoint
> /mnt/tmp
>
>
>
> it says:
>
>
>
> “ERROR: Connect to volume via protocol NFS not supported”
>
>
>
> I looked at
> https://github.com/openstack/python-brick-cinderclient-ext/blob/master/brick_cinderclient_ext/volume_actions.py,
> found only iSCSI, RBD and FIBRE_CHANNEL were supported.
>
>
>
> Wondering if there are ways to mount the NFS volume?
>
>
>
> Thanks,
>
> Clay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Ben Nemec

Okay, thanks.  There's no Sigyn in openstack-oslo so I think we're good. :-)

On 08/14/2018 10:37 AM, Jay S Bryant wrote:

Ben,

Don't fully understand why it was kicking me.  I guess one of the 
behaviors that is considered suspicious is trying to message a bunch of 
nicks at once.  I had tried reducing the number of people in my ping but 
it still kicked me and so I decided to not risk it again.


Sounds like the moral of the story is if sigyn is in the channel, be 
careful.  :-)


Jay


On 8/13/2018 4:06 PM, Ben Nemec wrote:



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 
16:00 UTC.  I bring this up as I can no longer send the courtesy 
pings without being kicked from IRC.  So, if you wish to join the 
meeting please add a reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Amy Marrich
That bot is indeed missing from the channel

Amy (spotz)

On Mon, Aug 13, 2018 at 5:44 PM, Jeremy Stanley  wrote:

> On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:
> > I know we did a ping last week in #openstack-ansible for our meeting no
> > issue. I wonder if it's a length of names thing or a channel setting.
> [...]
>
> Freenode's Sigyn bot may not have been invited to
> #openstack-ansible. We might want to consider kicking it from
> channels while they have nick registration enforced.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Jay S Bryant



On 8/13/2018 5:44 PM, Jeremy Stanley wrote:

On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:

I know we did a ping last week in #openstack-ansible for our meeting no
issue. I wonder if it's a length of names thing or a channel setting.

[...]

Freenode's Sigyn bot may not have been invited to
#openstack-ansible. We might want to consider kicking it from
channels while they have nick registration enforced.

It does seem that we don't really need the monitoring if registration is 
enforced.  I would be up for doing this.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Jay S Bryant

Ben,

Don't fully understand why it was kicking me.  I guess one of the 
behaviors that is considered suspicious is trying to message a bunch of 
nicks at once.  I had tried reducing the number of people in my ping but 
it still kicked me and so I decided to not risk it again.


Sounds like the moral of the story is if sigyn is in the channel, be 
careful.  :-)


Jay


On 8/13/2018 4:06 PM, Ben Nemec wrote:



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 
16:00 UTC.  I bring this up as I can no longer send the courtesy 
pings without being kicked from IRC.  So, if you wish to join the 
meeting please add a reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Jeremy Stanley
On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:
> I know we did a ping last week in #openstack-ansible for our meeting no
> issue. I wonder if it's a length of names thing or a channel setting.
[...]

Freenode's Sigyn bot may not have been invited to
#openstack-ansible. We might want to consider kicking it from
channels while they have nick registration enforced.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Amy Marrich
I know we did a ping last week in #openstack-ansible for our meeting no
issue. I wonder if it's a length of names thing or a channel setting.

Amy (spotz)

On Mon, Aug 13, 2018 at 4:25 PM, Eric Fried  wrote:

> Are you talking about the nastygram from "Sigyn" saying:
>
> "Your actions in # tripped automated anti-spam measures
> (nicks/hilight spam), but were ignored based on your time in channel;
> stop now, or automated action will still be taken. If you have any
> questions, please don't hesitate to contact a member of staff"
>
> I'm getting this too, and (despite the implication to the contrary) it
> sometimes cuts off my messages in an unpredictable spot.
>
> I'm contacting "a member of staff" to see if there's any way to get
> "whitelisted" for big messages. In the meantime, the only solution I'm
> aware of is to chop your pasteypaste up into smaller chunks, and wait a
> couple seconds between pastes.
>
> -efried
>
> On 08/13/2018 04:06 PM, Ben Nemec wrote:
> >
> >
> > On 08/08/2018 12:04 PM, Jay S Bryant wrote:
> >> Team,
> >>
> >> A reminder that we have our weekly Cinder meeting on Wednesdays at
> >> 16:00 UTC.  I bring this up as I can no longer send the courtesy pings
> >> without being kicked from IRC.  So, if you wish to join the meeting
> >> please add a reminder to your calendar of choice.
> >
> > Do you have any idea why you're being kicked?  I'm wondering how to
> > avoid getting into this situation with the Oslo pings.
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Eric Fried
Are you talking about the nastygram from "Sigyn" saying:

"Your actions in # tripped automated anti-spam measures
(nicks/hilight spam), but were ignored based on your time in channel;
stop now, or automated action will still be taken. If you have any
questions, please don't hesitate to contact a member of staff"

I'm getting this too, and (despite the implication to the contrary) it
sometimes cuts off my messages in an unpredictable spot.

I'm contacting "a member of staff" to see if there's any way to get
"whitelisted" for big messages. In the meantime, the only solution I'm
aware of is to chop your pasteypaste up into smaller chunks, and wait a
couple seconds between pastes.

-efried

On 08/13/2018 04:06 PM, Ben Nemec wrote:
> 
> 
> On 08/08/2018 12:04 PM, Jay S Bryant wrote:
>> Team,
>>
>> A reminder that we have our weekly Cinder meeting on Wednesdays at
>> 16:00 UTC.  I bring this up as I can no longer send the courtesy pings
>> without being kicked from IRC.  So, if you wish to join the meeting
>> please add a reminder to your calendar of choice.
> 
> Do you have any idea why you're being kicked?  I'm wondering how to
> avoid getting into this situation with the Oslo pings.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Ben Nemec



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 16:00 
UTC.  I bring this up as I can no longer send the courtesy pings without 
being kicked from IRC.  So, if you wish to join the meeting please add a 
reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Wed, Aug 08, 2018 at 05:15:26PM +, Sean McGinnis wrote:
> On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> > >Hi Cinder and API-SIG folks,
> > >
> > >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> > >3.0 was changed.
> > >Cinder introduced more strict schema validation for creating/updating
> > >volume encryption type
> > >during Rocky and a new micro version 3.53 was introduced[1].
> > >
> > >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> > >but after [1] landed unused fields are now rejected even when Cinder API
> > >3.0 is used.
> > >In my understanding on the microversioning, the existing behavior for
> > >older versions should be kept.
> > >Is it correct?
> > 
> > I agree with your assessment that 3.0 was used there - and also that I would
> > expect the api validation to only change if 3.53 microversion was used.
> > 
> 
> I filed a bug to track this:
> 
> https://bugs.launchpad.net/cinder/+bug/1786054
> 

Sorry, between lack of attention to detail (lack of coffee?) and an incorrect
link, I think I went down the wrong rabbit hole.

The change was actually introduced in [0]. I have submitted [1] to allow the
additional parameters in the volume type encryption API. This was definitely an
oversight when we allowed that one through.

Apologies for the hassle this has caused.

[0] https://review.openstack.org/#/c/561140/
[1] https://review.openstack.org/#/c/590014/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> >Hi Cinder and API-SIG folks,
> >
> >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> >3.0 was changed.
> >Cinder introduced more strict schema validation for creating/updating
> >volume encryption type
> >during Rocky and a new micro version 3.53 was introduced[1].
> >
> >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> >but after [1] landed unused fields are now rejected even when Cinder API
> >3.0 is used.
> >In my understanding on the microversioning, the existing behavior for
> >older versions should be kept.
> >Is it correct?
> 
> I agree with your assessment that 3.0 was used there - and also that I would
> expect the api validation to only change if 3.53 microversion was used.
> 

I filed a bug to track this:

https://bugs.launchpad.net/cinder/+bug/1786054

But something doesn't seem right from what I've seen. I've put up a patch to
add some extra unit testing around this. I expected some of those unit tests to
fail, but everything seemed happy and working the way it is supposed to with
prior to 3.53 accepting anything and 3.53 or later rejecting extra parameters.
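
For anyone following along, the "strict" part mostly comes down to whether the
request schema sets additionalProperties to False. A minimal sketch of that
behaviour with plain jsonschema (illustrative field names, not Cinder's actual
schema or validation code):

    import jsonschema

    # Trimmed-down, hypothetical volume-create schema.
    strict_schema = {
        "type": "object",
        "properties": {
            "size": {"type": "integer"},
            "name": {"type": ["string", "null"]},
        },
        "required": ["size"],
        "additionalProperties": False,  # what 3.53 and later effectively enable
    }

    body = {"size": 1, "name": "New", "project_id": "testing", "junk": "garbage"}

    try:
        jsonschema.validate(body, strict_schema)
    except jsonschema.ValidationError as exc:
        # The extra keys fail validation, which surfaces as the
        # 400 Bad Request seen with microversion 3.53.
        print("rejected:", exc.message)

With additionalProperties left unset, as in the pre-3.53 schemas, the same body
validates and the extra keys are simply ignored.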

Since that didn't work, I tried reproducing this against a running system using
curl. With no version specified (defaulting to the base 3.0 microversion)
creation succeeded:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: "Content-Type: application/json" -H "User-Agent: python-cinderclient"
-H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}'

I then tried specifying the microversion that introduces the strict schema
checking to make sure I was able to get the appropriate failure, which worked
as expected:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: "Content-Type: application/json" -H "User-Agent: python-cinderclient"
-H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New-mv353", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.53"
HTTP/1.1 400 Bad Request
...

And to test boundary conditions, I then specified the microversion just prior
to the one that enabled strict checking:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-cinderclient" -H
"X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New-mv352", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.52"
HTTP/1.1 202 Accepted

In all cases except the strict checking one, the volume was created
successfully even though the junk extra parameters ("project_id": "testing",
"junk": "garbage") were provided.

So I'm missing something here. Is it possible horizon is requesting the latest
API version and not defaulting to 3.0?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
>  > > 
>  > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
>  > > but after [1] landed unused fields are now rejected even when Cinder API 
>  > > 3.0 is used.
>  > > In my understanding on the microversioning, the existing behavior for 
>  > > older versions should be kept.
>  > > Is it correct?
>  > 
>  > I agree with your assessment that 3.0 was used there - and also that I 
>  > would expect the api validation to only change if 3.53 microversion was 
>  > used.
> 
> +1. As you know, neutron also implemented strict validation in Rocky but with 
> discovery via config option and extensions mechanism. Same way Cinder should 
> make it with backward compatible way till 3.53 version. 
> 

I agree. I _thought_ that was the way it was implemented, but apparently
something was missed.

I will try to look at this soon and see what would need to be changed to get
this behaving correctly. Unless someone else has the time and can beat me to
it, which would be very much appreciated.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Ghanshyam Mann



  On Wed, 08 Aug 2018 07:27:06 +0900 Monty Taylor  
wrote  
 > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
 > > Hi Cinder and API-SIG folks,
 > > 
 > > During reviewing a horizon bug [0], I noticed the behavior of Cinder API 
 > > 3.0 was changed.
 > > Cinder introduced more strict schema validation for creating/updating 
 > > volume encryption type
 > > during Rocky and a new micro version 3.53 was introduced[1].
 > > 
 > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
 > > but after [1] landed unused fields are now rejected even when Cinder API 
 > > 3.0 is used.
 > > In my understanding on the microversioning, the existing behavior for 
 > > older versions should be kept.
 > > Is it correct?
 > 
 > I agree with your assessment that 3.0 was used there - and also that I 
 > would expect the api validation to only change if 3.53 microversion was 
 > used.

+1. As you know, neutron also implemented strict validation in Rocky, but with
discovery via a config option and the extensions mechanism. In the same way,
Cinder should keep this backward compatible for API versions prior to 3.53.

-gmann 

 > 
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Monty Taylor

On 08/07/2018 05:03 PM, Akihiro Motoki wrote:

Hi Cinder and API-SIG folks,

During reviewing a horizon bug [0], I noticed the behavior of Cinder API 
3.0 was changed.
Cinder introduced more strict schema validation for creating/updating 
volume encryption type

during Rocky and a new micro version 3.53 was introduced[1].

Previously, Cinder API like 3.0 accepts unused fields in POST requests
but after [1] landed unused fields are now rejected even when Cinder API 
3.0 is used.
In my understanding on the microversioning, the existing behavior for 
older versions should be kept.

Is it correct?


I agree with your assessment that 3.0 was used there - and also that I 
would expect the api validation to only change if 3.53 microversion was 
used.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-08-01 Thread John Griffith
On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann  wrote:

> On 7/16/2018 4:20 AM, Gorka Eguileor wrote:
> > If I remember correctly the driver was deprecated because it had no
> > maintainer or CI.  In Cinder we require our drivers to have both,
> > otherwise we can't guarantee that they actually work or that anyone will
> > fix it if it gets broken.
>
> Would this really require 3rd party CI if it's just local block storage
> on the compute node (in devstack)? We could do that with an upstream CI
> job right? We already have upstream CI jobs for things like rbd and nfs.
> The 3rd party CI requirements generally are for proprietary storage
> backends.
>
> I'm only asking about the CI side of this, the other notes from Sean
> about tweaking the LVM volume backend and feature parity are good
> reasons for removal of the unmaintained driver.
>
> Another option is using the nova + libvirt + lvm image backend for local
> (to the VM) ephemeral disk:
>
>
> https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We've had this conversation multiple times, here were the results from past
conversations and the reasons we deprecated:
1. Driver was not being tested at all (no CI, no upstream tests etc)
2. We sent out numerous requests trying to determine if anybody was using
the driver, didn't receive much feedback
3. The driver didn't work for an entire release, this indicated that
perhaps it wasn't that valuable
4. The driver is unable to implement a number of the required features for
a Cinder Block Device
5. Digging deeper into performance tests most comparisons were doing things
like
a. Using the shared single nic that's used for all of the cluster
communications (ie DB, APIs, Rabbit etc)
b. Misconfigured deployment, ie using a 1Gig Nic for iSCSI connections
(also see above)

The decision was that raw-block was not by definition a "Cinder Device",
and given that it wasn't really tested or
maintained that it should be removed.  LVM is actually quite good, we did
some pretty extensive testing and even
presented it as a session in Barcelona that showed perf within
approximately 10%.  I'm skeptical any time I see
dramatic comparisons of 1/2 performance, but I could be completely wrong.

I would be much more interested in putting efforts towards trying to figure
out why you have such a large perf
delta and see if we can address that as opposed to trying to bring back and
maintain a driver that only half
works.

Or as Jay Pipes mentioned, don't use Cinder in your case.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-27 Thread Matt Riedemann

On 7/16/2018 4:20 AM, Gorka Eguileor wrote:

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.


Would this really require 3rd party CI if it's just local block storage 
on the compute node (in devstack)? We could do that with an upstream CI 
job right? We already have upstream CI jobs for things like rbd and nfs. 
The 3rd party CI requirements generally are for proprietary storage 
backends.


I'm only asking about the CI side of this, the other notes from Sean 
about tweaking the LVM volume backend and feature parity are good 
reasons for removal of the unmaintained driver.


Another option is using the nova + libvirt + lvm image backend for local 
(to the VM) ephemeral disk:


https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
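
Roughly, that looks like the following on the compute node (a sketch only --
check the option names against the libvirt driver docs; the volume group name
is just an example):

    [libvirt]
    # Back instance root/ephemeral disks with LVs from a local volume group
    # instead of qcow2 files on the filesystem.
    images_type = lvm
    images_volume_group = nova-local-vg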

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-24 Thread Sean McGinnis
On Tue, Jul 24, 2018 at 06:07:24PM +0800, Rambo wrote:
> Hi,all
> 
> 
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver is 
> being deprecated, and was eventually be removed with the Queens release.
> 
> 
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>  
> 
> 
>  However,I want to use it out of tree,but I don't know how to use it out 
> of tree,Can you share me a doc? Thank you very much!
> 

I don't think we have any community documentation on how to use out of tree
drivers, but it's fairly straightforward.

You can just drop that block_device.py file into the cinder/volume/drivers
directory and configure its use in cinder.conf using the same volume_driver
setting as before.
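
A rough example of what that stanza looked like for the old driver (recalled
from the Ocata-era options, so double-check against the file you copy in; the
backend name and device list are only examples):

    [DEFAULT]
    enabled_backends = blockdev

    [blockdev]
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    # Local disks or partitions the driver may hand out to volumes
    available_devices = /dev/sdb,/dev/sdc
    volume_backend_name = blockdev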

I'm not sure if anything has been changed since Ocata that would require
updates to the driver, but I would expect most base functionality should still
work. But just a word of warning that there may be some updates to the driver
needed if you find issues with it.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-24 Thread Lee Yarwood
On 20-07-18 08:10:37, Erlon Cruz wrote:
> Nice, good to know. Thanks all for the feedback. We will fix that in our
> drivers.

FWIW Nova does not and AFAICT never has called os-force_detach.

We previously used os-terminate_connection with v2 where the connector
was optional. Even then we always provided one, even providing the
destination connector during an evacuation when the source connector
wasn't stashed in connection_info.
 
> @Walter, so, in this case, if Cinder has the connector, it should not need
> to call the driver passing a None object right?

Yeah I don't think this is an issue with v3 given the connector is
stashed with the attachment, so all we require is a reference to the
attachment to cleanup the connection during evacuations etc.

Lee
 
> Erlon
> 
> Em qua, 18 de jul de 2018 às 12:56, Walter Boring 
> escreveu:
> 
> > The whole purpose of this test is to simulate the case where Nova doesn't
> > know where the vm is anymore,
> > or may simply not exist, but we need to clean up the cinder side of
> > things.   That being said, with the new
> > attach API, the connector is being saved in the cinder database for each
> > volume attachment.
> >
> > Walt
> >
> > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> > wrote:
> >
> >> On 17/07, Sean McGinnis wrote:
> >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> >> > > Hi Cinder and Nova folks,
> >> > >
> >> > > Working on some tests for our drivers, I stumbled upon this tempest
> >> test
> >> > > 'force_detach_volume'
> >> > > that is calling Cinder API passing a 'None' connector. At the time
> >> this was
> >> > > added several CIs
> >> > > went down, and people started discussing whether this
> >> (accepting/sending a
> >> > > None connector)
> >> > > would be the proper behavior for what is expected to a driver to
> >> do[1]. So,
> >> > > some of CIs started
> >> > > just skipping that test[2][3][4] and others implemented fixes that
> >> made the
> >> > > driver to disconnected
> >> > > the volume from all hosts if a None connector was received[5][6][7].
> >> >
> >> > Right, it was determined the correct behavior for this was to
> >> disconnect the
> >> > volume from all hosts. The CIs that are skipping this test should stop
> >> doing so
> >> > (once their drivers are fixed of course).
> >> >
> >> > >
> >> > > While implementing this fix seems to be straightforward, I feel that
> >> just
> >> > > removing the volume
> >> > > from all hosts is not the correct thing to do mainly considering that
> >> we
> >> > > can have multi-attach.
> >> > >
> >> >
> >> > I don't think multiattach makes a difference here. Someone is forcibly
> >> > detaching the volume and not specifying an individual connection. So
> >> based on
> >> > that, Cinder should be removing any connections, whether that is to one
> >> or
> >> > several hosts.
> >> >
> >>
> >> Hi,
> >>
> >> I agree with Sean, drivers should remove all connections for the volume.
> >>
> >> Even without multiattach there are cases where you'll have multiple
> >> connections for the same volume, like in a Live Migration.
> >>
> >> It's also very useful when Nova and Cinder get out of sync and your
> >> volume has leftover connections. In this case if you try to delete the
> >> volume you get a "volume in use" error from some drivers.
> >>
> >> Cheers,
> >> Gorka.
> >>
> >>
> >> > > So, my questions are: What is the best way to fix this problem? Should
> >> > > Cinder API continue to
> >> > > accept detachments with None connectors? If, so, what would be the
> >> effects
> >> > > on other Nova
> >> > > attachments for the same volume? Is there any side effect if the
> >> volume is
> >> > > not multi-attached?
> >> > >
> >> > > Additionally to this thread here, I should bring this topic to
> >> tomorrow's
> >> > > Cinder's meeting,
> >> > > so please join if you have something to share.
> >> > >
> >> >
> >> > +1 - good plan.
> >> >
> >> >
> >> >
> >> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-20 Thread Erlon Cruz
Nice, good to know. Thanks all for the feedback. We will fix that in our
drivers.

@Walter, so, in this case, if Cinder has the connector, it should not need
to call the driver passing a None object right?

Erlon

Em qua, 18 de jul de 2018 às 12:56, Walter Boring 
escreveu:

> The whole purpose of this test is to simulate the case where Nova doesn't
> know where the vm is anymore,
> or may simply not exist, but we need to clean up the cinder side of
> things.   That being said, with the new
> attach API, the connector is being saved in the cinder database for each
> volume attachment.
>
> Walt
>
> On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> wrote:
>
>> On 17/07, Sean McGinnis wrote:
>> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
>> > > Hi Cinder and Nova folks,
>> > >
>> > > Working on some tests for our drivers, I stumbled upon this tempest
>> test
>> > > 'force_detach_volume'
>> > > that is calling Cinder API passing a 'None' connector. At the time
>> this was
>> > > added several CIs
>> > > went down, and people started discussing whether this
>> (accepting/sending a
>> > > None connector)
>> > > would be the proper behavior for what is expected to a driver to
>> do[1]. So,
>> > > some of CIs started
>> > > just skipping that test[2][3][4] and others implemented fixes that
>> made the
>> > > driver to disconnected
>> > > the volume from all hosts if a None connector was received[5][6][7].
>> >
>> > Right, it was determined the correct behavior for this was to
>> disconnect the
>> > volume from all hosts. The CIs that are skipping this test should stop
>> doing so
>> > (once their drivers are fixed of course).
>> >
>> > >
>> > > While implementing this fix seems to be straightforward, I feel that
>> just
>> > > removing the volume
>> > > from all hosts is not the correct thing to do mainly considering that
>> we
>> > > can have multi-attach.
>> > >
>> >
>> > I don't think multiattach makes a difference here. Someone is forcibly
>> > detaching the volume and not specifying an individual connection. So
>> based on
>> > that, Cinder should be removing any connections, whether that is to one
>> or
>> > several hosts.
>> >
>>
>> Hi,
>>
>> I agree with Sean, drivers should remove all connections for the volume.
>>
>> Even without multiattach there are cases where you'll have multiple
>> connections for the same volume, like in a Live Migration.
>>
>> It's also very useful when Nova and Cinder get out of sync and your
>> volume has leftover connections. In this case if you try to delete the
>> volume you get a "volume in use" error from some drivers.
>>
>> Cheers,
>> Gorka.
>>
>>
>> > > So, my questions are: What is the best way to fix this problem? Should
>> > > Cinder API continue to
>> > > accept detachments with None connectors? If, so, what would be the
>> effects
>> > > on other Nova
>> > > attachments for the same volume? Is there any side effect if the
>> volume is
>> > > not multi-attached?
>> > >
>> > > Additionally to this thread here, I should bring this topic to
>> tomorrow's
>> > > Cinder's meeting,
>> > > so please join if you have something to share.
>> > >
>> >
>> > +1 - good plan.
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Walter Boring
The whole purpose of this test is to simulate the case where Nova doesn't
know where the vm is anymore,
or may simply not exist, but we need to clean up the cinder side of
things.   That being said, with the new
attach API, the connector is being saved in the cinder database for each
volume attachment.
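
As a rough illustration of that flow (endpoint and body shapes are from memory,
so treat the names as approximate rather than as a reference):

    import requests

    BASE = "http://cinder.example.com/volume/v3/<project-id>"
    HEADERS = {
        "X-Auth-Token": "<token>",
        "OpenStack-API-Version": "volume 3.27",
        "Content-Type": "application/json",
    }

    # Create: the connector is sent once and stored with the attachment.
    resp = requests.post(BASE + "/attachments", headers=HEADERS, json={
        "attachment": {
            "volume_uuid": "<volume-id>",
            "instance_uuid": "<server-id>",
            "connector": {"initiator": "iqn.1994-05.com.example:node1",
                          "host": "compute-1"},
        }})
    attachment_id = resp.json()["attachment"]["id"]

    # Cleanup later only needs the attachment id -- no connector has to be
    # supplied, since Cinder already has it in its database.
    requests.delete(BASE + "/attachments/" + attachment_id, headers=HEADERS)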

Walt

On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor  wrote:

> On 17/07, Sean McGinnis wrote:
> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > > Hi Cinder and Nova folks,
> > >
> > > Working on some tests for our drivers, I stumbled upon this tempest
> test
> > > 'force_detach_volume'
> > > that is calling Cinder API passing a 'None' connector. At the time
> this was
> > > added several CIs
> > > went down, and people started discussing whether this
> (accepting/sending a
> > > None connector)
> > > would be the proper behavior for what is expected to a driver to
> do[1]. So,
> > > some of CIs started
> > > just skipping that test[2][3][4] and others implemented fixes that
> made the
> > > driver to disconnected
> > > the volume from all hosts if a None connector was received[5][6][7].
> >
> > Right, it was determined the correct behavior for this was to disconnect
> the
> > volume from all hosts. The CIs that are skipping this test should stop
> doing so
> > (once their drivers are fixed of course).
> >
> > >
> > > While implementing this fix seems to be straightforward, I feel that
> just
> > > removing the volume
> > > from all hosts is not the correct thing to do mainly considering that
> we
> > > can have multi-attach.
> > >
> >
> > I don't think multiattach makes a difference here. Someone is forcibly
> > detaching the volume and not specifying an individual connection. So
> based on
> > that, Cinder should be removing any connections, whether that is to one
> or
> > several hosts.
> >
>
> Hi,
>
> I agree with Sean, drivers should remove all connections for the volume.
>
> Even without multiattach there are cases where you'll have multiple
> connections for the same volume, like in a Live Migration.
>
> It's also very useful when Nova and Cinder get out of sync and your
> volume has leftover connections. In this case if you try to delete the
> volume you get a "volume in use" error from some drivers.
>
> Cheers,
> Gorka.
>
>
> > > So, my questions are: What is the best way to fix this problem? Should
> > > Cinder API continue to
> > > accept detachments with None connectors? If, so, what would be the
> effects
> > > on other Nova
> > > attachments for the same volume? Is there any side effect if the
> volume is
> > > not multi-attached?
> > >
> > > Additionally to this thread here, I should bring this topic to
> tomorrow's
> > > Cinder's meeting,
> > > so please join if you have something to share.
> > >
> >
> > +1 - good plan.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Gorka Eguileor
On 17/07, Sean McGinnis wrote:
> On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > Hi Cinder and Nova folks,
> >
> > Working on some tests for our drivers, I stumbled upon this tempest test
> > 'force_detach_volume'
> > that is calling Cinder API passing a 'None' connector. At the time this was
> > added several CIs
> > went down, and people started discussing whether this (accepting/sending a
> > None connector)
> > would be the proper behavior for what is expected to a driver to do[1]. So,
> > some of CIs started
> > just skipping that test[2][3][4] and others implemented fixes that made the
> > driver to disconnected
> > the volume from all hosts if a None connector was received[5][6][7].
>
> Right, it was determined the correct behavior for this was to disconnect the
> volume from all hosts. The CIs that are skipping this test should stop doing 
> so
> (once their drivers are fixed of course).
>
> >
> > While implementing this fix seems to be straightforward, I feel that just
> > removing the volume
> > from all hosts is not the correct thing to do mainly considering that we
> > can have multi-attach.
> >
>
> I don't think multiattach makes a difference here. Someone is forcibly
> detaching the volume and not specifying an individual connection. So based on
> that, Cinder should be removing any connections, whether that is to one or
> several hosts.
>

Hi,

I agree with Sean, drivers should remove all connections for the volume.

Even without multiattach there are cases where you'll have multiple
connections for the same volume, like in a Live Migration.

It's also very useful when Nova and Cinder get out of sync and your
volume has leftover connections. In this case if you try to delete the
volume you get a "volume in use" error from some drivers.
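
On the driver side the expected handling is roughly this (a sketch only --
the helper name and attribute access are illustrative, not any in-tree driver):

    def terminate_connection(self, volume, connector, **kwargs):
        # Force-detach may hand us connector=None: drop every connection
        # for the volume (multiattach, live-migration leftovers, stale
        # attachments), not just one host's.
        if connector is None:
            for attachment in volume.volume_attachment:
                self._remove_export_for(volume, attachment.connector)
        else:
            self._remove_export_for(volume, connector)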

Cheers,
Gorka.


> > So, my questions are: What is the best way to fix this problem? Should
> > Cinder API continue to
> > accept detachments with None connectors? If, so, what would be the effects
> > on other Nova
> > attachments for the same volume? Is there any side effect if the volume is
> > not multi-attached?
> >
> > Additionally to this thread here, I should bring this topic to tomorrow's
> > Cinder's meeting,
> > so please join if you have something to share.
> >
>
> +1 - good plan.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Sean McGinnis
On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> Hi Cinder and Nova folks,
> 
> Working on some tests for our drivers, I stumbled upon this tempest test
> 'force_detach_volume'
> that is calling Cinder API passing a 'None' connector. At the time this was
> added several CIs
> went down, and people started discussing whether this (accepting/sending a
> None connector)
> would be the proper behavior for what is expected to a driver to do[1]. So,
> some of CIs started
> just skipping that test[2][3][4] and others implemented fixes that made the
> driver to disconnected
> the volume from all hosts if a None connector was received[5][6][7].

Right, it was determined the correct behavior for this was to disconnect the
volume from all hosts. The CIs that are skipping this test should stop doing so
(once their drivers are fixed of course).

> 
> While implementing this fix seems to be straightforward, I feel that just
> removing the volume
> from all hosts is not the correct thing to do mainly considering that we
> can have multi-attach.
> 

I don't think multiattach makes a difference here. Someone is forcibly
detaching the volume and not specifying an individual connection. So based on
that, Cinder should be removing any connections, whether that is to one or
several hosts.

> So, my questions are: What is the best way to fix this problem? Should
> Cinder API continue to
> accept detachments with None connectors? If, so, what would be the effects
> on other Nova
> attachments for the same volume? Is there any side effect if the volume is
> not multi-attached?
> 
> Additionally to this thread here, I should bring this topic to tomorrow's
> Cinder's meeting,
> so please join if you have something to share.
> 

+1 - good plan.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
yes
 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 05:00 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:
yes,My cinder driver is  LVM+LIO.I have upload the test result in  appendix.Can 
you show me your test results?Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver



 
Rambo,

Did you try to use LVM+LIO target driver? It shows pretty good performance 
comparing to BlockDeviceDriver,


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a 
viable option - benchmarked them several times, unsatisfactory 
results.Sometimes it's IOPS is twice as bad,could you show me your test 
data?Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 





__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:

> yes,My cinder driver is  LVM+LIO.I have upload the test result in
> appendix.Can you show me your test results?Thank you!
>
>
>
> -- Original --
> *From: * "Ivan Kolodyazhny";
> *Date: * Tue, Jul 17, 2018 04:09 PM
> *To: * "OpenStack Developmen";
> *Subject: * Re: [openstack-dev] [cinder] about block device driver
>
> Rambo,
>
> Did you try to use LVM+LIO target driver? It shows pretty good performance
> comparing to BlockDeviceDriver,
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
>
>> Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is
>> not a viable option - benchmarked them several times, unsatisfactory
>> results.Sometimes it's IOPS is twice as bad,could you show me your test
>> data?Thank you!
>>
>>
>>
>> Cheers,
>> Rambo
>>
>>
>> ---------- Original --
>> *From:* "Sean McGinnis";
>> *Date:* Monday, July 16, 2018, 9:32 PM
>> *To:* "OpenStack Developmen";
>> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>>
>> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
>> > On 16/07, Rambo wrote:
>> > > Well,in my opinion,the BlockDeviceDriver is more suitable than any
>> other solution for data processing scenarios.Does the community will agree
>> to merge the BlockDeviceDriver to the Cinder repository again if our
>> company hold the maintainer and CI?
>> > >
>> >
>> > Hi,
>> >
>> > I'm sure the community will be happy to merge the driver back into the
>> > repository.
>> >
>>
>> The other reason for its removal was its inability to meet the minimum
>> feature
>> set required for Cinder drivers along with benchmarks showing the LVM and
>> iSCSI
>> driver could be tweaked to have similar or better performance.
>>
>> The other option would be to not use Cinder volumes so you just use local
>> storage on your compute nodes.
>>
>> Readding the block device driver is not likely an option.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Yes, my Cinder driver is LVM+LIO. I have uploaded the test results as an
attachment. Can you show me your test results? Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Rambo,

Did you try to use LVM+LIO target driver? It shows pretty good performance 
comparing to BlockDeviceDriver,


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a 
viable option - benchmarked them several times, unsatisfactory 
results.Sometimes it's IOPS is twice as bad,could you show me your test 
data?Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  




__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Rambo,

Did you try the LVM+LIO target driver? It shows pretty good performance
compared to BlockDeviceDriver.
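
For reference, a rough backend stanza for that setup (option names should be
double-checked for your release; the volume group and backend names are just
examples):

    [lvm-lio]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    # 'lioadm' selects the LIO iSCSI target; older releases spell this
    # option iscsi_helper instead of target_helper.
    target_helper = lioadm
    volume_backend_name = lvm-lio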

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:

> Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is
> not a viable option - benchmarked them several times, unsatisfactory
> results.Sometimes it's IOPS is twice as bad,could you show me your test
> data?Thank you!
>
>
>
> Cheers,
> Rambo
>
>
> -- Original --
> *From:* "Sean McGinnis";
> *Date:* Monday, July 16, 2018, 9:32 PM
> *To:* "OpenStack Developmen";
> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>
> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> > On 16/07, Rambo wrote:
> > > Well,in my opinion,the BlockDeviceDriver is more suitable than any
> other solution for data processing scenarios.Does the community will agree
> to merge the BlockDeviceDriver to the Cinder repository again if our
> company hold the maintainer and CI?
> > >
> >
> > Hi,
> >
> > I'm sure the community will be happy to merge the driver back into the
> > repository.
> >
>
> The other reason for its removal was its inability to meet the minimum
> feature
> set required for Cinder drivers along with benchmarks showing the LVM and
> iSCSI
> driver could be tweaked to have similar or better performance.
>
> The other option would be to not use Cinder volumes so you just use local
> storage on your compute nodes.
>
> Readding the block device driver is not likely an option.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not a
viable option - I benchmarked them several times with unsatisfactory results.
Sometimes the IOPS is twice as bad. Could you show me your test data? Thank you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: 2018年7月16日(星期一) 晚上9:32
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Rambo
But I want to create a volume-backed server for data processing scenarios, so
maybe the BlockDeviceDriver is more suitable.
 
-- Original --
From: "Sean McGinnis"; 
Date: 2018年7月16日(星期一) 晚上9:32
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Jay Pipes

On 07/16/2018 10:15 AM, arkady.kanev...@dell.com wrote:

Is this for ephemeral storage handling?


For both ephemeral as well as root disk.

In other words, just act like Cinder isn't there and attach a big local 
root disk to the instance.


Best,
-jay


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, July 16, 2018 8:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] about block device driver

On 07/16/2018 09:32 AM, Sean McGinnis wrote:

The other option would be to not use Cinder volumes so you just use
local storage on your compute nodes.


^^ yes, this.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Arkady.Kanevsky
Is this for ephemeral storage handling?

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, July 16, 2018 8:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] about block device driver

On 07/16/2018 09:32 AM, Sean McGinnis wrote:
> The other option would be to not use Cinder volumes so you just use 
> local storage on your compute nodes.

^^ yes, this.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Jay Pipes

On 07/16/2018 09:32 AM, Sean McGinnis wrote:

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.


^^ yes, this.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Sean McGinnis
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> > solution for data processing scenarios. Will the community agree to merge
> > the BlockDeviceDriver back into the Cinder repository if our company
> > provides the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Readding the block device driver is not likely an option.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Gorka Eguileor
On 16/07, Rambo wrote:
> Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> solution for data processing scenarios. Will the community agree to merge
> the BlockDeviceDriver back into the Cinder repository if our company
> provides the maintainer and CI?
>

Hi,

I'm sure the community will be happy to merge the driver back into the
repository.

Still, I would recommend you look at the "How To Contribute a driver to
Cinder" guide [1] and the "Third Party CI Requirement Policy"
documentation [2], then add this topic to Wednesday's meeting agenda [3]
and attend the meeting to make sure that everybody is on board with it.

Best regards,
Gorka.


[1]: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
[2]: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[3]: https://etherpad.openstack.org/p/cinder-rocky-meeting-agendas

>
> -- Original --
> From: "Gorka Eguileor";
> Date: 2018年7月16日(星期一) 下午5:20
> To: "OpenStack Developmen";
> Subject: Re: [openstack-dev] [cinder] about block device driver
>
>
> On 16/07, Rambo wrote:
> > Hi,all
> >
> >
> >  In the Cinder repository, I noticed that the BlockDeviceDriver driver
> > is being deprecated and will eventually be removed with the Queens release.
> >
> >
> > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
> >
> >
> > In my use case, the instances using Cinder perform intense I/O, so iSCSI
> > or LVM is not a viable option - I have benchmarked them several times
> > since Juno, with unsatisfactory results. For data processing scenarios it
> > is always better to use local storage than any SAN/NAS solution.
> >
> >
> > So I would really like to know why it was deprecated. Is there a better
> > driver to replace it? What do you suggest using once BlockDeviceDriver is
> > removed? Can you tell me about this? Thank you very much!
> >
> > Best Regards
> > Rambo
>
> Hi,
>
> If I remember correctly the driver was deprecated because it had no
> maintainer or CI.  In Cinder we require our drivers to have both,
> otherwise we can't guarantee that they actually work or that anyone will
> fix it if it gets broken.
>
> Cheers,
> Gorka.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Rambo
Well, in my opinion, the BlockDeviceDriver is more suitable than any other
solution for data processing scenarios. Will the community agree to merge
the BlockDeviceDriver back into the Cinder repository if our company
provides the maintainer and CI?
 
 
-- Original --
From: "Gorka Eguileor"; 
Date: 2018年7月16日(星期一) 下午5:20
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On 16/07, Rambo wrote:
> Hi,all
>
>
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver is
> being deprecated and will eventually be removed with the Queens release.
>
>
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>
>
> In my use case, the instances using Cinder perform intense I/O, so iSCSI or
> LVM is not a viable option - I have benchmarked them several times since
> Juno, with unsatisfactory results. For data processing scenarios it is
> always better to use local storage than any SAN/NAS solution.
>
>
> So I would really like to know why it was deprecated. Is there a better
> driver to replace it? What do you suggest using once BlockDeviceDriver is
> removed? Can you tell me about this? Thank you very much!
>
> Best Regards
> Rambo

Hi,

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Gorka Eguileor
On 16/07, Rambo wrote:
> Hi,all
>
>
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver is
> being deprecated and will eventually be removed with the Queens release.
>
>
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>
>
> In my use case, the instances using Cinder perform intense I/O, so iSCSI or
> LVM is not a viable option - I have benchmarked them several times since
> Juno, with unsatisfactory results. For data processing scenarios it is
> always better to use local storage than any SAN/NAS solution.
>
>
> So I would really like to know why it was deprecated. Is there a better
> driver to replace it? What do you suggest using once BlockDeviceDriver is
> removed? Can you tell me about this? Thank you very much!
>
> Best Regards
> Rambo

Hi,

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Planning Etherpad for Denver 2018 PTG

2018-07-10 Thread Erlon Cruz
Thanks Jay!

Em sex, 6 de jul de 2018 às 14:30, Jay S Bryant 
escreveu:

> All,
>
> I have created an etherpad to start planning for the Denver PTG in
> September. [1]  Please start adding topics to the etherpad.
>
> Look forward to seeing you all there!
>
> Jay
>
> (jungleboyj)
>
> [1] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-06 Thread Luke Hinds
On Thu, Jul 5, 2018 at 6:17 PM, Doug Hellmann  wrote:

> Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400:
> > On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
> > nishant.e.ku...@ericsson.com> wrote:
> >
> > > Hi,
> > >
> > >
> > >
> > > I have registered a blueprint for adding http security headers -
> > > https://blueprints.launchpad.net/cinder/+spec/http-security-headers
> > >
> > >
> > >
> > > Reason for introducing this change - I work for the AT&T cloud project –
> > > Network Cloud (earlier known as AT&T Integrated Cloud). As part of
> > > working there we have introduced this change within all the services as
> > > a kind of downstream change, but we would like to see it become part of
> > > the upstream community. While we did not face any major threats without
> > > this change, during our investigation we found that when dealing with
> > > web services we should maximize security as much as possible, and we
> > > came up with a list of HTTP security headers that we should include as
> > > part of the OpenStack services. I would like to introduce this change as
> > > part of Cinder to start off and then propagate it to all the services.
> > >
> > >
> > >
> > > Some reference links which might give more insight into this:
> > >
> > >- https://www.owasp.org/index.php/OWASP_Secure_Headers_
> > >Project#tab=Headers
> > >- https://www.keycdn.com/blog/http-security-headers/
> > >- https://securityintelligence.com/an-introduction-to-http-
> > >response-headers-for-security/
> > >
> > > Please let me know if this looks good and whether it can be included as
> > > part of Cinder followed by other services. More details on how the
> > > implementation will be done are in the blueprint, but any better ideas
> > > for implementation are welcome too!
> > >
> >
> > Wouldn't this be a job for the HTTP server in front of cinder (or
> whatever
> > service)? Especially "Strict-Transport-Security" as one shouldn't be
> > enabling that without ensuring a correct TLS config.
> >
> > Bonus points in that upstream wouldn't need any changes, and we won't
> need
> > to change every project. :)
> >
> > // jim
>
> Yes, this feels very much like something the deployment tools should
> do when they set up Apache or uWSGI or whatever service is in front
> of each API WSGI service.
>
> Doug
>
>
I agree, this should all be set within an installer, rather than in the base
project itself. Horizon (or rather Django) has directives to enable many of
the common security header fields, but rather than set these directly in
Horizon's local_settings, we patched the openstack puppet-horizon module.
Take the following, for example, around X-Frame disabling:

https://github.com/openstack/puppet-horizon/blob/218c35ea7bc08dd88d936ab79b14e5ce2b94ea44/releasenotes/notes/disallow_iframe_embed-f0ffa1cabeca5b1e.yaml#L2

The same approach should be used elsewhere, with whatever the preferred
deployment tool is (puppet, chef, ansible etc.).  That way, if a decision is
made to roll out TLS, certificate pinning etc. can also be toggled in the
same tool flow.
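
For illustration only, the "do it in front of the project" approach does not
even need web-server directives; a tiny WSGI/paste filter dropped in by the
deployment tool can inject the headers. This is just a sketch - the filter
name, header list and values below are my own assumptions, not an existing
Cinder option:

    # security_headers.py -- hypothetical paste filter a deployment could put
    # in front of an API service; header names/values are examples only.
    class SecurityHeadersMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def _start_response(status, headers, exc_info=None):
                # append the security headers to whatever the app returned
                headers.extend([
                    ('X-Frame-Options', 'DENY'),
                    ('X-Content-Type-Options', 'nosniff'),
                    ('X-XSS-Protection', '1; mode=block'),
                    # only add HSTS if TLS is actually terminated in front
                    ('Strict-Transport-Security', 'max-age=31536000'),
                ])
                return start_response(status, headers, exc_info)
            return self.app(environ, _start_response)

    def filter_factory(global_conf, **local_conf):
        def _factory(app):
            return SecurityHeadersMiddleware(app)
        return _factory

Whether something like that lives in the paste pipeline or in the
Apache/uWSGI layer is exactly the deployment-tool decision discussed above.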



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-05 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400:
> On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
> nishant.e.ku...@ericsson.com> wrote:
> 
> > Hi,
> >
> >
> >
> > I have registered a blueprint for adding http security headers -
> > https://blueprints.launchpad.net/cinder/+spec/http-security-headers
> >
> >
> >
> > Reason for introducing this change - I work for AT&T cloud project –
> > Network Cloud (Earlier known as AT&T integrated Cloud). As part of working
> > there we have introduced this change within all the services as a kind of
> > downstream change, but we would like to see it become part of the upstream
> > community. While we did not face any major threats without this change,
> > during our investigation we found that when dealing with web services we
> > should maximize security as much as possible, and we came up with a list of
> > HTTP security headers that we should include as part of the OpenStack services.
> > I would like to introduce this change as part of cinder to start off and
> > then propagate this to all the services.
> >
> >
> >
> > Some reference links which might give more insight into this:
> >
> >- https://www.owasp.org/index.php/OWASP_Secure_Headers_
> >Project#tab=Headers
> >- https://www.keycdn.com/blog/http-security-headers/
> >- https://securityintelligence.com/an-introduction-to-http-
> >response-headers-for-security/
> >
> > Please let me know if this looks good and whether it can be included as
> > part of Cinder followed by other services. More details on how the
> > implementation will be done are in the blueprint, but any better ideas
> > for implementation are welcome too!
> >
> 
> Wouldn't this be a job for the HTTP server in front of cinder (or whatever
> service)? Especially "Strict-Transport-Security" as one shouldn't be
> enabling that without ensuring a correct TLS config.
> 
> Bonus points in that upstream wouldn't need any changes, and we won't need
> to change every project. :)
> 
> // jim

Yes, this feels very much like something the deployment tools should
do when they set up Apache or uWSGI or whatever service is in front
of each API WSGI service.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][security][api-wg] Adding http security headers

2018-07-05 Thread Jim Rollenhagen
On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E <
nishant.e.ku...@ericsson.com> wrote:

> Hi,
>
>
>
> I have registered a blueprint for adding http security headers -
> https://blueprints.launchpad.net/cinder/+spec/http-security-headers
>
>
>
> Reason for introducing this change - I work for AT&T cloud project –
> Network Cloud (Earlier known as AT&T integrated Cloud). As part of working
> there we have introduced this change within all the services as a kind of
> downstream change, but we would like to see it become part of the upstream
> community. While we did not face any major threats without this change,
> during our investigation we found that when dealing with web services we
> should maximize security as much as possible, and we came up with a list of
> HTTP security headers that we should include as part of the OpenStack services.
> I would like to introduce this change as part of cinder to start off and
> then propagate this to all the services.
>
>
>
> Some reference links which might give more insight into this:
>
>- https://www.owasp.org/index.php/OWASP_Secure_Headers_
>Project#tab=Headers
>- https://www.keycdn.com/blog/http-security-headers/
>- https://securityintelligence.com/an-introduction-to-http-
>response-headers-for-security/
>
> Please let me know if this looks good and whether it can be included as
> part of Cinder followed by other services. More details on how the
> implementation will be done are in the blueprint, but any better ideas
> for implementation are welcome too!
>

Wouldn't this be a job for the HTTP server in front of cinder (or whatever
service)? Especially "Strict-Transport-Security" as one shouldn't be
enabling that without ensuring a correct TLS config.

Bonus points in that upstream wouldn't need any changes, and we won't need
to change every project. :)

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-26 Thread Volodymyr Litovka

Hi Sean,

thanks for the responce, my questions and comments below.

On 6/25/18 9:42 PM, Sean McGinnis wrote:


Not sure if it's an option for you, but in the Pike release support was added
to be able to extend attached volumes. There are several caveats with this
feature though. I believe it only works with libvirt, and if I remember right,
only newer versions of libvirt. You need to have notifications working for Nova
to pick up that Cinder has extended the volume.
The Pike release notes state the following: "It is now possible to signal 
and perform an online volume size change as of the 2.51 microversion 
using the volume-extended external event. Nova will perform the volume 
extension so the host can detect its new size. It will also resize the 
device in QEMU so instance can detect the new disk size without 
rebooting. Currently only the *libvirt compute driver with iSCSI and FC 
volumes supports the online volume size change*." And yes, it doesn't 
work for me since I'm using CEPH as backend.


The Queens release notes say nothing about changes. The feature matrix
(https://docs.openstack.org/nova/queens/user/support-matrix.html) says
it's supported on libvirt/x86 without any further details. Does
anybody know whether this feature is implemented in Queens for backends
other than iSCSI and FC?


The specs mentioned earlier talk about how to make the result of a resize
visible to the VM immediately, without restarting the VM, which is not what
I am asking for. My question is how to resize a volume and make the new size
available after a restart, see below



In fact, I'm ok with a delayed resize (upon power-cycle), and it's not an
issue for me that the VM doesn't detect the change immediately. What I want to
understand is whether changes to Cinder (and, thus, underlying changes to CEPH)
are safe for the VM while it's in the active state.

No, this is not considered safe. You are forcing the volume state to be
available when it is in fact not.


In the general case, I agree with you. For example, I can imagine that
allocation of new blocks could fail if the volume is declared as available,
but, in the particular case of CEPH:


- in short:
# the status of the volume in Cinder means nothing to CEPH

- in detail:
# Cinder does the provisioning and maintenance
# kvm/libvirt works directly with CEPH (after getting this endpoint from
<-Nova<-Cinder)
# and I see no changes in CEPH's view of the volume while it is available
in Cinder:


* in-use:
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: 
volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap

    overlap: 3072 MB

* available:
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: 
volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap

    overlap: 3072 MB

# and, during copying data, CEPH successfully allocates additional 
blocks to the volume:


* before copying (volume is already available in Cinder)
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME    PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb  20480M *2256M*

* after copying (while volume is available in Cinder)
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME    PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb  20480M *2560M*

# which preserved after back to in-use:
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME    PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb  20480M *2560M*
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: 
volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap

    overlap: 3072 MB
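
(As an aside, the same checks - and the resize itself - can be scripted
against CEPH with the python rbd bindings instead of the rbd CLI. A rough
sketch using the pool/image name from above, shown only to illustrate that
RBD tracks nothing but the size and knows nothing about Cinder's
available/in-use status; resizing this way has the same "behind Cinder's
back" caveats discussed in this thread:)

    import rados
    import rbd

    # connect to the cluster and open the 'volumes' pool used above
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')
        try:
            image = rbd.Image(ioctx,
                              'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb')
            try:
                print('current size: %d bytes' % image.size())
                # grow the image to 30 GiB
                image.resize(30 * 1024 ** 3)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()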

Actually, the only safety problem I see is a possible administrative
race - since the volume is "available", a cloud administrator or any kind of
automation can break dependencies. I

Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-25 Thread Sean McGinnis
> 
> In fact, I'm ok with a delayed resize (upon power-cycle), and it's not an
> issue for me that the VM doesn't detect the change immediately. What I want to
> understand is whether changes to Cinder (and, thus, underlying changes to CEPH)
> are safe for the VM while it's in the active state.
> 

No, this is not considered safe. You are forcing the volume state to be
available when it is in fact not.

Not sure if it's an option for you, but in the Pike release support was added
to be able to extend attached volumes. There are several caveats with this
feature though. I believe it only works with libvirt, and if I remember right,
only newer versions of libvirt. You need to have notifications working for Nova
to pick up that Cinder has extended the volume.

You can get some details from the cinder spec:

https://specs.openstack.org/openstack/cinder-specs/specs/pike/extend-attached-volume.html

And the corresponding Nova spec:

http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html

You may also want to read through the mailing list thread if you want to get in
to some of the nitty gritty details behind why certain design choices were
made:

http://lists.openstack.org/pipermail/openstack-dev/2017-April/115292.html
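
For reference, once the cloud is on Pike or later, the client side of an
attached-volume extend is just the normal extend call made with API
microversion 3.42. A minimal sketch with python-cinderclient (auth URL,
credentials and volume UUID below are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from cinderclient import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # 3.42 is the Cinder API microversion that allows extending an in-use volume
    cinder = client.Client('3.42', session=sess)
    cinder.volumes.extend('5474ca4f-40ad-4151-9916-d9b4e9de14eb', 30)  # new size in GiB

Nova still needs the volume-extended external event wired up (see the specs
above) for the guest to see the new size without a reboot.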

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-25 Thread Chris Friesen

On 06/23/2018 08:38 AM, Volodymyr Litovka wrote:

Dear friends,

I did some tests with making volume available without stopping VM. I'm using
CEPH and these steps produce the following results:

1) openstack volume set --state available [UUID]
- nothing changed inside both VM (volume is still connected) and CEPH
2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside VM (volume is still connected and has an old size)
- size of CEPH volume changed to the new value
3) during these operations I was copying a lot of data from external source and
all md5 sums are the same on both VM and source
4) changes on VM happens upon any kind of power-cycle (e.g. reboot (either soft
or hard): openstack server reboot [--hard] [VM uuid] )
- note: NOT after 'reboot' from inside VM

It seems that all these manipulations with cinder just update internal
parameters of the cinder/CEPH subsystems, without immediate effect for VMs. Is it
safe to use this mechanism in this particular environment (e.g. CEPH as backend)?


There are a different set of instructions[1] which imply that the change should 
be done via the hypervisor, and that the guest will then see the changes 
immediately.


Also, if you resize the backend in a way that bypasses nova, I think it will 
result in the placement data being wrong.  (At least temporarily.)


Chris


[1] 
https://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Online_resizing_of_KVM_images_.28rbd.29
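
The hypervisor-side step those instructions describe is essentially "virsh
blockresize"; for completeness, a rough equivalent using the libvirt python
bindings (the domain and device names below are made up, and as noted above
this bypasses nova, so Cinder/placement bookkeeping will not know about it):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000001')
        # resize the guest-visible block device to 30 GiB
        dom.blockResize('vdb', 30 * 1024 ** 3,
                        flags=libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)
    finally:
        conn.close()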



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-25 Thread Volodymyr Litovka

Hi Jay,

We have had similar issues with extending attached volumes that are 
iSCSI based. In that case the VM has to be forced to rescan the scsi bus.


In this case I am not sure if there needs to be a change to Libvirt or 
to rbd or something else.


I would recommend reaching out to John Bernard for help.


In fact, I'm ok with a delayed resize (upon power-cycle), and it's not an
issue for me that the VM doesn't detect the change immediately. What I want to
understand is whether changes to Cinder (and, thus, underlying changes to
CEPH) are safe for the VM while it's in the active state.


Hopefully, Jon will help with this question.

Thank you!

On 6/23/18 8:41 PM, Jay Bryant wrote:



On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka > wrote:


Dear friends,

I did some tests with making volume available without stopping VM.
I'm
using CEPH and these steps produce the following results:

1) openstack volume set --state available [UUID]
- nothing changed inside both VM (volume is still connected) and CEPH
2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside VM (volume is still connected and has an
old size)
- size of CEPH volume changed to the new value
3) during these operations I was copying a lot of data from external
source and all md5 sums are the same on both VM and source
4) changes on VM happens upon any kind of power-cycle (e.g. reboot
(either soft or hard): openstack server reboot [--hard] [VM uuid] )
- note: NOT after 'reboot' from inside VM

It seems that all these manipulations with cinder just update internal
parameters of the cinder/CEPH subsystems, without immediate effect for VMs.
Is it safe to use this mechanism in this particular environment (e.g.
CEPH as backend)?

From a practical point of view, it's useful when somebody, for example,
updates a project in batch mode and will then manually reboot every VM
affected by the update at an appropriate time with minimized downtime
(it's just a reboot, not a manual stop/update/start).

Thank you.

-- 
Volodymyr Litovka

   "Vision without Execution is Hallucination." -- Thomas Edison



--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] making volume available without stopping VM

2018-06-23 Thread Jay Bryant
On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka  wrote:

> Dear friends,
>
> I did some tests with making volume available without stopping VM. I'm
> using CEPH and these steps produce the following results:
>
> 1) openstack volume set --state available [UUID]
> - nothing changed inside both VM (volume is still connected) and CEPH
> 2) openstack volume set --size [new size] --state in-use [UUID]
> - nothing changed inside VM (volume is still connected and has an old size)
> - size of CEPH volume changed to the new value
> 3) during these operations I was copying a lot of data from external
> source and all md5 sums are the same on both VM and source
> 4) changes on VM happens upon any kind of power-cycle (e.g. reboot
> (either soft or hard): openstack server reboot [--hard] [VM uuid] )
> - note: NOT after 'reboot' from inside VM
>
> It seems that all these manipulations with cinder just update internal
> parameters of the cinder/CEPH subsystems, without immediate effect for VMs.
> Is it safe to use this mechanism in this particular environment (e.g.
> CEPH as backend)?
>
> From a practical point of view, it's useful when somebody, for example,
> updates a project in batch mode and will then manually reboot every VM
> affected by the update at an appropriate time with minimized downtime
> (it's just a reboot, not a manual stop/update/start).
>
> Thank you.
>
> --
> Volodymyr Litovka
>"Vision without Execution is Hallucination." -- Thomas Edison
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Volodymyr,

We have had similar issues with extending attached volumes that are iSCSI
based. In that case the VM has to be forced to rescan the scsi bus.

In this case I am not sure if there needs to be a change to Libvirt or to
rbd or something else.

I would recommend reaching out to John Bernard for help.

Jay

>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-15 Thread Erlon Cruz
Hi Thomas,

Yes. If you have more than 1 volume node, or 1 volume node with multiple
backend definitions, each volume node should have at least one [backend]
section that points to your storage configuration. You can add that config for
each of them.
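
Purely as an illustration (the backend names and numbers below are made up),
the per-backend form in cinder.conf looks something like this:

    [DEFAULT]
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes-1
    volume_backend_name = lvm-1
    # keep 20% of this backend free, e.g. for backup snapshots
    reserved_percentage = 20

    [lvm-2]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes-2
    volume_backend_name = lvm-2
    reserved_percentage = 10

Each volume node (or each backend stanza on a node) can carry its own value.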

Erlon

Em sex, 15 de jun de 2018 às 09:52, Thomas Goirand 
escreveu:

> On 06/14/2018 01:10 PM, Erlon Cruz wrote:
> > Hi Thomas,
> >
> > The reserved_percentage *is* taken into account for non-thin-provisioning
> > backends. So you can use it to set aside the space you need for backups. It
> > is a per-backend configuration.
>
> Oh. Reading the doc, I thought it was only for thin provisioning, it's
> nice if it works with "normal" cinder LVM then ... :P
>
> When you say "per backend", does it means it can be set differently on
> each volume node?
>
> > If you have already tried to use it and it is not working, please let
> > us know what release you are using, because despite this being the
> > current (and proper) behavior, it might not have been like this in the past.
> >
> > Erlon
>
> Will do, thanks.
>
> Cheers,
>
> Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad

2018-06-15 Thread Chris Dent

On Fri, 15 Jun 2018, Eric Fried wrote:


We just merged an initial pass at direct access to the placement service
[1].  See the test_direct suite for simple usage examples.

Note that this was written primarily to satisfy the FFU use case in
blueprint reshape-provider-tree [2] and therefore likely won't have
everything cinder needs.  So play around with it, but please do not put
it anywhere near production until we've had some more collab.  Find us
in #openstack-placement.


Just to word this a bit more strongly (see also
http://p.anticdent.org/2nbF, where this is paraphrased from):

It would be bad news for cinder to start from placement direct. Better
would be for cinder to figure out how to use placement "normally", and
then for the standalone special case, consider placement direct or
something derived from it. PlacementDirect, as currently written,
is really for special cases only, for use in extremis only.
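
To make "normally" concrete: it just means talking to the placement REST API
over HTTP with a keystoneauth session, the same way nova does. A rough sketch
(the endpoint, credentials and microversion below are placeholders, not a
recommendation):

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='cinder', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    placement = adapter.Adapter(session.Session(auth=auth),
                                service_type='placement')
    # request a specific microversion explicitly
    resp = placement.get('/resource_providers',
                         headers={'OpenStack-API-Version': 'placement 1.17'})
    print(resp.json())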

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad

2018-06-15 Thread Eric Fried
We just merged an initial pass at direct access to the placement service
[1].  See the test_direct suite for simple usage examples.

Note that this was written primarily to satisfy the FFU use case in
blueprint reshape-provider-tree [2] and therefore likely won't have
everything cinder needs.  So play around with it, but please do not put
it anywhere near production until we've had some more collab.  Find us
in #openstack-placement.

-efried

[1] https://review.openstack.org/572576
[2] https://review.openstack.org/572583

On 06/04/2018 07:57 AM, Jay S Bryant wrote:
> 
> 
> On 6/1/2018 7:28 PM, Chris Dent wrote:
>> On Wed, 9 May 2018, Chris Dent wrote:
>>
>>> I've started an etherpad for the forum session in Vancouver devoted
>>> to discussing the possibility of tracking and allocation resources
>>> in Cinder using the Placement service. This is not a done deal.
>>> Instead the session is to discuss if it could work and how to make
>>> it happen if it seems like a good idea.
>>>
>>> The etherpad is at
>>>
>>>    https://etherpad.openstack.org/p/YVR-cinder-placement
>>
>> The session went well. Some of the members of the cinder team who
>> might have had more questions had not been able to be at summit so
>> we were unable to get their input.
>>
>> We clarified some of the things that cinder wants to be able to
>> accomplish (run multiple schedulers in active-active and avoid race
>> conditions) and the fact that this is what placement is built for.
>> We also made it clear that placement itself can be highly available
>> (and scalable) because of its nature as a dead-simple web app over a
>> database.
>>
>> The next steps are for the cinder team to talk amongst themselves
>> and socialize the capabilities of placement (with the help of
>> placement people) and see if it will be suitable. It is unlikely
>> there will be much visible progress in this area before Stein.
> Chris,
> 
> Thanks for this update.  I have it on the agenda for the Cinder team to
> discuss this further.  We ran out of time in last week's meeting but
> will hopefully get some time to discuss it this week.  We will keep you
> updated as to how things progress on our end and pull in the placement
> guys as necessary. 
> 
> Jay
>>
>> See the etherpad for a bit more detail.
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-15 Thread Thomas Goirand
On 06/14/2018 01:10 PM, Erlon Cruz wrote:
> Hi Thomas,
> 
> The reserved_percentage *is* taken in account for non thin provisoning
> backends. So you can use it to spare the space you need for backups. It
> is a per backend configuration.

Oh. Reading the doc, I thought it was only for thin provisioning, it's
nice if it works with "normal" cinder LVM then ... :P

When you say "per backend", does it means it can be set differently on
each volume node?

> If you have already tried to use it and it is not working, please let
> us know what release you are using, because despite this being the
> current (and proper) behavior, it might not have been like this in the past.
> 
> Erlon

Will do, thanks.

Cheers,

Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Sean McGinnis
On Thu, Jun 14, 2018 at 08:10:56AM -0300, Erlon Cruz wrote:
> Hi Thomas,
> 
> The reserved_percentage *is* taken into account for non-thin-provisioning
> backends. So you can use it to set aside the space you need for backups. It is
> a per-backend configuration.
> 
> If you have already tried to use it and it is not working, please let us
> know what release you are using, because despite this being the current
> (and proper) behavior, it might not have been like this in the past.
> 
> Erlon
> 

Guess I didn't read far enough ahead. Thanks Erlon!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Sean McGinnis
On Thu, Jun 14, 2018 at 11:13:22AM +0200, Thomas Goirand wrote:
> Hi,
> 
> When using cinder-backup, it first makes a snapshot, then sends the
> backup wherever it's configured. The issue is, to perform a backup, one
> needs to make a snapshot of a volume, meaning that one needs the size of
> the volume as empty space to be able to make the snapshot.
> 
> So, let's say I have a cinder volume of 1 TB; this means I need 1 TB of
> empty space on the volume node so I can do a backup of that volume.
> 
> My question is: is there a way to tell cinder to reserve an amount of
> space for this kind of operation? The only thing I saw was
> reserved_percentage, but this looks like for thin provisioning only. If
> this doesn't exist, would such new option be accepted by the Cinder
> community, as a per volume node option? Or should we do it as a global
> setting?
> 

I don't believe we have this as a setting anywhere today.

It would be best as a per-backend (or backend_defaults) setting, as some
backends can create volumes from snapshots without consuming any extra space,
while others, like LVM as you point out, need to allocate a considerable
amount of space.

Maybe someone else can chime in if they are aware of another way this is
already being handled, but I have not had to deal with it, so I'm not aware of
anything.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already?

2018-06-14 Thread Erlon Cruz
Hi Thomas,

The reserved_percentage *is* taken into account for non-thin-provisioning
backends. So you can use it to set aside the space you need for backups. It is
a per-backend configuration.

If you have already tried to use it and it is not working, please let us
know what release you are using, because despite this being the current
(and proper) behavior, it might not have been like this in the past.

Erlon

Em qui, 14 de jun de 2018 às 06:13, Thomas Goirand 
escreveu:

> Hi,
>
> When using cinder-backup, it first makes a snapshot, then sends the
> backup wherever it's configured. The issue is, to perform a backup, one
> needs to make a snapshot of a volume, meaning that one needs the size of
> the volume as empty space to be able to make the snapshot.
>
> So, let's say I have a cinder volume of 1 TB; this means I need 1 TB of
> empty space on the volume node so I can do a backup of that volume.
>
> My question is: is there a way to tell cinder to reserve an amount of
> space for this kind of operation? The only thing I saw was
> reserved_percentage, but this looks like for thin provisioning only. If
> this doesn't exist, would such new option be accepted by the Cinder
> community, as a per volume node option? Or should we do it as a global
> setting?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Enabling tempest test for in-use volume extending

2018-06-13 Thread Matt Riedemann

On 6/7/2018 8:33 AM, Lucio Seki wrote:

Since Pike release, Cinder supports in-use volume extending [1].
By default, it assumes that every storage backend is able to perform 
this operation.


Actually, by default, Tempest assumes that no backends support it, which 
is why it's disabled by default in Tempest:


https://review.openstack.org/#/c/480746/7/tempest/config.py

And then it's only enabled by default in devstack if you're using LVM and
libvirt, since at the time those were the only backends that supported it.


Thus, the tempest test for this feature should be enabled by default. A 
patch was submitted to enable it [2].


Please note that, after this patch is merged, the 3rd party CI
maintainers may need to override this configuration if the backend
being tested does not support in-use volume extending.


[1] Add ability to extend 'in-use' volume: 
https://review.openstack.org/#/c/454287/
[2] Enable tempest tests for attached volume extending: 
https://review.openstack.org/#/c/572188
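
For third-party CI maintainers, the override is a tempest.conf knob along
these lines (the option name is taken from the patch above; double-check it
against your Tempest version):

    [volume-feature-enabled]
    # set to False if the backend cannot extend an in-use (attached) volume
    extend_attached_volume = False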


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-07 Thread Jay S Bryant

Peter,

Thanks for getting that fixed.  The associated patch has been removed so 
we should be good now.


Jay


On 6/7/2018 9:15 AM, Peter Penchev wrote:

On Mon, Jun 04, 2018 at 02:40:09PM -0500, Sean McGinnis wrote:

Our CI has been chugging along since June 2nd (not really related to
the timing of your e-mail, it just so happened that we fixed another
small problem there).  You can see the logs at

   http://logs.ci-openstack.storpool.com/


Thanks Peter.

It looks like the reason the report run doesn't show StorPool reporting is
due to a mismatch on the name. The officially listed account is "StorPool CI"
according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI

But it appears on looking into this that the real CI account is "StorPool
distributed storage CI". Is that correct? If so, can you update the wiki with
the correct account name?

Right... sorry about that.  I've fixed that in the wiki -
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems&oldid=161664

Best regards,
Peter



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-07 Thread Peter Penchev
On Mon, Jun 04, 2018 at 02:40:09PM -0500, Sean McGinnis wrote:
> > 
> > Our CI has been chugging along since June 2nd (not really related to
> > the timing of your e-mail, it just so happened that we fixed another
> > small problem there).  You can see the logs at
> > 
> >   http://logs.ci-openstack.storpool.com/
> > 
> 
> Thanks Peter.
> 
> It looks like the reason the report run doesn't show StorPool reporting is
> due to a mismatch on the name. The officially listed account is "StorPool CI"
> according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI
> 
> But it appears on looking into this that the real CI account is "StorPool
> distributed storage CI". Is that correct? If so, can you update the wiki with
> the correct account name?

Right... sorry about that.  I've fixed that in the wiki -
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems&oldid=161664

Best regards,
Peter

-- 
Peter Pentchev  roam@{ringlet.net,debian.org,FreeBSD.org} p...@storpool.com
PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-04 Thread Sean McGinnis
> 
> Our CI has been chugging along since June 2nd (not really related to
> the timing of your e-mail, it just so happened that we fixed another
> small problem there).  You can see the logs at
> 
>   http://logs.ci-openstack.storpool.com/
> 

Thanks Peter.

It looks like the reason the report run doesn't show StorPool reporting is
due to a mismatch on the name. The officially listed account is "StorPool CI"
according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI

But it appears on looking into this that the real CI account is "StorPool
distributed storage CI". Is that correct? If so, can you update the wiki with
the correct account name?

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   5   6   7   8   9   10   >