[openstack-dev] [cinder][manila] PLEASE READ: Change of location for dinner ...

2018-11-13 Thread Jay S Bryant

Team,

The dinner has had to change locations.  Dicke Wirtin didn't get my 
online reservation and they are full.


NEW LOCATION: Joe's Restaurant and Wirsthaus -- Theodor-Heuss-Platz 10, 
14052 Berlin


The time is still 8 pm.

Please pass the word on!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-12 Thread Jay S Bryant

Ivan,

Yeah, I saw that, but it seems like there is no point in time without a 
conflict.  We need to get some food at some point, so anyone who wants to 
join can, and then we can head to the party if people want.


Jay


On 11/10/2018 8:07 AM, Ivan Kolodyazhny wrote:

Thanks for organizing this, Jay,

Just in case you missed it, the Matrix Party hosted by Trilio + Red Hat 
will be on Tuesday too.



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant wrote:


All,

I am working on scheduling a dinner for the Cinder team (and our
extended family that work on and around Cinder) during the Summit
in Berlin.  I have created an etherpad for people to RSVP for
dinner [1].

It seemed like Tuesday night after the Marketplace Mixer was the
best time for most people.

So, it will be a little later dinner ... 8 pm.  Here is the place:

Location: http://www.dicke-wirtin.de/
Address: Carmerstraße 9, 10623 Berlin, Germany

It looks like the kind of place that will fit for our usual group.

If planning to attend please add your name to the etherpad and I
will get a reservation in over the weekend.

Hope to see you all on Tuesday!

Jay

[1] https://etherpad.openstack.org/p/BER-cinder-outing-planning

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] No meeting this week ...

2018-11-12 Thread Jay S Bryant

Team,

Just a friendly reminder that we will not have our weekly meeting this 
week due to the OpenStack Summit.


Hope to see some of you here.  Otherwise, talk to you next week!

Thanks,

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-10 Thread Ivan Kolodyazhny
Thanks for organizing this, Jay,

Just in case you missed it, the Matrix Party hosted by Trilio + Red Hat will
be on Tuesday too.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant  wrote:

> All,
>
> I am working on scheduling a dinner for the Cinder team (and our extended
> family that work on and around Cinder) during the Summit in Berlin.  I have
> created an etherpad for people to RSVP for dinner [1].
>
> It seemed like Tuesday night after the Marketplace Mixer was the best time
> for most people.
>
> So, it will be a little later dinner ... 8 pm.  Here is the place:
> Location:  http://www.dicke-wirtin.de/
> Address:  Carmerstraße 9, 10623 Berlin, Germany
>
> It looks like the kind of place that will fit for our usual group.
>
> If planning to attend please add your name to the etherpad and I will get
> a reservation in over the weekend.
>
> Hope to see you all on Tuesday!
>
> Jay
>
> [1]  https://etherpad.openstack.org/p/BER-cinder-outing-planning
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-07 Thread Jay S Bryant

All,

I am working on scheduling a dinner for the Cinder team (and our 
extended family that work on and around Cinder) during the Summit in 
Berlin.  I have created an etherpad for people to RSVP for dinner [1].


It seemed like Tuesday night after the Marketplace Mixer was the best 
time for most people.


So, it will be a little later dinner ... 8 pm.  Here is the place:

Location: http://www.dicke-wirtin.de/
Address:  Carmerstraße 9, 10623 Berlin, Germany

It looks like the kind of place that will fit our usual group.

If you plan to attend, please add your name to the etherpad and I will 
get a reservation in over the weekend.


Hope to see you all on Tuesday!

Jay

[1] https://etherpad.openstack.org/p/BER-cinder-outing-planning

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-04 Thread Rambo
Sorry, I meant using the NFS driver as the cinder-backup driver. I see that the 
remotefs code implements create_volume_from_snapshot [1]; in this function the 
snapshot.status must be 'available'. But before that, in the API part, the 
snapshot.status has already been changed to 'backing-up' [2]. Is there something 
wrong? Can you tell me more about this? Thank you very much.




[1]https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/remotefs.py#L1259
[2]https://github.com/openstack/cinder/blob/master/cinder/backup/api.py#L292
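
To make the mismatch concrete, here is a rough Python sketch of the two
checks being described; it is a paraphrase of the logic in the files linked
above, not the actual cinder code.

    # Paraphrased sketch of the two code paths being discussed; the real
    # logic lives in cinder/backup/api.py and cinder/volume/drivers/remotefs.py.

    class Snapshot(object):
        def __init__(self, status='available'):
            self.status = status


    def backup_api_reserve(snapshot):
        # backup API side: the snapshot leaves 'available' before the
        # backup driver is invoked
        snapshot.status = 'backing-up'


    def remotefs_create_volume_from_snapshot(snapshot):
        # remotefs side: creating a volume from a snapshot is only allowed
        # while the snapshot is 'available'
        if snapshot.status != 'available':
            raise RuntimeError("Snapshot status must be 'available', got %r"
                               % snapshot.status)
        return 'volume-cloned-from-snapshot'


    snap = Snapshot()
    backup_api_reserve(snap)                        # status is now 'backing-up'
    try:
        remotefs_create_volume_from_snapshot(snap)  # rejected
    except RuntimeError as exc:
        print(exc)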
-- Original --
From:  "Eric Harney";
Date:  Fri, Nov 2, 2018 10:00 PM
To:  "jsbryant"; "OpenStack 
Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about use nfs driver to backup the 
volume snapshot

 
On 11/1/18 4:44 PM, Jay Bryant wrote:
> On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:
> 
>> Hi,all
>>
>>   Recently, I use the nfs driver as the cinder-backup backend, when I
>> use it to backup the volume snapshot, the result is return the
>> NotImplementedError[1].And the nfs.py doesn't has the
>> create_volume_from_snapshot function. Does the community plan to achieve
>> it which is as nfs as the cinder-backup backend?Can you tell me about
>> this?Thank you very much!
>>
>> Rambo,
> 
> The NFS driver doesn't have full snapshot support. I am not sure if that
> function missing was an oversight or not. I would reach out to Eric Harney
> as he implemented that code.
> 
> Jay
> 

create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.

But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?


>>
>>
>> [1]
>> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>>
>>
>>
>>
>>
>>
>>
>>
>> Best Regards
>> Rambo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-02 Thread Eric Harney

On 11/1/18 4:44 PM, Jay Bryant wrote:

On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:


Hi,all

  Recently I have been using the NFS driver as the cinder-backup backend. When I
use it to back up a volume snapshot, the result is a NotImplementedError [1],
and nfs.py does not have a create_volume_from_snapshot function. Does the
community plan to implement it so that NFS can be used as the cinder-backup
backend? Can you tell me about this? Thank you very much!

Rambo,


The NFS driver doesn't have full snapshot support. I am not sure whether the
missing function was an oversight or not. I would reach out to Eric Harney,
as he implemented that code.

Jay



create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.


But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?
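
For readers less familiar with the driver layout, a minimal sketch of the
inheritance being described here; class names are simplified and not copied
from the cinder source.

    class RemoteFSSnapDriverBase(object):
        # shared remotefs implementation that NFS-like drivers reuse
        def create_volume_from_snapshot(self, volume, snapshot):
            print('cloning %s from snapshot %s' % (volume, snapshot))


    class NfsDriver(RemoteFSSnapDriverBase):
        # no create_volume_from_snapshot here -- the inherited one is used,
        # which is why it does not appear in nfs.py itself
        pass


    NfsDriver().create_volume_from_snapshot('vol-1', 'snap-1')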






[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142








Best Regards
Rambo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-01 Thread Jay Bryant
On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:

> Hi,all
>
>  Recently, I use the nfs driver as the cinder-backup backend, when I
> use it to backup the volume snapshot, the result is return the
> NotImplementedError[1].And the nfs.py doesn't has the
> create_volume_from_snapshot function. Does the community plan to achieve
> it which is as nfs as the cinder-backup backend?Can you tell me about
> this?Thank you very much!
>
> Rambo,

The NFS driver doesn't have full snapshot support. I am not sure whether the
missing function was an oversight or not. I would reach out to Eric Harney,
as he implemented that code.

Jay

>
>
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>
>
>
>
>
>
>
>
> Best Regards
> Rambo
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-01 Thread Rambo
Hi,all


 Recently I have been using the NFS driver as the cinder-backup backend. When I use it 
to back up a volume snapshot, the result is a NotImplementedError [1], and 
nfs.py does not have a create_volume_from_snapshot function. Does the community 
plan to implement it so that NFS can be used as the cinder-backup backend? Can 
you tell me about this? Thank you very much!






[1]https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
















Best Regards
Rambo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-25 Thread Boxiang Zhu


Great, Jon. Thanks for your reply. I am looking forward to your report.


Cheers,
Boxiang
On 10/23/2018 22:01, Jon Bernard wrote:
* melanie witt  wrote:
On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

--
Jon


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621

Cheers,
-melanie






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread melanie witt

On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
 LOG.debug('Only available volumes can be migrated using backend '
   'assisted migration. Falling back to generic migration.')
 return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?


Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.


OK, thanks for this info, Jon. I'll be interested in your findings.

Cheers,
-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Cinder team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Clearly Cinder is an integral part of meeting the 'Basic Physical Data 
Center Management' design goal, and also contributes to the 'Hardware 
Virtualisation' goal.


The last paragraph in the 'Plays Well With Others' goal, about providing 
a standalone backend abstraction layer independently of the higher-level 
API (that might include e.g. scheduling and integration with other 
OpenStack services) was added with Cinder in mind, as I know that this 
is something the Cinder community has discussed, and it might also be 
applicable to other projects. Of course this is by no means mandatory, 
but it might be an interesting area to continue exploring.


The Partitioning section highlights the known mismatch between the 
concept of Availability Zones as borrowed from other clouds and the way 
operators use OpenStack, and offers a long-term design direction that 
Cinder might want to pursue in conjunction with Nova.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread Jay S. Bryant



On 10/23/2018 9:01 AM, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph'[So that the volume
will be created on one of two hosts. I assume that the volume created on
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So that, my real problem is that is there any work to migrate
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is
*available*(the spec) and another is *in-use*(my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
 LOG.debug('Only available volumes can be migrated using backend '
   'assisted migration. Falling back to generic migration.')
 return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

Jon,

Thanks for the explanation and investigation!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > 
> > The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
> > is only for available volume migration between two pools from the same
> > ceph cluster.
> > If the volume is in-use status[2], it will call the generic migration
> > function. So that as you
> > describe it, on the nova side, it raises NotImplementedError(_("Swap
> > only supports host devices").
> > The get_config of net volume[3] has not source_path.
> 
> Ah, OK, so you're trying to migrate a volume across two separate ceph
> clusters, and that is not supported.
> 
> > So does anyone try to succeed to migrate volume(in-use) with ceph
> > backend or is anyone doing something of it?
> 
> Hopefully someone can share their experience with trying to migrate volumes
> across separate ceph clusters. I unfortunately don't know anything about it.

If this is the case, then Cinder cannot request a storage-specific
migration which is typically more efficient.  The migration will require
a complete copy of each allocated block.  Whether the volume is attached
or not will determine who (cinder or nova) will perform the operation.

-- 
Jon

> 
> Best,
> -melanie
> 
> > [1] https://review.openstack.org/#/c/296150
> > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
> > [3] 
> > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > I created a new vm and a new volume with type 'ceph'[So that the volume
> > will be created on one of two hosts. I assume that the volume created on
> > host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
> > vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
> > host dev@rbd-2#ceph, but it failed with the exception
> > 'NotImplementedError(_("Swap only supports host devices")'.
> > 
> > So that, my real problem is that is there any work to migrate
> > volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
> > in the same ceph cluster?
> > The difference between the spec[2] with my scope is only one is
> > *available*(the spec) and another is *in-use*(my scope).
> > 
> > 
> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > [2] https://review.openstack.org/#/c/296150
> 
> Ah, I think I understand now, thank you for providing all of those details.
> And I think you explained it in your first email, that cinder supports
> migration of ceph volumes if they are 'available' but not if they are
> 'in-use'. Apologies that I didn't get your meaning the first time.
> 
> I see now the code you were referring to is this [3]:
> 
> if volume.status not in ('available', 'retyping', 'maintenance'):
> LOG.debug('Only available volumes can be migrated using backend '
>   'assisted migration. Falling back to generic migration.')
> return refuse_to_migrate
> 
> So because your volume is not 'available', 'retyping', or 'maintenance',
> it's falling back to generic migration, which will end up with an error in
> nova because the source_path is not set in the volume config.
> 
> Can anyone from the cinder team chime in about whether the ceph volume
> migration could be expanded to allow migration of 'in-use' volumes? Is there
> a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

-- 
Jon

> 
> [3] 
> https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621
> 
> Cheers,
> -melanie
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-22 Thread melanie witt

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
I created a new vm and a new volume with type 'ceph'[So that the volume 
will be created on one of two hosts. I assume that the volume created on 
host dev@rbd-1#ceph this time]. Next step is to attach the volume to the 
vm. At last I want to migrate the volume from host dev@rbd-1#ceph to 
host dev@rbd-2#ceph, but it failed with the exception 
'NotImplementedError(_("Swap only supports host devices")'.


So that, my real problem is that is there any work to migrate 
volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool) 
in the same ceph cluster?
The difference between the spec[2] with my scope is only one is 
*available*(the spec) and another is *in-use*(my scope).



[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those 
details. And I think you explained it in your first email, that cinder 
supports migration of ceph volumes if they are 'available' but not if 
they are 'in-use'. Apologies that I didn't get your meaning the first time.


I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance', 
it's falling back to generic migration, which will end up with an error 
in nova because the source_path is not set in the volume config.


Can anyone from the cinder team chime in about whether the ceph volume 
migration could be expanded to allow migration of 'in-use' volumes? Is 
there a reason not to allow migration of 'in-use' volumes?


[3] 
https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621


Cheers,
-melanie






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Boxiang Zhu


Jay and Melanie, it's my fault for letting you misunderstand the problem; I should 
have described it more clearly. My problem is not migrating volumes between 
two ceph clusters. 


I have two clusters: one is an OpenStack cluster (all-in-one env, hostname is dev) 
and the other is a ceph cluster. I will omit the OpenStack/ceph integration 
configuration [1]. The relevant part of cinder.conf is as follows:


[DEFAULT]
enabled_backends = rbd-1,rbd-2
..
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81
[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81


There will be two hosts named dev@rbd-1#ceph and dev@rbd-2#ceph.
Then I create a volume type named 'ceph' with the command 'cinder type-create 
ceph' and add extra_spec 'volume_backend_name=ceph' for it with the command 
'cinder type-key  set volume_backend_name=ceph'. 
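
Spelled out as commands (the type ID, elided above, is shown as a
placeholder), that setup and the failing migration look roughly like this:

    cinder type-create ceph
    cinder type-key <type-id> set volume_backend_name=ceph

    # create a volume of that type, attach it to the VM, then attempt the
    # migration between the two backend hosts
    cinder create --volume-type ceph --name test-vol 10
    nova volume-attach <server-id> <volume-id>
    cinder migrate <volume-id> dev@rbd-2#ceph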


I created a new VM and a new volume with type 'ceph' [so the volume will be 
created on one of the two hosts; I assume the volume was created on host 
dev@rbd-1#ceph this time]. The next step is to attach the volume to the VM. 
Finally I want to migrate the volume from host dev@rbd-1#ceph to host 
dev@rbd-2#ceph, but it fails with the exception 'NotImplementedError(_("Swap 
only supports host devices")'.


So my real problem is: is there any work underway to migrate an in-use 
(ceph rbd) volume from one host (pool) to another host (pool) in the same 
ceph cluster?
The only difference between the spec [2] and my scenario is that the spec 
covers available volumes while mine are in-use.




[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Cheers,
Boxiang
On 10/21/2018 23:19, Jay S. Bryant wrote:

Boxiang,

I have not heard any discussion of extending this functionality for Ceph to work 
between different Ceph clusters.  I wasn't aware, however, that the existing 
spec was limited to one Ceph cluster.  So, that is good to know.

I would recommend reaching out to Jon Bernard or Eric Harney for guidance on 
how to proceed.  They work closely with the Ceph driver and could provide 
insight.

Jay




On 10/19/2018 10:21 AM, Boxiang Zhu wrote:



Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec[1] 
is only for available volume migration between two pools from the same ceph 
cluster.
If the volume is in-use status[2], it will call the generic migration function. 
So that as you 
describe it, on the nova side, it raises NotImplementedError(_("Swap only 
supports host devices"). 
The get_config of net volume[3] has not source_path.


So does anyone try to succeed to migrate volume(in-use) with ceph backend or is 
anyone doing something of it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [cinder]ceph rbd replication group support

2018-10-21 Thread Jay S. Bryant
I would reach out to Lisa Li (lixiaoy1) on Cinder to see if this is 
something they may pick back up.  She has been more active in the 
community lately and may be able to look at this again or at least have 
good guidance for you.


Thanks!

Jay



On 10/19/2018 1:14 AM, 王俊 wrote:


Hi:

I have a question about the rbd replication group feature. I want to know the plan 
or roadmap for it. Is anybody working on it?


Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support


Thanks



Confidential: This message is intended only for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or forward it in any way, and notify the sender of the misdelivery. Thank you!





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Jay S. Bryant

Boxiang,

I have not heard any discussion of extending this functionality for Ceph 
to work between different Ceph clusters.  I wasn't aware, however, that 
the existing spec was limited to one Ceph cluster. So, that is good to know.


I would recommend reaching out to Jon Bernard or Eric Harney for 
guidance on how to proceed.  They work closely with the Ceph driver and 
could provide insight.


Jay


On 10/19/2018 10:21 AM, Boxiang Zhu wrote:


Hi melanie, thanks for your reply.

The version of my cinder and nova is Rocky. The scope of the cinder 
spec[1]
is only for available volume migration between two pools from the same 
ceph cluster.
If the volume is in-use status[2], it will call the generic migration 
function. So that as you
describe it, on the nova side, it raises NotImplementedError(_("Swap 
only supports host devices").

The get_config of net volume[3] has not source_path.

So does anyone try to succeed to migrate volume(in-use) with ceph 
backend or is anyone doing something of it?


[1] https://review.openstack.org/#/c/296150
[2] 
https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101



Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:


On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:

When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:


https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
is only for available volume migration between two pools from the same 
ceph cluster.
If the volume is in-use status[2], it will call the generic migration 
function. So that as you
describe it, on the nova side, it raises NotImplementedError(_("Swap 
only supports host devices").

The get_config of net volume[3] has not source_path.


Ah, OK, so you're trying to migrate a volume across two separate ceph 
clusters, and that is not supported.


So does anyone try to succeed to migrate volume(in-use) with ceph 
backend or is anyone doing something of it?


Hopefully someone can share their experience with trying to migrate 
volumes across separate ceph clusters. I unfortunately don't know 
anything about it.


Best,
-melanie


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread Boxiang Zhu


Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec [1] 
only covers migration of available volumes between two pools in the same ceph 
cluster.
If the volume is in the in-use status [2], the generic migration function is 
called, so, as you describe, on the nova side it raises 
NotImplementedError(_("Swap only supports host devices")). 
The get_config of the net volume [3] has no source_path.


So has anyone succeeded in migrating an in-use volume with the ceph backend, 
or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm. 
I can migrate the volume(in-use) from one host to another. The nova 
libvirt will call the 'rebase' to finish it. But if using ceph backend, 
it raises exception 'Swap only supports host devices'. So now it does 
not support to migrate volume(in-use). Does anyone do this work now? Or 
Is there any way to let me migrate volume(in-use) with ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to 
enable migration of in-use volumes with ceph semi-recently (Queens).


On the nova side, the code looks for the source_path in the volume 
config, and if there is not one present, it raises 
NotImplementedError(_("Swap only supports host devices"). So in your 
environment, the volume configs must be missing a source_path.


If you are using at least Queens version, then there must be something 
additional missing that we would need to do to make the migration work.


[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie
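
For reference, a rough paraphrase of the nova-side guard described above;
this is not the literal nova code, just the shape of the check. RBD-backed
volumes simply do not carry a source_path in their connection config, which
is why they hit this error.

    def swap_volume_guard(new_volume_config):
        # nova's libvirt swap_volume path only handles volumes that expose
        # a host block device via source_path; network volumes such as RBD
        # do not set one
        source_path = new_volume_config.get('source_path')
        if source_path is None:
            raise NotImplementedError("Swap only supports host devices")
        return source_path


    swap_volume_guard({'source_path': '/dev/sdb'})   # LVM/iSCSI-style: ok
    try:
        swap_volume_guard({})                         # RBD-style config
    except NotImplementedError as exc:
        print(exc)                                    # the error reported here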





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]ceph rbd replication group support

2018-10-19 Thread 王俊
Hi:
I have a question about the rbd replication group feature. I want to know the plan or 
roadmap for it. Is anybody working on it?
Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support

Thanks


Confidential: This message is intended only for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or forward it in any way, and notify the sender of the misdelivery. Thank you!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-18 Thread Boxiang Zhu


Hi folks,
When I use the LVM backend to create a volume and then attach it to a VM, I 
can migrate the in-use volume from one host to another; the nova libvirt driver 
calls 'rebase' to finish it. But with the ceph backend it raises the exception 
'Swap only supports host devices', so migrating an in-use volume is not 
supported. Is anyone working on this now? Or is there any way to let me 
migrate an in-use volume with the ceph backend?


Cheers,
Boxiang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][swift] FOSDEM Call for Participation: Software Defined Storage devroom

2018-10-12 Thread Niels de Vos
CfP for the Software Defined Storage devroom at FOSDEM 2019 (Brussels,
Belgium, February 3rd).

FOSDEM is a free software event that offers open source communities a
place to meet, share ideas and collaborate. It is renowned for being
highly developer-oriented and brings together 8000+ participants from
all over the world.  It is held in the city of Brussels (Belgium).

FOSDEM 2019 will take place during the weekend of February 2nd-3rd 2019.
More details about the event can be found at http://fosdem.org/

** Call For Participation

The Software Defined Storage devroom will go into its third round of
talks around open source Software Defined Storage projects, management
tools and real-world deployments.

Presentation topics could include but are not limited to:

- Your work on a SDS project like Ceph, Gluster, OpenEBS or LizardFS

- Your work on or with SDS related projects like SWIFT or Container
  Storage Interface

- Management tools for SDS deployments

- Monitoring tools for SDS clusters

** Important dates:

- Nov 25th 2018:  submission deadline for talk proposals
- Dec 17th 2018:  announcement of the final schedule
- Feb  3rd 2019:  Software Defined Storage dev room

Talk proposals will be reviewed by a steering committee:
- Niels de Vos (Gluster Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- other volunteers TBA

Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM19

- If necessary, create a Pentabarf account and activate it.
  Please reuse your account from previous years if you have already
  created it.

- In the "Person" section, provide First name, Last name
  (in the "General" tab), Email (in the "Contact" tab) and Bio
  ("Abstract" field in the "Description" tab).

- Submit a proposal by clicking on "Create event".

- Important! Select the "Software Defined Storage devroom" track (on the
  "General" tab).

- Provide the title of your talk ("Event title" in the "General" tab).

- Provide a description of the subject of the talk and the intended
  audience (in the "Abstract" field of the "Description" tab)

- Provide a rough outline of the talk or goals of the session (a short
  list of bullet points covering topics that will be discussed) in the
  "Full description" field in the "Description" tab

- Provide an expected length of your talk in the "Duration" field. Please
  count at least 10 minutes of discussion into your proposal plus allow
  5 minutes for the handover to the next presenter.
  Suggested talk length would be 20+10 and 45+15 minutes.

** Recording of talks

The FOSDEM organizers plan to have live streaming and recording fully
working, both for remote/later viewing of talks, and so that people can
watch streams in the hallways when rooms are full. This requires
speakers to consent to being recorded and streamed. If you plan to be a
speaker, please understand that by doing so you implicitly give consent
for your talk to be recorded and streamed. The recordings will be
published under the same license as all FOSDEM content (CC-BY).

Hope to hear from you soon! And please forward this announcement.

If you have any further questions, please write to the mailinglist at
storage-devr...@lists.fosdem.org and we will try to answer as soon as
possible.

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-09 Thread Erlon Cruz
Hi Ghanshyam,


Though I have concern over running those tests by default(making config
> options True by default), because it is not confirmed all cinder backends
> implements this functionality and it only works for nova libvirt driver. We
> need to keep config options default as False and Devstack/CI can make it
> True to run the tests.
>
>
The discussion at the PTG was about whether we should run this in the gate and
actually break the CIs. Once that happens, vendors will have 3 options:

#1: fix their drivers by properly implementing volume_extend and run
the positive tests
#2: fix their drivers by reporting that they do not support volume_extend
and run the negative tests
#3: disable the volume extend tests entirely (not recommended), but this
still gives us a hint on whether the vendor supports this or not


> If this feature becomes mandatory functionality (or cinder say standard
> feature i think) to implement for every backends and it work with all nova
> driver also(in term of instance action events) then, we can enable this
> feature tests by default. But until then, we should keep them disable by
> default in Tempest but we can enable them on gate via Devstack (patch you
> mentioned) and test them daily on integrated-gate.
>

It's not mandatory for a driver to implement online_extend, but if the
driver does not support it, the driver should report so.


> Overall, I am ok with Devstack change to make these tests enable for every
> Cinder backends but we need to keep the config options false in Tempest.
>

So, the outcome from the PTG was that we would first merge the tempest tests
and give vendors time to get their drivers fixed. Then we would change it
in devstack to push vendors to fix their drivers in case they hadn't done
so.

Erlon
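
For anyone who wants to try this before the defaults change, enabling the
tests locally looks roughly like the following; the option names follow the
tempest/devstack patches under review in this thread and may differ from
what finally merges:

    # tempest.conf -- opt in to the online-extend tests for a backend that
    # supports extending attached volumes
    [volume-feature-enabled]
    extend_attached_volume = True

    # devstack local.conf -- have devstack write the same option for you
    [[test-config|$TEMPEST_CONFIG]]
    [volume-feature-enabled]
    extend_attached_volume = True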



>
> I will review those patch and leave comments on gerrit (i saw those patch
> introduce new config option than using the existing one)
>
> -gmann
>
>  > Please let us know if you have any question or concerns about it.
>  > Kind regards,Erlon_[1]
> https://review.openstack.org/#/c/572188/[2]
> https://review.openstack.org/#/c/578463/
> __
>  > OpenStack Development Mailing List (not for usage questions)
>  > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  >
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Jay S Bryant



On 10/8/2018 8:54 AM, Sean McGinnis wrote:

On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:

In Denver, we agreed to add a new "re-image" API in cinder to support
volume-backed server rebuild with a new image.

An initial blueprint has been drafted in [3], welcome to review it, thanks.
: )

[snip]

The "force" parameter idea comes from [4], means that
1. we can re-image an "available" volume directly.
2. we can't re-image "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with "force"
parameter.

And it means nova needs to always call the re-image API with an extra "force"
parameter,
because the volume status is "in-use" or "reserve" when we rebuild the
server.

*So, what's your idea? Do we really want to add this "force" parameter?*


I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.
I concur with Sean's assessment.  I think putting a safety switch in 
place in this design is important to ensure that people using the API 
directly are less likely to do something that they may not actually want 
to do.


Jay

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4]
https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Matt Riedemann

On 10/9/2018 8:04 AM, Erlon Cruz wrote:
If you are planning to re-image a bootable volume then yes, you should 
use a force parameter. I missed the discussion about this at the PTG. 
What are the main use cases? This seems to me something that could be 
leveraged with the current revert-to-snapshot API, which would be even 
better. The flow would be:


1 - create a volume from image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your use cases?


As the spec mentions, this is for enabling re-imaging the root volume on 
a server when nova rebuilds the server. That is not allowed today 
because the compute service can't re-image the root volume. We don't 
want to jump through a bunch of gross alternative hoops to create a new 
root volume with the new image and swap them out (the reasons why are in 
the spec, and have been discussed previously in the ML). So nova is 
asking cinder to provide an API to change the image in a volume which 
the nova rebuild operation will use to re-image the root volume on a 
volume-backed server. I don't know if revert-to-snapshot solves that use 
case, but it doesn't sound like it. With the nova rebuild API, the user 
provides an image reference and that is used to re-image the root disk 
on the server. So it might not be a snapshot, it could be something new.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-09 Thread Erlon Cruz
If you are planning to re-image a bootable volume then yes, you should use a
force parameter. I missed the discussion about this at the PTG. What are the
main use cases? This seems to me something that could be leveraged with the
current revert-to-snapshot API, which would be even better. The flow would be:

1 - create a volume from image
2 - create a snapshot
3 - do whatever you want
4 - revert the snapshot

Would that help in your use cases?
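
Something like this with the CLIs, as a rough sketch (assuming the
revert-to-snapshot call added in volume API microversion 3.40; names and
sizes are just examples):

    # create a bootable volume from an image
    openstack volume create --image cirros --size 1 test-vol
    # snapshot it before making changes
    openstack volume snapshot create --volume test-vol test-snap
    # ... use the volume ...
    # revert the volume back to the snapshot
    cinder --os-volume-api-version 3.40 revert-to-snapshot test-snap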

On Mon, Oct 8, 2018 at 10:54, Sean McGinnis 
wrote:

> On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> > In Denver, we agree to add a new "re-image" API in cinder to support
> upport
> > volume-backed server rebuild with a new image.
> >
> > An initial blueprint has been drafted in [3], welcome to review it,
> thanks.
> > : )
> >
> > [snip]
> >
> > The "force" parameter idea comes from [4], means that
> > 1. we can re-image an "available" volume directly.
> > 2. we can't re-image "in-use"/"reserved" volume directly.
> > 3. we can only re-image an "in-use"/"reserved" volume with "force"
> > parameter.
> >
> > And it means nova need to always call re-image API with an extra "force"
> > parameter,
> > because the volume status is "in-use" or "reserve" when we rebuild the
> > server.
> >
> > *So, what's you idea? Do we really want to add this "force" parameter?*
> >
>
> I would prefer we have the "force" parameter, even if it is something that
> will
> always be defaulted to True from Nova.
>
> Having this exposed as a REST API means anyone could call it, not just Nova
> code. So as protection from someone doing something that they are not
> really
> clear on the full implications of, having a flag in there to guard volumes
> that
> are already attached or reserved for shelved instances is worth the very
> minor
> extra overhead.
>
> > [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> > [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild
> L12
> > [3] https://review.openstack.org/#/c/605317
> > [4]
> >
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> >
> > Regards,
> > Yikun
> > 
> > Jiang Yikun(Kero)
> > Mail: yikunk...@gmail.com
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Sean McGinnis
On Mon, Oct 08, 2018 at 03:09:36PM +0800, Yikun Jiang wrote:
> In Denver, we agree to add a new "re-image" API in cinder to support upport
> volume-backed server rebuild with a new image.
> 
> An initial blueprint has been drafted in [3], welcome to review it, thanks.
> : )
> 
> [snip]
> 
> The "force" parameter idea comes from [4], means that
> 1. we can re-image an "available" volume directly.
> 2. we can't re-image "in-use"/"reserved" volume directly.
> 3. we can only re-image an "in-use"/"reserved" volume with "force"
> parameter.
> 
> And it means nova need to always call re-image API with an extra "force"
> parameter,
> because the volume status is "in-use" or "reserve" when we rebuild the
> server.
> 
> *So, what's you idea? Do we really want to add this "force" parameter?*
> 

I would prefer we have the "force" parameter, even if it is something that will
always be defaulted to True from Nova.

Having this exposed as a REST API means anyone could call it, not just Nova
code. So as protection from someone doing something that they are not really
clear on the full implications of, having a flag in there to guard volumes that
are already attached or reserved for shelved instances is worth the very minor
extra overhead.
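
Roughly the kind of guard I am picturing, as an illustrative sketch only (not
actual Cinder code):

    def reimage_allowed(status, force=False):
        # available volumes can always be re-imaged
        if status == 'available':
            return True
        # attached or reserved (shelved) volumes need the explicit force flag
        if status in ('in-use', 'reserved'):
            return force
        return False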

> [1] https://etherpad.openstack.org/p/nova-ptg-stein L483
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
> [3] https://review.openstack.org/#/c/605317
> [4]
> https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75
> 
> Regards,
> Yikun
> 
> Jiang Yikun(Kero)
> Mail: yikunk...@gmail.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [nova] Do we need a "force" parameter in cinder "re-image" API?

2018-10-08 Thread Yikun Jiang
In Denver, we agreed to add a new "re-image" API in cinder to support
volume-backed server rebuild with a new image.

An initial blueprint has been drafted in [3], welcome to review it, thanks.
: )

The API is very simple, something like:

URL:

  POST /v3/{project_id}/volumes/{volume_id}/action

Request body:

  {
      'os-reimage': {
          'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90"
      }
  }

The question is: do we need a "force" parameter in the request body? Like:
  {
      'os-reimage': {
          'image_id': "71543ced-a8af-45b6-a5c4-a46282108a90",
          'force': True
      }
  }

The "force" parameter idea comes from [4], means that
1. we can re-image an "available" volume directly.
2. we can't re-image "in-use"/"reserved" volume directly.
3. we can only re-image an "in-use"/"reserved" volume with "force"
parameter.

And it means nova need to always call re-image API with an extra "force"
parameter,
because the volume status is "in-use" or "reserve" when we rebuild the
server.

*So, what's you idea? Do we really want to add this "force" parameter?*

[1] https://etherpad.openstack.org/p/nova-ptg-stein L483
[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday-rebuild L12
[3] https://review.openstack.org/#/c/605317
[4]
https://review.openstack.org/#/c/605317/1/specs/stein/add-volume-re-image-api.rst@75

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-07 Thread Ghanshyam Mann
  On Sat, 06 Oct 2018 01:42:11 +0900 Erlon Cruz  wrote 
 
 > Hey folks,
 > Following up on the discussions that we had on the Denver PTG, the Cinder 
 > team is planning to enable online volume_extend tests[1] to be run by 
 > default. Currently, those tests are only run by some CI systems and infra 
 > jobs that explicitly set it to be so.
 > We are also adding a negative test and an associated option in tempest[2] 
 > to allow vendor drivers that do not support online extending to be tested. 
 > This patch will be merged first and, after a reasonable time for people to 
 > check whether their backends support that or not, we will proceed and merge 
 > the devstack patch[1], triggering the tests in all CIs and infra jobs.

Thanks Erlon. +1 on running those tests on gate.  

Though I have a concern over running those tests by default (making the config 
options True by default), because it is not confirmed that all Cinder backends 
implement this functionality and it only works for the Nova libvirt driver. We 
need to keep the config options' default as False, and Devstack/CI can make 
them True to run the tests. 
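
For example, a CI whose backend supports it would set something like this
(sketch only, using the existing option name; the new option proposed in [2]
is still under review):

    # tempest.conf, set by Devstack/CI rather than by the Tempest default
    [volume-feature-enabled]
    extend_attached_volume = True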

If this feature becomes mandatory functionality (or what Cinder calls a 
standard feature, I think) to implement for every backend, and it also works 
with all Nova drivers (in terms of instance action events), then we can enable 
these feature tests by default. But until then, we should keep them disabled by 
default in Tempest; we can enable them on the gate via Devstack (the patch you 
mentioned) and test them daily on the integrated gate. 

Overall, I am OK with the Devstack change to make these tests enabled for every 
Cinder backend, but we need to keep the config options false in Tempest. 

I will review those patches and leave comments on Gerrit (I saw those patches 
introduce a new config option rather than using the existing one).

-gmann

 > Please let us know if you have any question or concerns about it.
 > Kind regards,Erlon_[1] 
 > https://review.openstack.org/#/c/572188/[2] 
 > https://review.openstack.org/#/c/578463/ 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-05 Thread Erlon Cruz
Hey folks,

Following up on the discussions that we had at the Denver PTG, the Cinder team
is planning to enable online volume_extend tests[1] to be run by default.
Currently, those tests are only run by some CI systems and infra jobs that
explicitly set it to be so.

We are also adding a negative test and an associated option in tempest[2] to
allow vendor drivers that do not support online extending to be tested. This
patch will be merged first and, after a reasonable time for people to check
whether their backends support that or not, we will proceed and merge the
devstack patch[1], triggering the tests in all CIs and infra jobs.

Please let us know if you have any question or concerns about it.

Kind regards,
Erlon
_
[1] https://review.openstack.org/#/c/572188/
[2] https://review.openstack.org/#/c/578463/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Matt Riedemann

On 10/3/2018 9:45 AM, Jay S. Bryant wrote:

Team,

We had discussed the possibility of adding Gorka to the stable core team 
during the PTG.  He does review a number of our backport patches and is 
active in that area.


If there are no objections in the next week I will add him to the list.

Thanks!

Jay (jungleboyj)


+1 from me in the stable-maint-core peanut gallery.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Sean McGinnis
On Wed, Oct 03, 2018 at 09:45:25AM -0500, Jay S. Bryant wrote:
> Team,
> 
> We had discussed the possibility of adding Gorka to the stable core team
> during the PTG.  He does review a number of our backport patches and is
> active in that area.
> 
> If there are no objections in the next week I will add him to the list.
> 
> Thanks!
> 
> Jay (jungleboyj)
> 

+1 from me. Gorka has shown to understand the stable policies and I think his
coming from a company that has a vested interest in stable backports would make
him a good candidate for stable core.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Ivan Kolodyazhny
+1 from me to Gorka!



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Wed, Oct 3, 2018 at 5:47 PM Jay S. Bryant  wrote:

> Team,
>
> We had discussed the possibility of adding Gorka to the stable core team
> during the PTG.  He does review a number of our backport patches and is
> active in that area.
>
> If there are no objections in the next week I will add him to the list.
>
> Thanks!
>
> Jay (jungleboyj)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Proposing Gorka Eguileor to Stable Core ...

2018-10-03 Thread Jay S. Bryant

Team,

We had discussed the possibility of adding Gorka to the stable core team 
during the PTG.  He does review a number of our backport patches and is 
active in that area.


If there are no objections in the next week I will add him to the list.

Thanks!

Jay (jungleboyj)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Follow-up on core team changes ...

2018-10-03 Thread Jay S. Bryant

Team,

I wanted to follow up on the note I sent a week or so ago about changes 
to the Core team.  I talked to Winston-D (Huang Zhiteng) and it sounded 
like he would not be able to take a more active role.  There were no 
other objections so I am removing him from the Core list.


John Griffith indicated an interest in staying on and thinks that he 
will be able to get more time for Cinder.  As a result we have decided 
to keep him on.


This leaves Cinder with 9 people on the core team.

Thanks!

Jay (jungleboyj)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-28 Thread Tobias Urdin

Thanks Sean!

I did a quick sanity check on the backup part in the puppet-cinder 
module and there is no opinionated

default value there which needs to be changed.

Best regards

On 09/27/2018 08:37 PM, Sean McGinnis wrote:

This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated the use of specifying the module for configuring
backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
the backwards compatibility handling for configs that still used the old way.

Looking through a quick search, it appears there may be some tools that are
still defaulting to setting the backup driver name using the patch. If your
project does not specify the full driver class path, please update these to do
so now.

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Mohammed Naser
Thanks for the email Sean.

https://review.openstack.org/605846 Fix Cinder backup to use full paths

I think this should cover us, please let me know if we did things right.

FYI: the docs all still seem to point at the old paths..

https://docs.openstack.org/cinder/latest/configuration/block-storage/backup-drivers.html
On Thu, Sep 27, 2018 at 2:33 PM Sean McGinnis  wrote:
>
> This probably applies to all deployment tools, so hopefully this reaches the
> right folks.
>
> In Havana, Cinder deprecated the use of specifying the module for configuring
> backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
> the backwards compatibility handling for configs that still used the old way.
>
> Looking through a quick search, it appears there may be some tools that are
> still defaulting to setting the backup driver name using the patch. If your
> project does not specify the full driver class path, please update these to do
> so now.
>
> Any questions, please reach out here or in the #openstack-cinder channel.
>
> Thanks!
> Sean
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-27 Thread Sean McGinnis
This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated the use of specifying the module for configuring
backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
the backwards compatibility handling for configs that still used the old way.

Looking through a quick search, it appears there may be some tools that are
still defaulting to setting the backup driver name using just the module path.
If your project does not specify the full driver class path, please update
these to do so now.
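
For example, for the Swift backup driver (an illustrative cinder.conf snippet;
check the backup-drivers documentation for the full class name of your driver):

    [DEFAULT]
    # old module-only form, no longer handled:
    #backup_driver = cinder.backup.drivers.swift
    # new form with the full driver class path:
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver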

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Lance Bragstad
For those who may be following along and are not familiar with what we mean
by federated auto-provisioning, see [0].

[0]
https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning

On Wed, Sep 26, 2018 at 9:06 AM Morgan Fainberg 
wrote:

> This discussion was also not about user assigned IDs, but predictable IDs
> with the auto provisioning. We still want it to be something keystone
> controls (locally). It might be hash domain ID and value from assertion (
> similar.to the LDAP user ID generator). As long as within an environment,
> the IDs are predictable when auto provisioning via federation, we should be
> good. And the problem of the totally unknown ID until provisioning could be
> made less of an issue for someone working within a massively federated edge
> environment.
>
> I don't want user/explicit admin set IDs.
>
> On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:
>
>> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
>> > Thanks for the summary, Ildiko. I have some questions inline.
>> >
>> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>> >
>> > 
>> >
>> >>
>> >> We agreed to prefer federation for Keystone and came up with two work
>> >> items to cover missing functionality:
>> >>
>> >> * Keystone to trust a token from an ID Provider master and when the
>> auth
>> >> method is called, perform an idempotent creation of the user, project
>> >> and role assignments according to the assertions made in the token
>> >
>> > This sounds like it is based on the customizations done at Oath, which
>> to my recollection did not use the actual federation implementation in
>> keystone due to its reliance on Athenz (I think?) as an identity manager.
>> Something similar can be accomplished in standard keystone with the mapping
>> API in keystone which can cause dynamic generation of a shadow user,
>> project and role assignments.
>> >
>> >> * Keystone should support the creation of users and projects with
>> >> predictable UUIDs (eg.: hash of the name of the users and projects).
>> >> This greatly simplifies Image federation and telemetry gathering
>> >
>> > I was in and out of the room and don't recall this discussion exactly.
>> We have historically pushed back hard against allowing setting a project ID
>> via the API, though I can see predictable-but-not-settable as less
>> problematic. One of the use cases from the past was being able to use the
>> same token in different regions, which is problematic from a security
>> perspective. Is that that idea here? Or could someone provide more details
>> on why this is needed?
>>
>> Hi Colleen,
>>
>> I wasn't in the room for this conversation either, but I believe the
>> "use case" wanted here is mostly a convenience one. If the edge
>> deployment is composed of hundreds of small Keystone installations and
>> you have a user (e.g. an NFV MANO user) which should have visibility
>> across all of those Keystone installations, it becomes a hassle to need
>> to remember (or in the case of headless users, store some lookup of) all
>> the different tenant and user UUIDs for what is essentially the same
>> user across all of those Keystone installations.
>>
>> I'd argue that as long as it's possible to create a Keystone tenant and
>> user with a unique name within a deployment, and as long as it's
>> possible to authenticate using the tenant and user *name* (i.e. not the
>> UUID), then this isn't too big of a problem. However, I do know that a
>> bunch of scripts and external tools rely on setting the tenant and/or
>> user via the UUID values and not the names, so that might be where this
>> feature request is coming from.
>>
>> Hope that makes sense?
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread James Penick
Hey Colleen,

>This sounds like it is based on the customizations done at Oath, which to
my recollection did not use the actual federation implementation in
keystone due to its reliance on Athenz (I think?) as an identity manager.
Something similar can be accomplished in standard keystone with the mapping
API in keystone which can cause dynamic generation of a shadow user,
project and role assignments.

You're correct, this was more about the general design of asymmetrical
token based authentication rather than our exact implementation with
Athenz. We didn't use the shadow users because Athenz authentication in our
implementation is done via an 'ntoken', which is Athenz' older method for
identification, so it was more straightforward for us to resurrect the
PKI driver. The new way is via mTLS, where the user can identify themselves
via a client cert. I imagine we'll need to move our implementation to use
shadow users as a part of that change.

>We have historically pushed back hard against allowing setting a project
ID via the API, though I can see predictable-but-not-settable as less
problematic.

Yup, predictable-but-not-settable is what we need. Basically as long as the
uuid is a hash of the string, we're good. I definitely don't want to be
able to set a user ID or project ID via API, because of the security and
operability problems that could arise. In my mind this would just be a
config setting.

>One of the use cases from the past was being able to use the same token in
different regions, which is problematic from a security perspective. Is
that that idea here? Or could someone provide more details on why this is
needed?

Well, sorta. As far as we're concerned you can authenticate to keystone
in each region independently using your credential from the IdP. Our use
cases are more about simplifying federation of other systems, like Glance.
Say I create an image and a member list for that image. I'd like to be able
to copy that image *and* all of its metadata straight across to another
cluster and have things Just Work without needing to look up and resolve
the new UUIDs on the new cluster.

However, for deployers who wish to use Keystone as their IdP, then in that
case they'll need to use that keystone credential to establish a credential
in the keystone cluster in that region.

-James

On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy  wrote:

> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
> 
>
> >
> > We agreed to prefer federation for Keystone and came up with two work
> > items to cover missing functionality:
> >
> > * Keystone to trust a token from an ID Provider master and when the auth
> > method is called, perform an idempotent creation of the user, project
> > and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
>
> > * Keystone should support the creation of users and projects with
> > predictable UUIDs (eg.: hash of the name of the users and projects).
> > This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that that idea here? Or could someone provide more details
> on why this is needed?
>
> Were there any volunteers to help write up specs and work on the
> implementations in keystone?
>
> 
>
> Colleen (cmurphy)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Giulio Fidente
hi,

thanks for sharing this!

In TripleO we're looking at implementing in Stein the deployment of at least
1 regional DC and N edge zones. More comments below.

On 9/25/18 11:21 AM, Ildiko Vancsa wrote:
> Hi,
>
> Hereby I would like to give you a short summary on the discussions
that happened at the PTG in the area of edge.
>
> The Edge Computing Group sessions took place on Tuesday where our main
activity was to draw an overall architecture diagram to capture the
basic setup and requirements of edge towards a set of OpenStack
services. Our main and initial focus was around Keystone and Glance, but
discussion with other project teams such as Nova, Ironic and Cinder also
happened later during the week.
>
> The edge architecture diagrams we drew are part of a so called Minimum
Viable Product (MVP) which refers to the minimalist nature of the setup
where we didn’t try to cover all aspects but rather define a minimum set
of services and requirements to get to a functional system. This
architecture will evolve further as we collect more use cases and
requirements.
>
> To describe edge use cases on a higher level with Mobile Edge as a use
case in the background we identified three main building blocks:
>
> * Main or Regional Datacenter (DC)
> * Edge Sites
> * Far Edge Sites or Cloudlets
>
> We examined the architecture diagram with the following user stories
in mind:
>
> * As a deployer of OpenStack I want to minimize the number of control
planes I need to manage across a large geographical region.
> * As a user of OpenStack I expect instance autoscale continues to
function in an edge site if connectivity is lost to the main datacenter.
> * As a deployer of OpenStack I want disk images to be pulled to a
cluster on demand, without needing to sync every disk image everywhere.
> * As a user of OpenStack I want to manage all of my instances in a
region (from regional DC to far edge cloudlets) via a single API endpoint.
>
> We concluded to talk about service requirements in two major categories:
>
> 1. The Edge sites are fully operational in case of a connection loss
between the Regional DC and the Edge site which requires control plane
services running on the Edge site
> 2. Having full control on the Edge site is not critical in case a
connection loss between the Regional DC and an Edge site which can be
satisfied by having the control plane services running only in the
Regional DC
>
> In the first case the orchestration of the services becomes harder and
is not necessarily solved yet, while in the second case you have
centralized control but losing functionality on the Edge sites in the
event of a connection loss.
>
> We did not discuss things such as HA at the PTG and we did not go into
details on networking during the architectural discussion either.

While TripleO used to rely on pacemaker to manage cinder-volume A/P in
the control plane, we'd like to push for cinder-volume A/A in the edge
zones and avoid the deployment of pacemaker there.

the safety of cinder-volume A/A seems to depend mostly on the backend
driver and for RBD we should be good
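
roughly the kind of per-zone configuration we have in mind (an illustrative
sketch only; option names taken from cinder's RBD driver and the
active-active "cluster" option):

    [DEFAULT]
    # same cluster name on every cinder-volume in the edge zone -> active/active
    cluster = edge-zone-1
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes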

> We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:
>
> * Keystone to trust a token from an ID Provider master and when the
auth method is called, perform an idempotent creation of the user,
project and role assignments according to the assertions made in the token
> * Keystone should support the creation of users and projects with
predictable UUIDs (eg.: hash of the name of the users and projects).
This greatly simplifies Image federation and telemetry gathering
>
> For Glance we explored image caching and spent some time discussing
the option to also cache metadata so a user can boot new instances at
the edge in case of a network connection loss which would result in
being disconnected from the registry:
>
> * I as a user of Glance, want to upload an image in the main
datacenter and boot that image in an edge datacenter. Fetch the image to
the edge datacenter with its metadata
>
> We are still in the progress of documenting the discussions and draw
the architecture diagrams and flows for Keystone and Glance.

for glance we'd like to deploy only one glance-api in the regional dc
and configure glance/cache in each edge zone ... pointing all instances
to a shared database

this should solve the metadata problem and also provide for storage
"locality" into every edge zone

> In addition to the above we went through Dublin PTG wiki
(https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG)
capturing requirements:
>
> * we agreed to consider the list of requirements on the wiki finalized
for now
> * agreed to move there the additional requirements listed on the Use
Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases)
wiki page
>
> For the details on the discussions with related OpenStack projects you
can check the following etherpads for notes:
>
> * Cinder:

Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Morgan Fainberg
This discussion was also not about user-assigned IDs, but predictable IDs
with auto provisioning. We still want it to be something keystone
controls (locally). It might be a hash of the domain ID and a value from the
assertion (similar to the LDAP user ID generator). As long as, within an
environment, the IDs are predictable when auto provisioning via federation,
we should be good. And the problem of the totally unknown ID until
provisioning could be made less of an issue for someone working within a
massively federated edge environment.

I don't want user/explicit admin set IDs.
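
Just to make the idea concrete, something along these lines (an illustrative
sketch, not a proposed implementation):

    import uuid

    def predictable_id(domain_id, name):
        # Illustrative only: a deterministic, keystone-controlled ID derived
        # from the domain ID plus the name asserted by the identity provider,
        # so auto provisioning yields the same ID on every edge keystone.
        return uuid.uuid5(uuid.NAMESPACE_URL, domain_id + '/' + name).hex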

On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:

> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> > Thanks for the summary, Ildiko. I have some questions inline.
> >
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
> >
> > 
> >
> >>
> >> We agreed to prefer federation for Keystone and came up with two work
> >> items to cover missing functionality:
> >>
> >> * Keystone to trust a token from an ID Provider master and when the auth
> >> method is called, perform an idempotent creation of the user, project
> >> and role assignments according to the assertions made in the token
> >
> > This sounds like it is based on the customizations done at Oath, which
> to my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
> >
> >> * Keystone should support the creation of users and projects with
> >> predictable UUIDs (eg.: hash of the name of the users and projects).
> >> This greatly simplifies Image federation and telemetry gathering
> >
> > I was in and out of the room and don't recall this discussion exactly.
> We have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that that idea here? Or could someone provide more details
> on why this is needed?
>
> Hi Colleen,
>
> I wasn't in the room for this conversation either, but I believe the
> "use case" wanted here is mostly a convenience one. If the edge
> deployment is composed of hundreds of small Keystone installations and
> you have a user (e.g. an NFV MANO user) which should have visibility
> across all of those Keystone installations, it becomes a hassle to need
> to remember (or in the case of headless users, store some lookup of) all
> the different tenant and user UUIDs for what is essentially the same
> user across all of those Keystone installations.
>
> I'd argue that as long as it's possible to create a Keystone tenant and
> user with a unique name within a deployment, and as long as it's
> possible to authenticate using the tenant and user *name* (i.e. not the
> UUID), then this isn't too big of a problem. However, I do know that a
> bunch of scripts and external tools rely on setting the tenant and/or
> user via the UUID values and not the names, so that might be where this
> feature request is coming from.
>
> Hope that makes sense?
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Jay Pipes

On 09/26/2018 05:10 AM, Colleen Murphy wrote:

Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:





We agreed to prefer federation for Keystone and came up with two work
items to cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth
method is called, perform an idempotent creation of the user, project
and role assignments according to the assertions made in the token


This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.


* Keystone should support the creation of users and projects with
predictable UUIDs (eg.: hash of the name of the users and projects).
This greatly simplifies Image federation and telemetry gathering


I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that that idea 
here? Or could someone provide more details on why this is needed?


Hi Colleen,

I wasn't in the room for this conversation either, but I believe the 
"use case" wanted here is mostly a convenience one. If the edge 
deployment is composed of hundreds of small Keystone installations and 
you have a user (e.g. an NFV MANO user) which should have visibility 
across all of those Keystone installations, it becomes a hassle to need 
to remember (or in the case of headless users, store some lookup of) all 
the different tenant and user UUIDs for what is essentially the same 
user across all of those Keystone installations.


I'd argue that as long as it's possible to create a Keystone tenant and 
user with a unique name within a deployment, and as long as it's 
possible to authenticate using the tenant and user *name* (i.e. not the 
UUID), then this isn't too big of a problem. However, I do know that a 
bunch of scripts and external tools rely on setting the tenant and/or 
user via the UUID values and not the names, so that might be where this 
feature request is coming from.


Hope that makes sense?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Colleen Murphy
Thanks for the summary, Ildiko. I have some questions inline.

On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:



> 
> We agreed to prefer federation for Keystone and came up with two work 
> items to cover missing functionality:
> 
> * Keystone to trust a token from an ID Provider master and when the auth 
> method is called, perform an idempotent creation of the user, project 
> and role assignments according to the assertions made in the token

This sounds like it is based on the customizations done at Oath, which to my 
recollection did not use the actual federation implementation in keystone due 
to its reliance on Athenz (I think?) as an identity manager. Something similar 
can be accomplished in standard keystone with the mapping API in keystone which 
can cause dynamic generation of a shadow user, project and role assignments.
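
For anyone following along, a mapping rule roughly of this shape (a simplified
sketch; the group ID is a placeholder and the real schema is in the federation
mapping docs) is what drives that shadow user creation and role assignment:

    [
        {
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"id": "FEDERATED_GROUP_ID"}}
            ],
            "remote": [
                {"type": "REMOTE_USER"}
            ]
        }
    ]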

> * Keystone should support the creation of users and projects with 
> predictable UUIDs (eg.: hash of the name of the users and projects). 
> This greatly simplifies Image federation and telemetry gathering

I was in and out of the room and don't recall this discussion exactly. We have 
historically pushed back hard against allowing setting a project ID via the 
API, though I can see predictable-but-not-settable as less problematic. One of 
the use cases from the past was being able to use the same token in different 
regions, which is problematic from a security perspective. Is that the idea 
here? Or could someone provide more details on why this is needed?

Were there any volunteers to help write up specs and work on the 
implementations in keystone?



Colleen (cmurphy)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-25 Thread Ildiko Vancsa
Hi,

Hereby I would like to give you a short summary on the discussions that 
happened at the PTG in the area of edge.

The Edge Computing Group sessions took place on Tuesday where our main activity 
was to draw an overall architecture diagram to capture the basic setup and 
requirements of edge towards a set of OpenStack services. Our main and initial 
focus was around Keystone and Glance, but discussion with other project teams 
such as Nova, Ironic and Cinder also happened later during the week.

The edge architecture diagrams we drew are part of a so called Minimum Viable 
Product (MVP) which refers to the minimalist nature of the setup where we 
didn’t try to cover all aspects but rather define a minimum set of services and 
requirements to get to a functional system. This architecture will evolve 
further as we collect more use cases and requirements.

To describe edge use cases on a higher level with Mobile Edge as a use case in 
the background we identified three main building blocks:

* Main or Regional Datacenter (DC)
* Edge Sites
* Far Edge Sites or Cloudlets

We examined the architecture diagram with the following user stories in mind:

* As a deployer of OpenStack I want to minimize the number of control planes I 
need to manage across a large geographical region.
* As a user of OpenStack I expect instance autoscale continues to function in 
an edge site if connectivity is lost to the main datacenter.
* As a deployer of OpenStack I want disk images to be pulled to a cluster on 
demand, without needing to sync every disk image everywhere.
* As a user of OpenStack I want to manage all of my instances in a region (from 
regional DC to far edge cloudlets) via a single API endpoint. 

We concluded to talk about service requirements in two major categories:

1. The Edge sites are fully operational in case of a connection loss between 
the Regional DC and the Edge site which requires control plane services running 
on the Edge site
2. Having full control on the Edge site is not critical in case a connection 
loss between the Regional DC and an Edge site which can be satisfied by having 
the control plane services running only in the Regional DC

In the first case the orchestration of the services becomes harder and is not 
necessarily solved yet, while in the second case you have centralized control 
but losing functionality on the Edge sites in the event of a connection loss.

We did not discuss things such as HA at the PTG and we did not go into details 
on networking during the architectural discussion either.

We agreed to prefer federation for Keystone and came up with two work items to 
cover missing functionality:

* Keystone to trust a token from an ID Provider master and when the auth method 
is called, perform an idempotent creation of the user, project and role 
assignments according to the assertions made in the token
* Keystone should support the creation of users and projects with predictable 
UUIDs (eg.: hash of the name of the users and projects). This greatly 
simplifies Image federation and telemetry gathering

For Glance we explored image caching and spent some time discussing the option 
to also cache metadata so a user can boot new instances at the edge in case of 
a network connection loss which would result in being disconnected from the 
registry:

* I as a user of Glance, want to upload an image in the main datacenter and 
boot that image in an edge datacenter. Fetch the image to the edge datacenter 
with its metadata

We are still in the progress of documenting the discussions and draw the 
architecture diagrams and flows for Keystone and Glance.


In addition to the above we went through Dublin PTG wiki 
(https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) 
capturing requirements:

* we agreed to consider the list of requirements on the wiki finalized for now
* agreed to move there the additional requirements listed on the Use Cases 
(https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page

For the details on the discussions with related OpenStack projects you can 
check the following etherpads for notes:

* Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
* Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture
* Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge
* Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture
* Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg
* Nova: https://etherpad.openstack.org/p/nova-ptg-stein

Notes from the StarlingX sessions: 
https://etherpad.openstack.org/p/stx-PTG-agenda


We are still working on the MVP architecture to clean it up and discuss 
comments and questions before moving it to a wiki page. Please let me know if 
you would like to get access to the document and I will share it with you.

Please let me know if you have any questions or comments to the above captured 
items.

Thanks and Best Regards,

[openstack-dev] [cinder][forum] Need Topics for Berlin Forum ...

2018-09-24 Thread Jay S Bryant

Team,

Just a reminder that we have an etherpad to plan topics for the Forum in 
Berlin [1].  We are short on topics right now so please take some time 
to think about what we should talk about.  I am also planning time for 
this discussion during our Wednesday meeting this week.


Thanks for taking time to consider topics!

Jay

(jungleboyj)

[1] https://etherpad.openstack.org/p/cinder-berlin-forum-proposals


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Mid-Cycle Planning ...

2018-09-21 Thread Jay S Bryant

Team,

As we discussed at the PTG I have started an etherpad to do some 
planning for a possible Cinder Mid-cycle meeting.  Please check out the 
etherpad [1] and leave your input.


Thanks!

Jay

[1] https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread Jay S Bryant



On 9/21/2018 12:06 PM, John Griffith wrote:




On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis > wrote:


On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
>
> In the last year we have had some changes to Core team
participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based
on that
> discussion I have reached out to John Griffith and Winston D
(Huang Zhiteng)
> and asked if they felt they could continue to be a part of the
Core Team.
> Both agreed that it was time to relinquish their titles.
>
> So, I am proposing to remove John Griffith and Winston D from
Cinder Core.
> If I hear no concerns with this plan in the next week I will
remove them.
>
> It is hard to remove people who have been so instrumental to the
early days
> of Cinder.  Your past contributions are greatly appreciated and
the team
> would be happy to have you back if circumstances every change.
>
> Sincerely,
> Jay Bryant
>

Really sad to see Winston go as he's been a long time member, but
I think over
the last several releases it's been obvious he's had other
priorities to
compete with. It would be great if that were to change some day.
He's made a
lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've
spoken
briefly. He definitely is off to other things now, but with how
deeply he has
been involved up until recently with things like the multiattach
implementation, replication, and other significant things, I would
much rather
have him around but less active than completely gone. Having a few
good reviews
is worth a lot.



I would propose we hold off on changing John's status for at least
a cycle. He
has indicated to me he would be willing to devote a little time to
still doing
reviews as his time allows, and I would hate to lose out on his
expertise on
changes to some things. Maybe we can give it a little more time
and see if his
other demands keep him too busy to participate and reevaluate later?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Everyone,

Now that I'm settling in on my other things I think I can still 
contribute a bit to Cinder on my own time.  I'm still pretty fond of 
OpenStack and Cinder so would love the opportunity to give it a cycle 
to see if I can balance things and still be helpful.


Thanks,
John

Sean,

Thank you for your input on this and for following up with John.

John,

Glad that you are settling into your new position and think some time 
will free up for Cinder again.  I would be happy to have your continued 
input.


I am removing you from consideration for removal.

Jay
(jungleboyj)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread John Griffith
On Fri, Sep 21, 2018 at 11:00 AM Sean McGinnis 
wrote:

> On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> > All,
> >
> > In the last year we have had some changes to Core team participation.
> This
> > was a topic of discussion at the PTG in Denver last week.  Based on that
> > discussion I have reached out to John Griffith and Winston D (Huang
> Zhiteng)
> > and asked if they felt they could continue to be a part of the Core
> Team.
> > Both agreed that it was time to relinquish their titles.
> >
> > So, I am proposing to remove John Griffith and Winston D from Cinder
> Core.
> > If I hear no concerns with this plan in the next week I will remove them.
> >
> > It is hard to remove people who have been so instrumental to the early
> days
> > of Cinder.  Your past contributions are greatly appreciated and the team
> > would be happy to have you back if circumstances every change.
> >
> > Sincerely,
> > Jay Bryant
> >
>
> Really sad to see Winston go as he's been a long time member, but I think
> over
> the last several releases it's been obvious he's had other priorities to
> compete with. It would be great if that were to change some day. He's made
> a
> lot of great contributions to Cinder over the years.
>
> I'm a little reluctant to make any changes with John though. We've spoken
> briefly. He definitely is off to other things now, but with how deeply he
> has
> been involved up until recently with things like the multiattach
> implementation, replication, and other significant things, I would much
> rather
> have him around but less active than completely gone. Having a few good
> reviews
> is worth a lot.
>


> I would propose we hold off on changing John's status for at least a
> cycle. He
> has indicated to me he would be willing to devote a little time to still
> doing
> reviews as his time allows, and I would hate to lose out on his expertise
> on
> changes to some things. Maybe we can give it a little more time and see if
> his
> other demands keep him too busy to participate and reevaluate later?
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Everyone,

Now that I'm settling in on my other things I think I can still contribute
a bit to Cinder on my own time.  I'm still pretty fond of OpenStack and
Cinder so would love the opportunity to give it a cycle to see if I can
balance things and still be helpful.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-21 Thread Sean McGinnis
On Wed, Sep 19, 2018 at 08:43:24PM -0500, Jay S Bryant wrote:
> All,
> 
> In the last year we have had some changes to Core team participation.  This
> was a topic of discussion at the PTG in Denver last week.  Based on that
> discussion I have reached out to John Griffith and Winston D (Huang Zhiteng)
> and asked if they felt they could continue to be a part of the Core Team. 
> Both agreed that it was time to relinquish their titles.
> 
> So, I am proposing to remove John Griffith and Winston D from Cinder Core. 
> If I hear no concerns with this plan in the next week I will remove them.
> 
> It is hard to remove people who have been so instrumental to the early days
> of Cinder.  Your past contributions are greatly appreciated and the team
> would be happy to have you back if circumstances ever change.
> 
> Sincerely,
> Jay Bryant
> 

Really sad to see Winston go as he's been a long time member, but I think over
the last several releases it's been obvious he's had other priorities to
compete with. It would be great if that were to change some day. He's made a
lot of great contributions to Cinder over the years.

I'm a little reluctant to make any changes with John though. We've spoken
briefly. He definitely is off to other things now, but with how deeply he has
been involved up until recently with things like the multiattach
implementation, replication, and other significant things, I would much rather
have him around but less active than completely gone. Having a few good reviews
is worth a lot.

I would propose we hold off on changing John's status for at least a cycle. He
has indicated to me he would be willing to devote a little time to still doing
reviews as his time allows, and I would hate to lose out on his expertise on
changes to some things. Maybe we can give it a little more time and see if his
other demands keep him too busy to participate and reevaluate later?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-19 Thread Jay S Bryant

All,

In the last year we have had some changes to Core team participation.  
This was a topic of discussion at the PTG in Denver last week.  Based on 
that discussion I have reached out to John Griffith and Winston D (Huang 
Zhiteng) and asked if they felt they could continue to be a part of the 
Core Team.  Both agreed that it was time to relinquish their titles.


So, I am proposing to remove John Griffith and Winston D from Cinder 
Core.  If I hear no concerns with this plan in the next week I will 
remove them.


It is hard to remove people who have been so instrumental to the early 
days of Cinder.  Your past contributions are greatly appreciated and the 
team would be happy to have you back if circumstances ever change.


Sincerely,
Jay Bryant

(jungleboyj)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 19/09, Jay S Bryant wrote:
> Gorka,
>
> Oh man!  Sorry for the duplication.  I will update the link on the Forum
> page if you are able to move your content over.  Think it will confused
> people less if we use the page I most recently sent out.  Does that make
> sense?
>
Hi Jay,

Yup, it makes sense.

I moved the contents and updated the wiki to point to your etherpad.

> Thanks for catching this mistake!
>

It was my mistake for not mentioning the existing etherpad during the
PTG... XD

Cheers,
Gorka.


> Jay
>
>
> On 9/19/2018 4:42 AM, Gorka Eguileor wrote:
> > On 18/09, Jay S Bryant wrote:
> > > Team,
> > >
> > > I have created an etherpad for our Forum Topic Planning:
> > > https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
> > >
> > > Please add your ideas to the etherpad.  Thank you!
> > >
> > > Jay
> > >
> > Hi Jay,
> >
> > After our last IRC meeting, a couple of weeks ago, I created an etherpad
> > [1] and added it to the Forum wiki [2] (though I failed to mention it).
> >
> > I had added a possible topic to this etherpad [1], but I can move it to
> > yours and update the wiki if you like.
> >
> > Cheers,
> > Gorka.
> >
> >
> > [1]: https://etherpad.openstack.org/p/cinder-forum-stein
> > [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Jay S Bryant

Gorka,

Oh man!  Sorry for the duplication.  I will update the link on the Forum 
page if you are able to move your content over.  Think it will confuse 
people less if we use the page I most recently sent out.  Does that make 
sense?


Thanks for catching this mistake!

Jay


On 9/19/2018 4:42 AM, Gorka Eguileor wrote:

On 18/09, Jay S Bryant wrote:

Team,

I have created an etherpad for our Forum Topic Planning:
https://etherpad.openstack.org/p/cinder-berlin-forum-proposals

Please add your ideas to the etherpad.  Thank you!

Jay


Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 18/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for our Forum Topic Planning:
> https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
>
> Please add your ideas to the etherpad.  Thank you!
>
> Jay
>

Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Berlin Forum Proposals

2018-09-18 Thread Jay S Bryant

Team,

I have created an etherpad for our Forum Topic Planning: 
https://etherpad.openstack.org/p/cinder-berlin-forum-proposals


Please add your ideas to the etherpad.  Thank you!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Clark Boylan
On Mon, Sep 17, 2018, at 8:53 AM, Jay S Bryant wrote:
> 
> 
> On 9/17/2018 10:46 AM, Sean McGinnis wrote:
> >>> Plan
> >>> 
> >>> We would now like to have the driverfixes/ocata branch deleted so there 
> >>> is no
> >>> confusion about where backports should go and we don't accidentally get 
> >>> these
> >>> out of sync again.
> >>>
> >>> Infra team, please delete this branch or let me know if there is a process
> >>> somewhere I should follow to have this removed.
> >> The first step is to make sure that all changes on the branch are in a non 
> >> open state (merged or abandoned). 
> >> https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
> >>  shows that there are no open changes.
> >>
> >> Next you will want to make sure that the commits on this branch are 
> >> preserved somehow. Git garbage collection will delete and cleanup commits 
> >> if they are not discoverable when working backward from some ref. This is 
> >> why our old stable branch deletion process required we tag the stable 
> >> branch as $release-eol first. Looking at `git log origin/driverfixes/ocata 
> >> ^origin/stable/ocata --no-merges --oneline` there are quite a few commits 
> >> on the driverfixes branch that are not on the stable branch, but that 
> >> appears to be due to cherry pick writing new commits. You have indicated 
> >> above that you believe the two branches are in sync at this point. A quick 
> >> sampling of commits seems to confirm this as well.
> >>
> >> If you can go ahead and confirm that you are ready to delete the 
> >> driverfixes/ocata branch I will go ahead and remove it.
> >>
> >> Clark
> >>
> > I did another spot check too to make sure I hadn't missed anything, but it 
> > does
> > appear to be as you stated that the cherry pick resulted in new commits and
> > they actually are in sync for our purposes.
> >
> > I believe we are ready to proceed.
> Sean,
> 
> Thank you for following up on this.  I agree it is a good idea to remove 
> the old driverfixes/ocata branch to avoid possible confusion in the future.
> 
> Clark,
> 
> Sean, myself and the team worked to carefully cherry-pick everything 
> that was needed in stable/ocata so I am confident that we are ready to 
> remove driverfixes/ocata.
> 

I have removed openstack/cinder driverfixes/ocata branch with HEAD 
a37cc259f197e1a515cf82deb342739a125b65c6.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Jay S Bryant



On 9/17/2018 10:46 AM, Sean McGinnis wrote:

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.

The first step is to make sure that all changes on the branch are in a non open 
state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark


I did another spot check too to make sure I hadn't missed anything, but it does
appear to be as you stated that the cherry pick resulted in new commits and
they actually are in sync for our purposes.

I believe we are ready to proceed.

Sean,

Thank you for following up on this.  I agree it is a good idea to remove 
the old driverfixes/ocata branch to avoid possible confusion in the future.


Clark,

Sean, myself and the team worked to carefully cherry-pick everything 
that was needed in stable/ocata so I am confident that we are ready to 
remove driverfixes/ocata.


Thanks!
Jay



Thanks for your help.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Sean McGinnis
> > 
> > Plan
> > 
> > We would now like to have the driverfixes/ocata branch deleted so there is 
> > no
> > confusion about where backports should go and we don't accidentally get 
> > these
> > out of sync again.
> > 
> > Infra team, please delete this branch or let me know if there is a process
> > somewhere I should follow to have this removed.
> 
> The first step is to make sure that all changes on the branch are in a non 
> open state (merged or abandoned). 
> https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
>  shows that there are no open changes.
> 
> Next you will want to make sure that the commits on this branch are preserved 
> somehow. Git garbage collection will delete and cleanup commits if they are 
> not discoverable when working backward from some ref. This is why our old 
> stable branch deletion process required we tag the stable branch as 
> $release-eol first. Looking at `git log origin/driverfixes/ocata 
> ^origin/stable/ocata --no-merges --oneline` there are quite a few commits on 
> the driverfixes branch that are not on the stable branch, but that appears to 
> be due to cherry pick writing new commits. You have indicated above that you 
> believe the two branches are in sync at this point. A quick sampling of 
> commits seems to confirm this as well.
> 
> If you can go ahead and confirm that you are ready to delete the 
> driverfixes/ocata branch I will go ahead and remove it.
> 
> Clark
> 

I did another spot check too to make sure I hadn't missed anything, but it does
appear to be as you stated that the cherry pick resulted in new commits and
they actually are in sync for our purposes.

I believe we are ready to proceed.

Thanks for your help.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Clark Boylan
On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote:
> Hello Cinder and Infra teams. Cinder needs some help from infra or some
> pointers on how to proceed.
> 
> tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
> fixes that no longer met the more restrictive phase II stable policy criteria.
> Extended maintenance has changed that and we want to delete driverfixes/ocata
> to make sure patches are going to the right place.
> 
> Background
> --
> Before the extended maintenance changes, the Cinder team found a lot of 
> vendors
> were maintaining their own forks to keep backported driver fixes that we were
> not allowing upstream due to the stable policy being more restrictive for 
> older
> (or deleted) branches. We created the driverfixes/* branches as a central 
> place
> for these to go so distros would have one place to grab these fixes, if they
> chose to do so.
> 
> This has worked great IMO, and we do occasionally still have things that need
> to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot 
> of
> fixes to driverfixes/ocata, but with the changes to stable policy with 
> extended
> maintenance, that is no longer needed.
> 
> Extended Maintenance Changes
> 
> With things being somewhat relaxed with the extended maintenance changes, we
> are now able to backport bug fixes to stable/ocata that we couldn't before and
> we don't have to worry as much about that branch being deleted.
> 
> I had gone through and identified all patches backported to driverfixes/ocata
> but not stable/ocata and cherry-picked them over to get the two branches in
> sync. The stable/ocata should now be identical or ahead of driverfixes/ocata
> and we want to make sure nothing more gets accidentally merged to
> driverfixes/ocata instead of the official stable branch.
> 
> Plan
> 
> We would now like to have the driverfixes/ocata branch deleted so there is no
> confusion about where backports should go and we don't accidentally get these
> out of sync again.
> 
> Infra team, please delete this branch or let me know if there is a process
> somewhere I should follow to have this removed.

The first step is to make sure that all changes on the branch are in a non open 
state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.
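For what it's worth, a quick way to double check that kind of equivalence
(just a sketch, not something I ran here) is to compare patch ids rather than
commit hashes, so the rewritten cherry-pick SHAs don't matter:

# "git cherry" prefixes with "+" any commit on driverfixes/ocata whose patch
# is not also present on stable/ocata; if nothing is printed, the echo runs.
git fetch origin
git cherry -v origin/stable/ocata origin/driverfixes/ocata | grep '^+' \
    || echo "all driverfixes/ocata patches are present on stable/ocata"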

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][infra] Remove driverfixes/ocata branch

2018-09-17 Thread Sean McGinnis
Hello Cinder and Infra teams. Cinder needs some help from infra or some
pointers on how to proceed.

tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
fixes that no longer met the more restrictive phase II stable policy criteria.
Extended maintenance has changed that and we want to delete driverfixes/ocata
to make sure patches are going to the right place.

Background
--
Before the extended maintenance changes, the Cinder team found a lot of vendors
were maintaining their own forks to keep backported driver fixes that we were
not allowing upstream due to the stable policy being more restrictive for older
(or deleted) branches. We created the driverfixes/* branches as a central place
for these to go so distros would have one place to grab these fixes, if they
chose to do so.

This has worked great IMO, and we do occasionally still have things that need
to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of
fixes to driverfixes/ocata, but with the changes to stable policy with extended
maintenance, that is no longer needed.

Extended Maintenance Changes

With things being somewhat relaxed with the extended maintenance changes, we
are now able to backport bug fixes to stable/ocata that we couldn't before and
we don't have to worry as much about that branch being deleted.

I had gone through and identified all patches backported to driverfixes/ocata
but not stable/ocata and cherry-picked them over to get the two branches in
sync. The stable/ocata should now be identical or ahead of driverfixes/ocata
and we want to make sure nothing more gets accidentally merged to
driverfixes/ocata instead of the official stable branch.

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.

Thanks!
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][ptg] Team Photos Posted ...

2018-09-15 Thread Jay S Bryant

Team,

Wanted to share the team photos from the PTG.  You can get them here: 
https://www.dropbox.com/sh/2pmvfkstudih2wf/AADynEnPDJiWIOE2nwjzBgtla/Cinder?dl=0_nav_tracking=1


Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][ptg] Topics scheduled for next week ...

2018-09-11 Thread Gorka Eguileor
On 07/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for each of the days of the PTG and split out the
> proposed topics from the planning etherpad into the individual days for
> discussion: [1] [2] [3]
>
> If you want to add an additional topic please add it to Friday or find some
> time on one of the other days.
>
> I look forward to discussing all these topics with you all next week.
>
> Thanks!
>
> Jay

Thanks Jay.

I have added to the Cinder general etherpad the shared_target discussion
topic, as I believe we should be discussing it in the Cinder room first
before Thursday's meeting with Nova.

I saw that on Wednesday the 2:30 to 3:00 privsep topic is a duplicate of
the 12:00 to 12:30 slot, so I have taken the liberty of replacing it
with the shared_targets one.  I hope that's alright.

Cheers,
Gorka.

>
> [1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday
>
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday
>
> [3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][ptg] Topics scheduled for next week ...

2018-09-07 Thread Jay S Bryant

Team,

I have created an etherpad for each of the days of the PTG and split out 
the proposed topics from the planning etherpad into the individual days 
for discussion: [1] [2] [3]


If you want to add an additional topic please add it to Friday or find 
some time on one of the other days.


I look forward to discussing all these topics with you all next week.

Thanks!

Jay

[1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday

[2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday

[3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][placement] Room Scheduled for Cinder Placement Discussion ...

2018-09-07 Thread Jay S Bryant

All,

The results of the Doodle poll suggested that the end of the day Tuesday 
was the best option for us all to get together. [1]


I have scheduled the Big Thompson Room on Tuesday from 15:15 to 17:00.

I hope we can all get together there and then to have a good discussion.

Thanks!

Jay

[1] https://doodle.com/poll/4twwhy46bxerrthx#table


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova][placement] Doodle Calendar Created for Placement Discussion

2018-09-06 Thread Jay S Bryant

All,

We discussed in our weekly meeting yesterday that it might be good to 
plan an additional meeting at the PTG to continue discussions with 
regards to Cinder's use of the Placement Service.


I have looked at the room schedule [1] and there are quite a few open 
rooms on Monday.  Fewer rooms on Tuesday but there are still some 
options each day.


Please fill out the poll [2] ASAP if you are interested in attending, and 
then I will reserve a room as soon as it looks like we have quorum.


Thank you!

Jay

[1] http://ptg.openstack.org/ptg.html

[2] https://doodle.com/poll/4twwhy46bxerrthx


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc3 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

https://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc2 (rocky)

2018-08-22 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

https://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][manila] Team Dinner Planning at PTG...

2018-08-21 Thread Jay S Bryant

All,

We talked in the Cinder team meeting about doing a joint Cinder/Manila 
team dinner at the PTG in Denver.


I have created a Doodle Poll to indicate what night would work best for 
everyone. [1]  Also, if you are planning to come please add your name in 
the Cinder Etherpad [2].


Look forward to seeing you all at the PTG!

Jay

[1] https://doodle.com/poll/8rm3ahdyhmrtx5gp#table

[2] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] How to mount NFS volume?

2018-08-17 Thread Ivan Kolodyazhny
Hi Clay,

Unfortunately, local-attach doesn't support NFS-based volumes for
security reasons. We don't have a good solution for multi-tenant
environments at the moment.
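If you just need to reach the data for debugging, a rough and entirely
unsupported sketch is to mount the export yourself and attach the backing
file through a loop device. This bypasses Cinder's attach workflow, so only
do it while the volume is detached, and the export path below is an
assumption - use whatever is in your nfs_shares_config file:

# With the NFS driver a volume is just a raw file named volume-<uuid> on the
# export, so it can be reached by mounting the share and using a loop device.
sudo mkdir -p /mnt/cinder-nfs
sudo mount -t nfs nfs-server.example.com:/cinder_volumes /mnt/cinder-nfs
sudo losetup --find --show \
    /mnt/cinder-nfs/volume-3f66c360-e2e1-471e-aa36-57db3fcf3bdb
# losetup prints the loop device it allocated (e.g. /dev/loop0), which can
# then be mounted like any other local block device:
sudo mount /dev/loop0 /mnt/tmp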

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Aug 17, 2018 at 12:03 PM, Chang, Clay (HPS OE-Linux TDC) <
cl...@hpe.com> wrote:

> Hi,
>
>
>
> I have Cinder configured with NFS backend. On one bare metal node, I can
> use ‘cinder create’ to create the volume with specified size – I saw a
> volume file created on the NFS server, so I suppose the NFS was configured
> correctly.
>
>
>
> My question is, how could I mount the NFS volume on the bare metal node?
>
>
>
> I tried:
>
>
>
> cinder local-attach 3f66c360-e2e1-471e-aa36-57db3fcf3bdb --mountpoint
> /mnt/tmp
>
>
>
> it says:
>
>
>
> “ERROR: Connect to volume via protocol NFS not supported”
>
>
>
> I looked at https://github.com/openstack/python-brick-cinderclient-ext/b
> lob/master/brick_cinderclient_ext/volume_actions.py, found only iSCSI,
> RBD and FIBRE_CHANNEL were supported.
>
>
>
> Wondering if there are ways to mount the NFS volume?
>
>
>
> Thanks,
>
> Clay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] How to mount NFS volume?

2018-08-17 Thread Chang, Clay (HPS OE-Linux TDC)
Hi,

I have Cinder configured with NFS backend. On one bare metal node, I can use 
'cinder create' to create a volume with the specified size - I saw a volume file 
created on the NFS server, so I suppose NFS was configured correctly.

My question is, how could I mount the NFS volume on the bare metal node?

I tried:

cinder local-attach 3f66c360-e2e1-471e-aa36-57db3fcf3bdb --mountpoint /mnt/tmp

it says:

"ERROR: Connect to volume via protocol NFS not supported"

I looked at 
https://github.com/openstack/python-brick-cinderclient-ext/blob/master/brick_cinderclient_ext/volume_actions.py,
 found only iSCSI, RBD and FIBRE_CHANNEL were supported.

Wondering if there are ways to mount the NFS volume?

Thanks,
Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Ben Nemec

Okay, thanks.  There's no Sigyn in openstack-oslo so I think we're good. :-)

On 08/14/2018 10:37 AM, Jay S Bryant wrote:

Ben,

Don't fully understand why it was kicking me.  I guess one of the 
behaviors that is considered suspicious is trying to message a bunch of 
nicks at once.  I had tried reducing the number of people in my ping but 
it still kicked me and so I decided to not risk it again.


Sounds like the moral of the story is if sigyn is in the channel, be 
careful.  :-)


Jay


On 8/13/2018 4:06 PM, Ben Nemec wrote:



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 
16:00 UTC.  I bring this up as I can no longer send the courtesy 
pings without being kicked from IRC.  So, if you wish to join the 
meeting please add a reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Amy Marrich
That bot is indeed missing from the channel

Amy (spotz)

On Mon, Aug 13, 2018 at 5:44 PM, Jeremy Stanley  wrote:

> On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:
> > I know we did a ping last week in #openstack-ansible for our meeting no
> > issue. I wonder if it's a length of names thing or a channel setting.
> [...]
>
> Freenode's Sigyn bot may not have been invited to
> #openstack-ansible. We might want to consider kicking it from
> channels while they have nick registration enforced.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Jay S Bryant



On 8/13/2018 5:44 PM, Jeremy Stanley wrote:

On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:

I know we did a ping last week in #openstack-ansible for our meeting no
issue. I wonder if it's a length of names thing or a channel setting.

[...]

Freenode's Sigyn bot may not have been invited to
#openstack-ansible. We might want to consider kicking it from
channels while they have nick registration enforced.

It does seem that we don't really need the monitoring if registration is 
enforced.  I would be up for doing this.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-14 Thread Jay S Bryant

Ben,

Don't fully understand why it was kicking me.  I guess one of the 
behaviors that is considered suspicious is trying to message a bunch of 
nicks at once.  I had tried reducing the number of people in my ping but 
it still kicked me and so I decided to not risk it again.


Sounds like the moral of the story is if sigyn is in the channel, be 
careful.  :-)


Jay


On 8/13/2018 4:06 PM, Ben Nemec wrote:



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 
16:00 UTC.  I bring this up as I can no longer send the courtesy 
pings without being kicked from IRC.  So, if you wish to join the 
meeting please add a reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Jeremy Stanley
On 2018-08-13 16:29:27 -0500 (-0500), Amy Marrich wrote:
> I know we did a ping last week in #openstack-ansible for our meeting no
> issue. I wonder if it's a length of names thing or a channel setting.
[...]

Freenode's Sigyn bot may not have been invited to
#openstack-ansible. We might want to consider kicking it from
channels while they have nick registration enforced.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Amy Marrich
I know we did a ping last week in #openstack-ansible for our meeting with no
issue. I wonder if it's a length of names thing or a channel setting.

Amy (spotz)

On Mon, Aug 13, 2018 at 4:25 PM, Eric Fried  wrote:

> Are you talking about the nastygram from "Sigyn" saying:
>
> "Your actions in # tripped automated anti-spam measures
> (nicks/hilight spam), but were ignored based on your time in channel;
> stop now, or automated action will still be taken. If you have any
> questions, please don't hesitate to contact a member of staff"
>
> I'm getting this too, and (despite the implication to the contrary) it
> sometimes cuts off my messages in an unpredictable spot.
>
> I'm contacting "a member of staff" to see if there's any way to get
> "whitelisted" for big messages. In the meantime, the only solution I'm
> aware of is to chop your pasteypaste up into smaller chunks, and wait a
> couple seconds between pastes.
>
> -efried
>
> On 08/13/2018 04:06 PM, Ben Nemec wrote:
> >
> >
> > On 08/08/2018 12:04 PM, Jay S Bryant wrote:
> >> Team,
> >>
> >> A reminder that we have our weekly Cinder meeting on Wednesdays at
> >> 16:00 UTC.  I bring this up as I can no longer send the courtesy pings
> >> without being kicked from IRC.  So, if you wish to join the meeting
> >> please add a reminder to your calendar of choice.
> >
> > Do you have any idea why you're being kicked?  I'm wondering how to
> > avoid getting into this situation with the Oslo pings.
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Eric Fried
Are you talking about the nastygram from "Sigyn" saying:

"Your actions in # tripped automated anti-spam measures
(nicks/hilight spam), but were ignored based on your time in channel;
stop now, or automated action will still be taken. If you have any
questions, please don't hesitate to contact a member of staff"

I'm getting this too, and (despite the implication to the contrary) it
sometimes cuts off my messages in an unpredictable spot.

I'm contacting "a member of staff" to see if there's any way to get
"whitelisted" for big messages. In the meantime, the only solution I'm
aware of is to chop your pasteypaste up into smaller chunks, and wait a
couple seconds between pastes.

-efried

On 08/13/2018 04:06 PM, Ben Nemec wrote:
> 
> 
> On 08/08/2018 12:04 PM, Jay S Bryant wrote:
>> Team,
>>
>> A reminder that we have our weekly Cinder meeting on Wednesdays at
>> 16:00 UTC.  I bring this up as I can no longer send the courtesy pings
>> without being kicked from IRC.  So, if you wish to join the meeting
>> please add a reminder to your calendar of choice.
> 
> Do you have any idea why you're being kicked?  I'm wondering how to
> avoid getting into this situation with the Oslo pings.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Ben Nemec



On 08/08/2018 12:04 PM, Jay S Bryant wrote:

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 16:00 
UTC.  I bring this up as I can no longer send the courtesy pings without 
being kicked from IRC.  So, if you wish to join the meeting please add a 
reminder to your calendar of choice.


Do you have any idea why you're being kicked?  I'm wondering how to 
avoid getting into this situation with the Oslo pings.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc1 (rocky)

2018-08-09 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

http://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Wed, Aug 08, 2018 at 05:15:26PM +, Sean McGinnis wrote:
> On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> > >Hi Cinder and API-SIG folks,
> > >
> > >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> > >3.0 was changed.
> > >Cinder introduced more strict schema validation for creating/updating
> > >volume encryption type
> > >during Rocky and a new micro version 3.53 was introduced[1].
> > >
> > >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> > >but after [1] landed unused fields are now rejected even when Cinder API
> > >3.0 is used.
> > >In my understanding on the microversioning, the existing behavior for
> > >older versions should be kept.
> > >Is it correct?
> > 
> > I agree with your assessment that 3.0 was used there - and also that I would
> > expect the api validation to only change if 3.53 microversion was used.
> > 
> 
> I filed a bug to track this:
> 
> https://bugs.launchpad.net/cinder/+bug/1786054
> 

Sorry, between lack of attention to detail (lack of coffee?) and an incorrect
link, I think I went down the wrong rabbit hole.

The change was actually introduced in [0]. I have submitted [1] to allow the
additional parameters in the volume type encryption API. This was definitely an
oversight when we allowed that one through.

Apologies for the hassle this has caused.

[0] https://review.openstack.org/#/c/561140/
[1] https://review.openstack.org/#/c/590014/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
On Tue, Aug 07, 2018 at 05:27:06PM -0500, Monty Taylor wrote:
> On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
> >Hi Cinder and API-SIG folks,
> >
> >During reviewing a horizon bug [0], I noticed the behavior of Cinder API
> >3.0 was changed.
> >Cinder introduced more strict schema validation for creating/updating
> >volume encryption type
> >during Rocky and a new micro version 3.53 was introduced[1].
> >
> >Previously, Cinder API like 3.0 accepts unused fields in POST requests
> >but after [1] landed unused fields are now rejected even when Cinder API
> >3.0 is used.
> >In my understanding on the microversioning, the existing behavior for
> >older versions should be kept.
> >Is it correct?
> 
> I agree with your assessment that 3.0 was used there - and also that I would
> expect the api validation to only change if 3.53 microversion was used.
> 

I filed a bug to track this:

https://bugs.launchpad.net/cinder/+bug/1786054

But something doesn't seem right from what I've seen. I've put up a patch to
add some extra unit testing around this. I expected some of those unit tests to
fail, but everything seemed happy and working the way it is supposed to, with
versions prior to 3.53 accepting anything and 3.53 or later rejecting extra parameters.

Since that didn't work, I tried reproducing this against a running system using
curl. With no version specified (defaulting to the base 3.0 microversion)
creation succeeded:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: "Content-Type: application/json" -H "User-Agent: python-cinderclient"
-H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}'

I then tried specifying the microversion that introduces the strict schema
checking to make sure I was able to get the appropriate failure, which worked
as expected:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: "Content-Type: application/json" -H "User-Agent: python-cinderclient"
-H "X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New-mv353", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.53"
HTTP/1.1 400 Bad Request
...

And to test boundary conditions, I then specified the microversion just prior
to the one that enabled strict checking:

curl -g -i -X POST
http://192.168.1.234/volume/v3/95ae21ce92a34b3c92601f3304ea0a46/volumes -H
"Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-cinderclient" -H
"X-Auth-Token: $OS_TOKEN" -d '{"volume": {"backup_id": null, "description":
null, "multiattach": false, "source_volid": null, "consistencygroup_id": null,
"snapshot_id": null, "size": 1, "name": "New-mv352", "imageRef": null,
"availability_zone": null, "volume_type": null, "metadata": {}, "project_id":
"testing", "junk": "garbage"}}' -H "OpenStack-API-Version: volume 3.52"
HTTP/1.1 202 Accepted

In all cases except the strict checking one, the volume was created
successfully even though the junk extra parameters ("project_id": "testing",
"junk": "garbage") were provided.

So I'm missing something here. Is it possible horizon is requesting the latest
API version and not defaulting to 3.0?

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-08 Thread Jay S Bryant

Team,

A reminder that we have our weekly Cinder meeting on Wednesdays at 16:00 
UTC.  I bring this up as I can no longer send the courtesy pings without 
being kicked from IRC.  So, if you wish to join the meeting please add a 
reminder to your calendar of choice.


Thank you!

Jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-08 Thread Sean McGinnis
>  > > 
>  > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
>  > > but after [1] landed unused fields are now rejected even when Cinder API 
>  > > 3.0 is used.
>  > > In my understanding on the microversioning, the existing behavior for 
>  > > older versions should be kept.
>  > > Is it correct?
>  > 
>  > I agree with your assessment that 3.0 was used there - and also that I 
>  > would expect the api validation to only change if 3.53 microversion was 
>  > used.
> 
> +1. As you know, neutron also implemented strict validation in Rocky but with 
> discovery via a config option and the extensions mechanism. In the same way, Cinder 
> should keep the behavior backward compatible for versions before 3.53. 
> 

I agree. I _thought_ that was the way it was implemented, but apparently
something was missed.

I will try to look at this soon and see what would need to be changed to get
this behaving correctly. Unless someone else has the time and can beat me to
it, which would be very much appreciated.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Ghanshyam Mann



  On Wed, 08 Aug 2018 07:27:06 +0900 Monty Taylor  
wrote  
 > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
 > > Hi Cinder and API-SIG folks,
 > > 
 > > During reviewing a horizon bug [0], I noticed the behavior of Cinder API 
 > > 3.0 was changed.
 > > Cinder introduced more strict schema validation for creating/updating 
 > > volume encryption type
 > > during Rocky and a new micro version 3.53 was introduced[1].
 > > 
 > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
 > > but after [1] landed unused fields are now rejected even when Cinder API 
 > > 3.0 is used.
 > > In my understanding on the microversioning, the existing behavior for 
 > > older versions should be kept.
 > > Is it correct?
 > 
 > I agree with your assessment that 3.0 was used there - and also that I 
 > would expect the api validation to only change if 3.53 microversion was 
 > used.

+1. As you know, neutron also implemented strict validation in Rocky but with 
discovery via a config option and the extensions mechanism. In the same way, Cinder 
should keep the behavior backward compatible for versions before 3.53. 

-gmann 

 > 
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Monty Taylor

On 08/07/2018 05:03 PM, Akihiro Motoki wrote:

Hi Cinder and API-SIG folks,

During reviewing a horizon bug [0], I noticed the behavior of Cinder API 
3.0 was changed.
Cinder introduced more strict schema validation for creating/updating 
volume encryption type

during Rocky and a new micro version 3.53 was introduced[1].

Previously, Cinder API like 3.0 accepts unused fields in POST requests
but after [1] landed unused fields are now rejected even when Cinder API 
3.0 is used.
In my understanding on the microversioning, the existing behavior for 
older versions should be kept.

Is it correct?


I agree with your assessment that 3.0 was used there - and also that I 
would expect the api validation to only change if 3.53 microversion was 
used.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Akihiro Motoki
Hi Cinder and API-SIG folks,

During reviewing a horizon bug [0], I noticed the behavior of Cinder API
3.0 was changed.
Cinder introduced more strict schema validation for creating/updating
volume encryption type
during Rocky and a new micro version 3.53 was introduced[1].

Previously, Cinder API like 3.0 accepts unused fields in POST requests
but after [1] landed unused fields are now rejected even when Cinder API
3.0 is used.
In my understanding on the microversioning, the existing behavior for older
versions should be kept.
Is it correct?

Thanks,
Akihiro

[0] https://bugs.launchpad.net/horizon/+bug/1783467
[1] https://review.openstack.org/#/c/573093/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-08-01 Thread John Griffith
On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann  wrote:

> On 7/16/2018 4:20 AM, Gorka Eguileor wrote:
> > If I remember correctly the driver was deprecated because it had no
> > maintainer or CI.  In Cinder we require our drivers to have both,
> > otherwise we can't guarantee that they actually work or that anyone will
> > fix it if it gets broken.
>
> Would this really require 3rd party CI if it's just local block storage
> on the compute node (in devstack)? We could do that with an upstream CI
> job right? We already have upstream CI jobs for things like rbd and nfs.
> The 3rd party CI requirements generally are for proprietary storage
> backends.
>
> I'm only asking about the CI side of this, the other notes from Sean
> about tweaking the LVM volume backend and feature parity are good
> reasons for removal of the unmaintained driver.
>
> Another option is using the nova + libvirt + lvm image backend for local
> (to the VM) ephemeral disk:
>
>
> https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We've had this conversation multiple times, here were the results from past
conversations and the reasons we deprecated:
1. Driver was not being tested at all (no CI, no upstream tests etc)
2. We sent out numerous requests trying to determine if anybody was using
the driver, didn't receive much feedback
3. The driver didn't work for an entire release, this indicated that
perhaps it wasn't that valuable
4. The driver is unable to implement a number of the required features for
a Cinder Block Device
5. Digging deeper into performance tests most comparisons were doing things
like
a. Using the shared single nic that's used for all of the cluster
communications (ie DB, APIs, Rabbit etc)
b. Misconfigured deployment, ie using a 1Gig Nic for iSCSI connections
(also see above)

The decision was that raw-block was not by definition a "Cinder Device",
and given that it wasn't really tested or
maintained that it should be removed.  LVM is actually quite good, we did
some pretty extensive testing and even
presented it as a session in Barcelona that showed perf within
approximately 10%.  I'm skeptical any time I see
dramatic comparisons of 1/2 performance, but I could be completely wrong.

I would be much more interested in putting efforts towards trying to figure
out why you have such a large perf
delta and see if we can address that as opposed to trying to bring back and
maintain a driver that only half
works.

Or as Jay Pipes mentioned, don't use Cinder in your case.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] - Barbican w/Live Migration in DevStack Multinode

2018-07-30 Thread Walsh, Helen
Hi OpenStack Community,

I am having some issues with key management in a multinode devstack (from 
master branch 27th July '18) environment where Barbican is the configured 
key_manager.  I have followed setup instructions from the following pages:

  *   https://docs.openstack.org/barbican/latest/contributor/devstack.html 
(manual configuration)
  *   
https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-encryption.html

So far:

  *   Unencrypted block volumes can be attached to instances on any compute node
  *   Instances with unencrypted volumes can also be live migrated to other 
compute node
  *   Encrypted bootable volumes created successfully
  *   Instances can be launched using these encrypted volumes when the instance 
is spawned on demo_machine1 (controller & compute node)
  *   Instances cannot be launched using encrypted volumes when the instance is 
spawned on demo_machine2 or demo_machine3 (compute only), the same failure can 
be seen in nova logs from both compute nodes:

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG cinderclient.v3.client 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] GET call to 
cinderv3 for 
http://10.0.0.63/volume/v3/3f22a0262a7b4832a08c24ac0295cbd9/volumes/296148bf-edb8-4c9f-88c2-44464907f7e7/encryption
 used request id req-71fa7f20-c0bc-46c3-9f07-5866344d31a1 {{(pid=25686) request 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:844}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG os_brick.encryptors 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Using volume 
encryption metadata '{u'cipher': u'aes-xts-plain64', u'encryption_key_id': 
u'da7ee21c-67ff-4d74-95a0-18ee6c25d85a', u'provider': u'luks', u'key_size': 
256, u'control_location': u'front-end'}' for connection: {'status': 
u'attaching', 'detached_at': u'', u'volume_id': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'attach_mode': u'null', 
'driver_volume_type': u'iscsi', 'instance': 
u'e0dc6eac-09bb-4232-bea7-7b8b161cfa31', 'attached_at': 
u'2018-07-30T13:35:17.00', 'serial': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'data': {'device_path': 
'/dev/disk/by-id/scsi-SEMC_SYMMETRIX_900049_wy000', u'target_discovered': True, 
u'encrypted': True, u'qos_specs': None, u'target_iqn': 
u'iqn.1992-04.com.emc:69700bcbb7112504018f', u'target_portal': 
u'192.168.0.60:3260', u'volume_id': u'296148bf-edb8-4c9f-88c2-44464907f7e7', 
u'target_lun': 1, u'access_mode': u'rw'}} {{(pid=25686) get_encryption_metadata 
/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py:125}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: WARNING 
keystoneauth.identity.generic.base [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Failed to discover 
available identity versions when contacting http://localhost/identity/v3. 
Attempting to parse version from URL.: NotFound: Not Found (HTTP 404)

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: ERROR 
castellan.key_manager.barbican_key_manager [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Error creating Barbican 
client: Could not find versioned identity endpoints when attempting to 
authenticate. Please check that your auth_url is correct. Not Found (HTTP 404): 
DiscoveryFailure: Could not find versioned identity endpoints when attempting 
to authenticate. Please check that your auth_url is correct. Not Found (HTTP 
404)

All instances of Nova have [key_manager] configured as follows:
[key_manager]
backend = barbican
auth_url = http://10.0.0.63/identity/
### Tried with and without the below config options, same result
# auth_type = password
# password = devstack
# username = barbican
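
For reference, castellan's Barbican backend can also read a [barbican]
section; below is a sketch of what I would expect that to look like on the
compute nodes (the barbican_endpoint and auth_endpoint option names are an
assumption about the castellan version installed here, and the URLs are
illustrative, pointing at the controller):

[barbican]
barbican_endpoint = http://10.0.0.63/key-manager
auth_endpoint = http://10.0.0.63/identity/v3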

Any assistance here would be greatly appreciated. I have spent a lot of time
looking for additional information on using Barbican in multinode devstack
environments or with live migration, but there is nothing out there; everything
is for all-in-one environments, and I'm not having any issues when everything
is on one node. I am wondering if there is something I am missing in terms of
services in a multinode devstack environment. Qualification of Barbican in a
multinode environment is outside of the recommended test config, but following
the docs it looks very straightforward.

Some information on the three nodes in my environment is below. If there is
any other information I can provide, let me know. Thanks for the help!

Node & Service Breakdown
Node 1 (Controller & Compute)
stack@demo_machine1:~$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 43a1334c755c4c81969565097cc9c30c | cinder      | volume         |
| 52a8927c09154e33900f24c7c95a9f8b | cinderv2    | volumev2       |
| 5427a9dff3b6477197062e1747843c4d | nova_legacy | compute_legacy |
| 

Re: [openstack-dev] [cinder] about block device driver

2018-07-27 Thread Matt Riedemann

On 7/16/2018 4:20 AM, Gorka Eguileor wrote:

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.


Would this really require 3rd party CI if it's just local block storage
on the compute node (in devstack)? We could do that with an upstream CI
job, right? We already have upstream CI jobs for things like rbd and nfs.
The 3rd party CI requirements generally are for proprietary storage 
backends.


I'm only asking about the CI side of this, the other notes from Sean 
about tweaking the LVM volume backend and feature parity are good 
reasons for removal of the unmaintained driver.


Another option is using the nova + libvirt + lvm image backend for local 
(to the VM) ephemeral disk:


https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
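
A rough nova.conf sketch of that option, for comparison (the volume group
name is illustrative, and the option names should be double-checked against
the libvirt driver docs for your release):

[libvirt]
images_type = lvm
images_volume_group = nova-vg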

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-24 Thread Sean McGinnis
On Tue, Jul 24, 2018 at 06:07:24PM +0800, Rambo wrote:
> Hi,all
> 
> 
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver is 
> being deprecated, and was eventually be removed with the Queens release.
> 
> 
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>  
> 
> 
>  However,I want to use it out of tree,but I don't know how to use it out 
> of tree,Can you share me a doc? Thank you very much!
> 

I don't think we have any community documentation on how to use out of tree
drivers, but it's fairly straightforward.

You can just drop the block_device.py file into the cinder/volume/drivers
directory and configure its use in cinder.conf using the same volume_driver
setting as before.

I'm not sure if anything has changed since Ocata that would require updates
to the driver, but I would expect most base functionality to still work.
Just a word of warning that the driver may need some updates if you find
issues with it.
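
As a sketch, the cinder.conf side would look something like this (the
backend section name and device list are illustrative only, and
available_devices is assumed to still be the option the old driver used):

[DEFAULT]
enabled_backends = blockdev

[blockdev]
volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
volume_backend_name = blockdev
available_devices = /dev/sdb,/dev/sdc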

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-24 Thread Lee Yarwood
On 20-07-18 08:10:37, Erlon Cruz wrote:
> Nice, good to know. Thanks all for the feedback. We will fix that in our
> drivers.

FWIW Nova does not and AFAICT never has called os-force_detach.

We previously used os-terminate_connection with v2 where the connector
was optional. Even then we always provided one, even providing the
destination connector during an evacuation when the source connector
wasn't stashed in connection_info.
 
> @Walter, so, in this case, if Cinder has the connector, it should not need
> to call the driver passing a None object right?

Yeah, I don't think this is an issue with v3 given the connector is
stashed with the attachment, so all we require is a reference to the
attachment to clean up the connection during evacuations etc.
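
(As an illustration from the CLI side -- assuming a microversion that
supports the new attachment API, 3.27 or later -- something like
  cinder --os-volume-api-version 3.27 attachment-delete <attachment-id>
should be enough to clean up the connection recorded for that attachment.)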

Lee
 
> Erlon
> 
> Em qua, 18 de jul de 2018 às 12:56, Walter Boring 
> escreveu:
> 
> > The whole purpose of this test is to simulate the case where Nova doesn't
> > know where the vm is anymore,
> > or may simply not exist, but we need to clean up the cinder side of
> > things.   That being said, with the new
> > attach API, the connector is being saved in the cinder database for each
> > volume attachment.
> >
> > Walt
> >
> > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> > wrote:
> >
> >> On 17/07, Sean McGinnis wrote:
> >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> >> > > Hi Cinder and Nova folks,
> >> > >
> >> > > Working on some tests for our drivers, I stumbled upon this tempest
> >> test
> >> > > 'force_detach_volume'
> >> > > that is calling Cinder API passing a 'None' connector. At the time
> >> this was
> >> > > added several CIs
> >> > > went down, and people started discussing whether this
> >> (accepting/sending a
> >> > > None connector)
> >> > > would be the proper behavior for what is expected to a driver to
> >> do[1]. So,
> >> > > some of CIs started
> >> > > just skipping that test[2][3][4] and others implemented fixes that
> >> made the
> >> > > driver to disconnected
> >> > > the volume from all hosts if a None connector was received[5][6][7].
> >> >
> >> > Right, it was determined the correct behavior for this was to
> >> disconnect the
> >> > volume from all hosts. The CIs that are skipping this test should stop
> >> doing so
> >> > (once their drivers are fixed of course).
> >> >
> >> > >
> >> > > While implementing this fix seems to be straightforward, I feel that
> >> just
> >> > > removing the volume
> >> > > from all hosts is not the correct thing to do mainly considering that
> >> we
> >> > > can have multi-attach.
> >> > >
> >> >
> >> > I don't think multiattach makes a difference here. Someone is forcibly
> >> > detaching the volume and not specifying an individual connection. So
> >> based on
> >> > that, Cinder should be removing any connections, whether that is to one
> >> or
> >> > several hosts.
> >> >
> >>
> >> Hi,
> >>
> >> I agree with Sean, drivers should remove all connections for the volume.
> >>
> >> Even without multiattach there are cases where you'll have multiple
> >> connections for the same volume, like in a Live Migration.
> >>
> >> It's also very useful when Nova and Cinder get out of sync and your
> >> volume has leftover connections. In this case if you try to delete the
> >> volume you get a "volume in use" error from some drivers.
> >>
> >> Cheers,
> >> Gorka.
> >>
> >>
> >> > > So, my questions are: What is the best way to fix this problem? Should
> >> > > Cinder API continue to
> >> > > accept detachments with None connectors? If, so, what would be the
> >> effects
> >> > > on other Nova
> >> > > attachments for the same volume? Is there any side effect if the
> >> volume is
> >> > > not multi-attached?
> >> > >
> >> > > Additionally to this thread here, I should bring this topic to
> >> tomorrow's
> >> > > Cinder's meeting,
> >> > > so please join if you have something to share.
> >> > >
> >> >
> >> > +1 - good plan.
> >> >
> >> >
> >> >
> >> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage 

[openstack-dev] [cinder] about block device driver

2018-07-24 Thread Rambo
Hi,all


 In the Cinder repository, I noticed that the BlockDeviceDriver driver is
being deprecated, and will eventually be removed with the Queens release.


https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py


 However, I want to use it out of tree, but I don't know how to do that.
Can you share a doc with me? Thank you very much!


Best Regards
Rambo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-20 Thread Erlon Cruz
Nice, good to know. Thanks all for the feedback. We will fix that in our
drivers.

@Walter, so, in this case, if Cinder has the connector, it should not need
to call the driver passing a None object right?

Erlon

Em qua, 18 de jul de 2018 às 12:56, Walter Boring 
escreveu:

> The whole purpose of this test is to simulate the case where Nova doesn't
> know where the vm is anymore,
> or may simply not exist, but we need to clean up the cinder side of
> things.   That being said, with the new
> attach API, the connector is being saved in the cinder database for each
> volume attachment.
>
> Walt
>
> On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor 
> wrote:
>
>> On 17/07, Sean McGinnis wrote:
>> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
>> > > Hi Cinder and Nova folks,
>> > >
>> > > Working on some tests for our drivers, I stumbled upon this tempest
>> test
>> > > 'force_detach_volume'
>> > > that is calling Cinder API passing a 'None' connector. At the time
>> this was
>> > > added several CIs
>> > > went down, and people started discussing whether this
>> (accepting/sending a
>> > > None connector)
>> > > would be the proper behavior for what is expected to a driver to
>> do[1]. So,
>> > > some of CIs started
>> > > just skipping that test[2][3][4] and others implemented fixes that
>> made the
>> > > driver to disconnected
>> > > the volume from all hosts if a None connector was received[5][6][7].
>> >
>> > Right, it was determined the correct behavior for this was to
>> disconnect the
>> > volume from all hosts. The CIs that are skipping this test should stop
>> doing so
>> > (once their drivers are fixed of course).
>> >
>> > >
>> > > While implementing this fix seems to be straightforward, I feel that
>> just
>> > > removing the volume
>> > > from all hosts is not the correct thing to do mainly considering that
>> we
>> > > can have multi-attach.
>> > >
>> >
>> > I don't think multiattach makes a difference here. Someone is forcibly
>> > detaching the volume and not specifying an individual connection. So
>> based on
>> > that, Cinder should be removing any connections, whether that is to one
>> or
>> > several hosts.
>> >
>>
>> Hi,
>>
>> I agree with Sean, drivers should remove all connections for the volume.
>>
>> Even without multiattach there are cases where you'll have multiple
>> connections for the same volume, like in a Live Migration.
>>
>> It's also very useful when Nova and Cinder get out of sync and your
>> volume has leftover connections. In this case if you try to delete the
>> volume you get a "volume in use" error from some drivers.
>>
>> Cheers,
>> Gorka.
>>
>>
>> > > So, my questions are: What is the best way to fix this problem? Should
>> > > Cinder API continue to
>> > > accept detachments with None connectors? If, so, what would be the
>> effects
>> > > on other Nova
>> > > attachments for the same volume? Is there any side effect if the
>> volume is
>> > > not multi-attached?
>> > >
>> > > Additionally to this thread here, I should bring this topic to
>> tomorrow's
>> > > Cinder's meeting,
>> > > so please join if you have something to share.
>> > >
>> >
>> > +1 - good plan.
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Walter Boring
The whole purpose of this test is to simulate the case where Nova doesn't
know where the vm is anymore,
or may simply not exist, but we need to clean up the cinder side of
things.   That being said, with the new
attach API, the connector is being saved in the cinder database for each
volume attachment.

Walt

On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor  wrote:

> On 17/07, Sean McGinnis wrote:
> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > > Hi Cinder and Nova folks,
> > >
> > > Working on some tests for our drivers, I stumbled upon this tempest
> test
> > > 'force_detach_volume'
> > > that is calling Cinder API passing a 'None' connector. At the time
> this was
> > > added several CIs
> > > went down, and people started discussing whether this
> (accepting/sending a
> > > None connector)
> > > would be the proper behavior for what is expected to a driver to
> do[1]. So,
> > > some of CIs started
> > > just skipping that test[2][3][4] and others implemented fixes that
> made the
> > > driver to disconnected
> > > the volume from all hosts if a None connector was received[5][6][7].
> >
> > Right, it was determined the correct behavior for this was to disconnect
> the
> > volume from all hosts. The CIs that are skipping this test should stop
> doing so
> > (once their drivers are fixed of course).
> >
> > >
> > > While implementing this fix seems to be straightforward, I feel that
> just
> > > removing the volume
> > > from all hosts is not the correct thing to do mainly considering that
> we
> > > can have multi-attach.
> > >
> >
> > I don't think multiattach makes a difference here. Someone is forcibly
> > detaching the volume and not specifying an individual connection. So
> based on
> > that, Cinder should be removing any connections, whether that is to one
> or
> > several hosts.
> >
>
> Hi,
>
> I agree with Sean, drivers should remove all connections for the volume.
>
> Even without multiattach there are cases where you'll have multiple
> connections for the same volume, like in a Live Migration.
>
> It's also very useful when Nova and Cinder get out of sync and your
> volume has leftover connections. In this case if you try to delete the
> volume you get a "volume in use" error from some drivers.
>
> Cheers,
> Gorka.
>
>
> > > So, my questions are: What is the best way to fix this problem? Should
> > > Cinder API continue to
> > > accept detachments with None connectors? If, so, what would be the
> effects
> > > on other Nova
> > > attachments for the same volume? Is there any side effect if the
> volume is
> > > not multi-attached?
> > >
> > > Additionally to this thread here, I should bring this topic to
> tomorrow's
> > > Cinder's meeting,
> > > so please join if you have something to share.
> > >
> >
> > +1 - good plan.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Gorka Eguileor
On 17/07, Sean McGinnis wrote:
> On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > Hi Cinder and Nova folks,
> >
> > Working on some tests for our drivers, I stumbled upon this tempest test
> > 'force_detach_volume'
> > that is calling Cinder API passing a 'None' connector. At the time this was
> > added several CIs
> > went down, and people started discussing whether this (accepting/sending a
> > None connector)
> > would be the proper behavior for what is expected to a driver to do[1]. So,
> > some of CIs started
> > just skipping that test[2][3][4] and others implemented fixes that made the
> > driver to disconnected
> > the volume from all hosts if a None connector was received[5][6][7].
>
> Right, it was determined the correct behavior for this was to disconnect the
> volume from all hosts. The CIs that are skipping this test should stop doing 
> so
> (once their drivers are fixed of course).
>
> >
> > While implementing this fix seems to be straightforward, I feel that just
> > removing the volume
> > from all hosts is not the correct thing to do mainly considering that we
> > can have multi-attach.
> >
>
> I don't think multiattach makes a difference here. Someone is forcibly
> detaching the volume and not specifying an individual connection. So based on
> that, Cinder should be removing any connections, whether that is to one or
> several hosts.
>

Hi,

I agree with Sean, drivers should remove all connections for the volume.

Even without multiattach there are cases where you'll have multiple
connections for the same volume, like in a Live Migration.

It's also very useful when Nova and Cinder get out of sync and your
volume has leftover connections. In this case if you try to delete the
volume you get a "volume in use" error from some drivers.

Cheers,
Gorka.


> > So, my questions are: What is the best way to fix this problem? Should
> > Cinder API continue to
> > accept detachments with None connectors? If, so, what would be the effects
> > on other Nova
> > attachments for the same volume? Is there any side effect if the volume is
> > not multi-attached?
> >
> > Additionally to this thread here, I should bring this topic to tomorrow's
> > Cinder's meeting,
> > so please join if you have something to share.
> >
>
> +1 - good plan.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Sean McGinnis
On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> Hi Cinder and Nova folks,
> 
> Working on some tests for our drivers, I stumbled upon this tempest test
> 'force_detach_volume'
> that is calling Cinder API passing a 'None' connector. At the time this was
> added several CIs
> went down, and people started discussing whether this (accepting/sending a
> None connector)
> would be the proper behavior for what is expected to a driver to do[1]. So,
> some of CIs started
> just skipping that test[2][3][4] and others implemented fixes that made the
> driver to disconnected
> the volume from all hosts if a None connector was received[5][6][7].

Right, it was determined the correct behavior for this was to disconnect the
volume from all hosts. The CIs that are skipping this test should stop doing so
(once their drivers are fixed of course).

> 
> While implementing this fix seems to be straightforward, I feel that just
> removing the volume
> from all hosts is not the correct thing to do mainly considering that we
> can have multi-attach.
> 

I don't think multiattach makes a difference here. Someone is forcibly
detaching the volume and not specifying an individual connection. So based on
that, Cinder should be removing any connections, whether that is to one or
several hosts.
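
As a rough sketch of what that could look like inside a driver (the helper
name is illustrative, not any specific driver's API, and it assumes the
attachment records carry the saved connector, as noted elsewhere in this
thread):

    def terminate_connection(self, volume, connector, **kwargs):
        if connector is None:
            # Forced detach with no connector: remove every host/initiator
            # mapping the backend has for this volume.
            for attachment in volume.volume_attachment:
                self._remove_export_for_connector(volume,
                                                  attachment.connector)
            return
        # Normal case: remove only the mapping for the given connector.
        self._remove_export_for_connector(volume, connector)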

> So, my questions are: What is the best way to fix this problem? Should
> Cinder API continue to
> accept detachments with None connectors? If, so, what would be the effects
> on other Nova
> attachments for the same volume? Is there any side effect if the volume is
> not multi-attached?
> 
> Additionally to this thread here, I should bring this topic to tomorrow's
> Cinder's meeting,
> so please join if you have something to share.
> 

+1 - good plan.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Erlon Cruz
Hi Cinder and Nova folks,

Working on some tests for our drivers, I stumbled upon the tempest test
'force_detach_volume', which calls the Cinder API passing a 'None' connector.
At the time this was added, several CIs went down, and people started
discussing whether this (accepting/sending a None connector) would be the
proper behavior for what a driver is expected to do[1]. So, some CIs started
just skipping that test[2][3][4], and others implemented fixes that made the
driver disconnect the volume from all hosts if a None connector was
received[5][6][7].
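
For reference, the call in question is the volume action API. The request
shape below is my reading of it and is worth double-checking against the
API reference:

  POST /v3/{project_id}/volumes/{volume_id}/action
  {"os-force_detach": {"attachment_id": "<attachment-uuid>", "connector": null}}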

While implementing this fix seems to be straightforward, I feel that just
removing the volume
from all hosts is not the correct thing to do mainly considering that we
can have multi-attach.

So, my questions are: What is the best way to fix this problem? Should
Cinder API continue to
accept detachments with None connectors? If, so, what would be the effects
on other Nova
attachments for the same volume? Is there any side effect if the volume is
not multi-attached?

Additionally to this thread here, I should bring this topic to tomorrow's
Cinder's meeting,
so please join if you have something to share.

Erlon

___
[1] https://bugs.launchpad.net/cinder/+bug/1686278
[2]
https://openstack-ci-logs.aws.infinidat.com/14/578114/2/check/dsvm-tempest-infinibox-fc/14fa930/console.html
[3]
http://54.209.116.144/14/578114/2/check/kaminario-dsvm-tempest-full-iscsi/ce750c8/console.html
[4]
http://logs.openstack.netapp.com/logs/14/578114/2/upstream-check/cinder-cDOT-iSCSI/8e2c549/console.html#_2018-07-16_20_06_16_937286
[5]
https://review.openstack.org/#/c/551832/1/cinder/volume/drivers/dell_emc/vnx/adapter.py
[6]
https://review.openstack.org/#/c/550324/2/cinder/volume/drivers/hpe/hpe_3par_common.py
[7]
https://review.openstack.org/#/c/536778/2/cinder/volume/drivers/infinidat.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   5   6   7   8   9   10   >