[openstack-dev] Regarding cache-based cross-VM side channel attacks in OpenStack

2018-08-23 Thread Darshan Tank
Dear Sir,

I would like to know whether cache-based cross-VM side-channel attacks are
possible against OpenStack VMs or not.

If the answer to the above question is no, what mechanisms does OpenStack
employ to prevent or mitigate such security threats?

I'm looking forward to hearing from you.

Thanks in advance for your support.

With Warm Regards,
Darshan Tank

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] Zoom URL for Aug 29 meeting

2018-08-23 Thread Nadathur, Sundar
Please use this invite instead, because it does not have the time limits 
of the old one (updated in the Cyborg wiki as well).


Time: Aug 29, 2018 10:00 AM Eastern Time (US and Canada)

Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/395326369

Or iPhone one-tap :
    US: +16699006833,,395326369#  or +16465588665,,395326369#

Or Telephone:
    Dial(for higher quality, dial a number based on your current 
location):

        US: +1 669 900 6833  or +1 646 558 8665

    Meeting ID: 395 326 369

    International numbers available: https://zoom.us/u/eGbqK3pMh

Thanks,
Sundar


On 8/22/2018 11:39 PM, Nadathur, Sundar wrote:


For the August 29 weekly meeting [1], the main agenda is the 
discussion of Cyborg device/data models.


We will use this meeting invite to present slides:

Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/189707867

Or iPhone one-tap :
    US: +16465588665,,189707867#  or +14086380986,,189707867#
Or Telephone:
    Dial(for higher quality, dial a number based on your current 
location):

    US: +1 646 558 8665  or +1 408 638 0986
    Meeting ID: 189 707 867
    International numbers available: https://zoom.us/u/dnYoZcYYJ

[1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting

Regards,
Sundar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Doug Hellmann


> On Aug 23, 2018, at 4:01 PM, Ben Nemec  wrote:
> 
> 
> 
>> On 08/23/2018 12:25 PM, Doug Hellmann wrote:
>> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:
>>> Do you mean an actual fixture, that would be used like:
>>> 
>>>  class MyTestCase(testtools.TestCase):
>>>  def setUp(self):
>>>  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids
>>> 
>>>  def test_foo(self):
>>>  do_a_thing_with(self.uuids.foo)
>>> 
>>> ?
>>> 
>>> That's... okay I guess, but the refactoring necessary to cut over to it
>>> will now entail adding 'self.' to every reference. Is there any way
>>> around that?
>> That is what I had envisioned, yes.  In the absence of a global,
>> which we do not want, what other API would you propose?
> 
> If we put it in oslotest instead, would the global still be a problem? 
> Especially since mock has already established a pattern for this 
> functionality?

I guess all of the people who complained so loudly about the global in 
oslo.config are gone?

If we don’t care about the global then we could just put the code from Eric’s 
threadsafe version in oslo.utils somewhere. 

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cinder 13.0.0.0rc3 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for cinder for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/cinder/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/cinder/log/?h=stable/rocky

Release notes for cinder can be found at:

https://docs.openstack.org/releasenotes/cinder/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-23 Thread Andy Botting
Hi Jeremy,


> Can you comment more on what needs to be updated in Sahara? Are they
> simply issues in the UI (sahara-dashboard) or is there a problem
> consuming community images on the server side?


We haven't looked into it much yet, so I couldn't tell you.

I think it would be great to extend the Glance API to include a
visibility=all filter, so we can actually get ALL available images in a
single request; projects could then switch over to it.

It might need some thought on how to handle the new request against an older
version of Glance that doesn't support visibility=all, but I'm sure that
could be worked out.
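
As a purely illustrative sketch of that fallback idea (the endpoint URL and
token are placeholders, the visibility=all filter is the *proposed* behaviour
assuming an unsupported value returns a non-2xx response, and pagination is
ignored for brevity):

    import requests

    GLANCE_ENDPOINT = 'https://glance.example.com/v2'   # placeholder
    TOKEN = 'replace-with-a-keystone-token'             # placeholder
    HEADERS = {'X-Auth-Token': TOKEN}


    def list_all_images():
        # Try the proposed single-request filter first.
        resp = requests.get(GLANCE_ENDPOINT + '/images',
                            params={'visibility': 'all'}, headers=HEADERS)
        if resp.ok:
            return resp.json()['images']

        # Older Glance: fall back to one request per existing visibility value.
        images = []
        for vis in ('public', 'shared', 'community', 'private'):
            resp = requests.get(GLANCE_ENDPOINT + '/images',
                                params={'visibility': vis}, headers=HEADERS)
            resp.raise_for_status()
            images.extend(resp.json()['images'])
        return images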

It would be great to hear from one of the Glance devs what they think about
this approach.

cheers,
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-08-23 Thread Gilles Dubreuil



On 24/08/18 04:58, Slawomir Kaplonski wrote:

Hi Miguel,

I’m not sure, but maybe you were looking for these patches:

https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/graphql



Yes, that's the one. It's under Tristan Cacqueray's name, as he helped get 
it started.



Wiadomość napisana przez Miguel Lavalle  w dniu 
23.08.2018, o godz. 18:57:

Hi Gilles,

Ed pinged me earlier today in IRC in regards to this topic. After reading your 
message, I assumed that you had patches up for review in Gerrit. I looked for 
them, with the intent to list them in the agenda of the next Neutron team 
meeting, to draw attention to them. I couldn't find any, though: 
https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22

So, how can we help? This is our meetings schedule: 
http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are Down 
Under at UTC+10, the most convenient meeting for you is the one on Monday (even 
weeks), which would be Tuesday at 7am for you. Please note that we have an on 
demand section in our agenda: 
https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel free to 
add topics in that section when you have something to discuss with the Neutron 
team.


Now that we have a working base API serving GraphQL requests, we need to 
provide the data in accordance with oslo.policy and such.
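
For illustration, a hypothetical smoke test against the PoC branch could look
like the sketch below; the /graphql path, the query fields and the
endpoint/token values are all assumptions made for the example, not the actual
PoC schema.

    import json
    import requests

    NEUTRON_ENDPOINT = 'http://controller:9696'      # placeholder
    TOKEN = 'replace-with-a-keystone-token'          # placeholder

    query = '{ networks { id name status } }'        # hypothetical field names

    resp = requests.post(NEUTRON_ENDPOINT + '/graphql',
                         headers={'X-Auth-Token': TOKEN},
                         json={'query': query})
    resp.raise_for_status()
    print(json.dumps(resp.json(), indent=2))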


Thanks for the pointers. I'll add the latter to the agenda and will be 
at the next meeting.





Best regards

Miguel

On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil  wrote:


On 25/07/18 23:48, Ed Leafe wrote:
On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil  wrote:
The branch is now available under feature/graphql on the neutron core 
repository [1].
I wanted to follow up with you on this effort. I haven’t seen any activity on 
StoryBoard for several weeks now, and wanted to be sure that there was nothing 
blocking you that we could help with.


-- Ed Leafe



Hi Ed,

Thanks for following up.

There have been two essential counterproductive factors affecting the effort.

The first is that I've been busy attending to issues in other parts of my job.
The second one is the lack of response/follow-up from the Neutron core team.

We have all the plumbing in place but we need to layer the data through oslo 
policies.

Cheers,
Gilles


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

—
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gil...@redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [community][Rocky] Community Meeting: Rocky + project updates

2018-08-23 Thread Anne Bertucio
Hi all,

Updated meeting information below for the OpenStack Community Meeting on August 
30 at 3pm UTC. We’ll cover what’s new in the Rocky release, hear updates from 
the Airship, Kata Containers, StarlingX and Zuul projects, and get a preview of 
the Berlin Summit. Hope you can join us, but if not, it will be recorded!

When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada)
Topic: OpenStack Community Meeting 

Please click the link below to join the webinar: 
https://zoom.us/j/551803657

Or iPhone one-tap :
US: +16699006833,,551803657#  or +16468769923,,551803657# 
Or Telephone:
Dial(for higher quality, dial a number based on your current location): 
US: +1 669 900 6833  or +1 646 876 9923 
Webinar ID: 551 803 657
International numbers available: https://zoom.us/u/bh2jVweqf

Cheers,
Anne Bertucio
OpenStack Foundation
a...@openstack.org | irc: annabelleB





> On Aug 16, 2018, at 9:46 AM, Anne Bertucio  wrote:
> 
> Hi all,
> 
> Save the date for an OpenStack community meeting on August 30 at 3pm UTC. 
> This is the evolution of the “Marketing Community Release Preview” meeting 
> that we’ve had each cycle. While that meeting has always been open to all, we 
> wanted to expand the topics and encourage anyone who was interested in 
> getting updates on the Rocky release or the newer projects at OSF to attend. 
> 
> We’ll cover:
> —What’s new in Rocky
> (This info will still be at a fairly high level, so might not be new 
> information if you’re someone who stays up to date in the dev ML or is 
> actively involved in upstream work)
> 
> —Updates from Airship, Kata Containers, StarlingX, and Zuul
> 
> —What you can expect at the Berlin Summit in November
> 
> This meeting will be run over Zoom (look for info closer to the 30th) and 
> will be recorded, so if you can’t make the time, don’t panic! 
> 
> Cheers,
> Anne Bertucio
> OpenStack Foundation
> a...@openstack.org  | irc: annabelleB
> 
> 
> 
> 
> 
> ___
> Marketing mailing list
> market...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sahara-dashboard 9.0.0.0rc2 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for sahara-dashboard for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/sahara-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


https://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/rocky

Release notes for sahara-dashboard can be found at:

https://docs.openstack.org/releasenotes/sahara-dashboard/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sahara 9.0.0.0rc2 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for sahara for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/sahara/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/sahara/log/?h=stable/rocky

Release notes for sahara can be found at:

https://docs.openstack.org/releasenotes/sahara/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
> >
> > We are still missing releases for the following tempest plugins. Some are
> > pending getting pypi and release jobs set up, but please try to prioritize
> > getting these done as soon as possible.
> >
> > barbican-tempest-plugin
> > blazar-tempest-plugin
> > cloudkitty-tempest-plugin
> > congress-tempest-plugin
> > ec2api-tempest-plugin
> > magnum-tempest-plugin
> > mistral-tempest-plugin
> > monasca-kibana-plugin
> > monasca-tempest-plugin
> > murano-tempest-plugin
> > networking-generic-switch-tempest-plugin
> > oswin-tempest-plugin
> > senlin-tempest-plugin
> > telemetry-tempest-plugin
> > tripleo-common-tempest-plugin
> 
> To speak for the tripleo-common-tempest-plugin, it's currently not
> used and there aren't any tests, so I don't think it's in a spot for
> its first release during Rocky. I'm not sure of the current status of
> this effort, so it'll be something we'll need to raise at the PTG.
> 

Thanks Alex. Odd that a repo was created with no tests.

I think the goal was to split out in-repo tempest tests, not to ensure that
every project has one whether they need it or not. I wonder if we should
"retire" this repo until it is actually needed.

I will propose a patch to the releases repo to drop the deliverable file at
least. That will keep it from showing up in our list of unreleased repos.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-23 Thread Mark Goddard
+1

On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen,  wrote:

> ++
>
>
> // jim
>
> On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger  > wrote:
>
>> Greetings everyone!
>>
>> In our team meeting this week we stumbled across the subject of
>> promoting contributors to be sub-project's core reviewers.
>> Traditionally it is something we've only addressed as needed or
>> desired by consensus within those sub-projects, but we were past due
>> for a look at the entire picture, since not everything should
>> fall to ironic-core.
>>
>> And so, I've taken a look at our various repositories and I'm
>> proposing the following additions:
>>
>> For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
>> Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
>> and virtualbmc this past cycle. I've found many of his reviews and
>> non-voting review comments insightful and demonstrating a willingness to understand. He
>> has taken on some of the effort that is needed to maintain and keep
>> these tools usable for the community, and as such adding him to the
>> core group for these repositories makes lots of sense.
>>
>> For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
>> Kaifeng has taken on some hard problems in ironic and
>> ironic-inspector, as well as brought up insightful feedback in
>> ironic-specs. They are demonstrating a solid understanding that I only
>> see growing as time goes on.
>>
>> For sushy-core: Debayan Ray[3]. Debayan has been involved with the
>> community for some time and has worked on sushy from early on in its
>> life. He has indicated it is near and dear to him, and he has been
>> actively reviewing and engaging in discussion on patchsets as his time
>> has permitted.
>>
>> With any addition it is good to look at inactivity as well. It saddens
>> me to say that we've had some contributors move on as priorities have
>> shifted to where they are no longer involved with the ironic
>> community. Each person listed below has been inactive for a year or
>> more and is no longer active in the ironic community. As such I've
>> removed their group membership from the sub-project core reviewer
>> groups. Should they return, we will welcome them back to the community
>> with open arms.
>>
>> bifrost-core: Stephanie Miller[4]
>> ironic-inspector-core: Anton Arefivev[5]
>> ironic-ui-core: Peter Peila[6], Beth Elwell[7]
>>
>> Thanks,
>>
>> -Julia
>>
>> [1]: http://stackalytics.com/?user_id=etingof=marks
>> [2]: http://stackalytics.com/?user_id=kaifeng=marks
>> [3]: http://stackalytics.com/?user_id=deray=marks=all
>> [4]:
>> http://stackalytics.com/?metric=marks=all_id=stephan
>> [5]: http://stackalytics.com/?user_id=aarefiev=marks
>> [6]: http://stackalytics.com/?metric=marks=all_id=ppiela
>> [7]:
>> http://stackalytics.com/?metric=marks=all_id=bethelwell=ironic-ui
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] need help triaging a vmware driver bug

2018-08-23 Thread melanie witt

On Fri, 17 Aug 2018 10:50:30 +0300, Radoslav Gerganov wrote:

Hi,

On 17.08.2018 04:10, melanie witt wrote:


Can anyone help triage this bug?



I have requested more info from the person who submitted this and provided some 
tips on how to correlate nova-compute logs with vCenter logs in order to better 
understand what went wrong.
Would it be possible to include this kind of information in the Launchpad bug 
template for VMware related bugs?


Thank you for your help, Rado.

So, I think we could add something to the launchpad bug template to link 
to a doc that explains tips about reporting VMware related bugs. I 
suggest linking to a doc because the bug template is already really long 
and looks like it would be best to have something short, like, "For tips 
on reporting VMware virt driver bugs, see this doc: " and provide 
a link to, for example, an OpenStack wiki page about the VMware virt driver 
(is there one?). The question is, where can we put the doc? Wiki? Or 
maybe here at the bottom [1]? Let me know what you think.


-melanie

[1] 
https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-vmware.html






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Ben Nemec



On 08/23/2018 12:25 PM, Doug Hellmann wrote:

Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:

Do you mean an actual fixture, that would be used like:

  class MyTestCase(testtools.TestCase):
  def setUp(self):
  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids

  def test_foo(self):
  do_a_thing_with(self.uuids.foo)

?

That's... okay I guess, but the refactoring necessary to cut over to it
will now entail adding 'self.' to every reference. Is there any way
around that?


That is what I had envisioned, yes.  In the absence of a global,
which we do not want, what other API would you propose?


If we put it in oslotest instead, would the global still be a problem? 
Especially since mock has already established a pattern for this 
functionality?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Alex Schultz
On Thu, Aug 23, 2018 at 10:12 AM, Sean McGinnis  wrote:
> This is the final countdown email for the Rocky development cycle. Thanks to
> everyone involved in the Rocky release!
>
> Development Focus
> -
>
> Teams attending the PTG should be preparing for those discussions and 
> capturing
> information in the etherpads:
>
> https://wiki.openstack.org/wiki/PTG/Stein/Etherpads
>
> General Information
> ---
>
> The release team plans on doing the final Rocky release on 29 August. We will
> re-tag the last commit used for the final RC using the final version number.
>
> If you have not already done so, now would be a good time to take a look at 
> the
> Stein schedule and start planning team activities:
>
> https://releases.openstack.org/stein/schedule.html
>
> Actions
> -
>
> PTLs and release liaisons should watch for the final release patch from the
> release team. While not required, we would appreciate having an ack from each
> team before we approve it on the 29th.
>
> We are still missing releases for the following tempest plugins. Some are
> pending getting pypi and release jobs set up, but please try to prioritize
> getting these done as soon as possible.
>
> barbican-tempest-plugin
> blazar-tempest-plugin
> cloudkitty-tempest-plugin
> congress-tempest-plugin
> ec2api-tempest-plugin
> magnum-tempest-plugin
> mistral-tempest-plugin
> monasca-kibana-plugin
> monasca-tempest-plugin
> murano-tempest-plugin
> networking-generic-switch-tempest-plugin
> oswin-tempest-plugin
> senlin-tempest-plugin
> telemetry-tempest-plugin
> tripleo-common-tempest-plugin

To speak for the tripleo-common-tempest-plugin, it's currently not
used and there aren't any tests, so I don't think it's in a spot for
its first release during Rocky. I'm not sure of the current status of
this effort, so it'll be something we'll need to raise at the PTG.

> trove-tempest-plugin
> watcher-tempest-plugin
> zaqar-tempest-plugin
>
> Upcoming Deadlines & Dates
> --
>
> Final RC deadline: August 23
> Rocky Release: August 29
> Cycle trailing RC deadline: August 30
> Stein PTG: September 10-14
> Cycle trailing Rocky release: November 28
>
> --
> Sean McGinnis (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-23 Thread Jim Rollenhagen
++


// jim

On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger 
wrote:

> Greetings everyone!
>
> In our team meeting this week we stumbled across the subject of
> promoting contributors to be sub-project's core reviewers.
> Traditionally it is something we've only addressed as needed or
> desired by consensus within those sub-projects, but we were past due
> for a look at the entire picture, since not everything should
> fall to ironic-core.
>
> And so, I've taken a look at our various repositories and I'm
> proposing the following additions:
>
> For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
> Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
> and virtualbmc this past cycle. I've found many of his reviews and
> non-voting review comments insightful and demonstrating a willingness to understand. He
> has taken on some of the effort that is needed to maintain and keep
> these tools usable for the community, and as such adding him to the
> core group for these repositories makes lots of sense.
>
> For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
> Kaifeng has taken on some hard problems in ironic and
> ironic-inspector, as well as brought up insightful feedback in
> ironic-specs. They are demonstrating a solid understanding that I only
> see growing as time goes on.
>
> For sushy-core: Debayan Ray[3]. Debayan has been involved with the
> community for some time and has worked on sushy from early on in its
> life. He has indicated it is near and dear to him, and he has been
> actively reviewing and engaging in discussion on patchsets as his time
> has permitted.
>
> With any addition it is good to look at inactivity as well. It saddens
> me to say that we've had some contributors move on as priorities have
> shifted to where they are no longer involved with the ironic
> community. Each person listed below has been inactive for a year or
> more and is no longer active in the ironic community. As such I've
> removed their group membership from the sub-project core reviewer
> groups. Should they return, we will welcome them back to the community
> with open arms.
>
> bifrost-core: Stephanie Miller[4]
> ironic-inspector-core: Anton Arefivev[5]
> ironic-ui-core: Peter Peila[6], Beth Elwell[7]
>
> Thanks,
>
> -Julia
>
> [1]: http://stackalytics.com/?user_id=etingof=marks
> [2]: http://stackalytics.com/?user_id=kaifeng=marks
> [3]: http://stackalytics.com/?user_id=deray=marks=all
> [4]: http://stackalytics.com/?metric=marks=all_id=stephan
> [5]: http://stackalytics.com/?user_id=aarefiev=marks
> [6]: http://stackalytics.com/?metric=marks=all_id=ppiela
> [7]: http://stackalytics.com/?metric=marks=all_
> id=bethelwell=ironic-ui
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky blueprint burndown chart

2018-08-23 Thread Matt Riedemann

On 8/15/2018 3:47 PM, melanie witt wrote:
I think part of the miss on the number of approvals might be because we 
extended the spec freeze date to milestone r-2 because of runways, 
thinking that if we completed enough things, we could approve more 
things. We didn't predict that accurately but with the experience, my 
hope is we can do better in Stein. We could consider moving spec freeze 
back to milestone s-1 or have rough criteria on whether to approve more 
blueprints close to s-2 (for example, if 30%? of approved blueprints 
have been completed, OK to approve more).


If you have feedback or thoughts on any of this, feel free to reply to 
this thread or add your comments to the Rocky retrospective etherpad [4] 
and we can discuss at the PTG.


The completion percentage was about the same as Queens, which is good to 
know. And I think is good at around 80%. Some things get deferred not 
because of a lack of reviewer attention but because the contributor 
stalled out or had higher priority work to complete.


We approved more stuff in Rocky because we had more time to approve 
stuff (spec freeze in Queens was the first milestone, it was the second 
milestone in Rocky).


So with completion rates about the same but with more stuff 
approved/completed in Rocky, what is the difference? From a relatively 
intangible / gut feeling standpoint, I would say one answer is in Queens 
we had a pretty stable, issue free release period but I can't say that 
is the same for Rocky where we're down to the wire getting stuff done 
for our third release candidate on the final day for release candidates. 
So it stands to reason that the earlier we cut approvals on new 
stuff and the more burn-in time we have for what we do complete, the 
smoother the release at the end. That's not really rocket science, it's 
common sense. So I think going back to spec freeze on s-1 is likely a 
good idea in Stein now that we know how runways went. We can always make 
exceptions for high priority stuff if needed after s-1, like we did with 
reshaper in Rocky (even though we didn't get it done).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Chris Dent

On Thu, 23 Aug 2018, Dan Smith wrote:


...and it doesn't work like mock.sentinel does, which is part of the
value. I really think we should put this wherever it needs to be so that
it can continue to be as useful as it is today. Even if that means just
copying it into another project -- it's not that complicated of a thing.


Yeah, I agree. I had hoped that we could make something that was
generally useful, but its main value is its interface and if we
can't have that interface in a library, having it per codebase is no
biggie. For example it's been copied straight from nova into the
placement extractions experiments with no changes and, as one would
expect, works just fine.

Unless people are wed to doing something else, Dan's right, let's
just do that.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Dan Smith
> The compromise, using the patch as currently written [1], would entail
> adding one line at the top of each test file:
>
>  uuids = uuidsentinel.UUIDSentinels()
>
> ...as seen (more or less) at [2]. The subtle difference being that this
> `uuids` wouldn't share a namespace across the whole process, only within
> that file. Given current usage, that shouldn't cause a problem, but it's
> a change.

...and it doesn't work like mock.sentinel does, which is part of the
value. I really think we should put this wherever it needs to be so that
it can continue to be as useful as it is today. Even if that means just
copying it into another project -- it's not that complicated of a thing.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron-fwaas 13.0.0.0rc2 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for neutron-fwaas for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron-fwaas/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/rocky

Release notes for neutron-fwaas can be found at:

https://docs.openstack.org/releasenotes/neutron-fwaas/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-08-23 Thread Slawomir Kaplonski
Hi Miguel,

I’m not sure, but maybe you were looking for these patches:

https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/graphql


> Wiadomość napisana przez Miguel Lavalle  w dniu 
> 23.08.2018, o godz. 18:57:
> 
> Hi Gilles,
> 
> Ed pinged me earlier today in IRC in regards to this topic. After reading 
> your message, I assumed that you had patches up for review in Gerrit. I 
> looked for them, with the intent to list them in the agenda of the next 
> Neutron team meeting, to draw attention to them. I couldn't find any, though: 
> https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22
> 
> So, how can we help? This is our meetings schedule: 
> http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are Down 
> Under at UTC+10, the most convenient meeting for you is the one on Monday 
> (even weeks), which would be Tuesday at 7am for you. Please note that we have 
> an on demand section in our agenda: 
> https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel free 
> to add topics in that section when you have something to discuss with the 
> Neutron team.
> 
> Best regards
> 
> Miguel
> 
> On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil  wrote:
> 
> 
> On 25/07/18 23:48, Ed Leafe wrote:
> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil  wrote:
> The branch is now available under feature/graphql on the neutron core 
> repository [1].
> I wanted to follow up with you on this effort. I haven’t seen any activity on 
> StoryBoard for several weeks now, and wanted to be sure that there was 
> nothing blocking you that we could help with.
> 
> 
> -- Ed Leafe
> 
> 
> 
> Hi Ed,
> 
> Thanks for following up.
> 
> There have been two essential counterproductive factors affecting the effort.
> 
> The first is that I've been busy attending to issues in other parts of my job.
> The second one is the lack of response/follow-up from the Neutron core team.
> 
> We have all the plumbing in place but we need to layer the data through oslo 
> policies.
> 
> Cheers,
> Gilles
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Eric Fried
The compromise, using the patch as currently written [1], would entail
adding one line at the top of each test file:

 uuids = uuidsentinel.UUIDSentinels()

...as seen (more or less) at [2]. The subtle difference being that this
`uuids` wouldn't share a namespace across the whole process, only within
that file. Given current usage, that shouldn't cause a problem, but it's
a change.
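
For reference, a minimal sketch of what such a per-file sentinel object could
look like; the actual patch [1] may differ in details such as locking and
attribute validation.

    import threading
    import uuid


    class UUIDSentinels(object):
        """Return a stable, lazily generated UUID string per attribute name."""

        def __init__(self):
            self._sentinels = {}
            self._lock = threading.Lock()

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError(name)
            with self._lock:
                if name not in self._sentinels:
                    self._sentinels[name] = str(uuid.uuid4())
                return self._sentinels[name]


    # File-scoped usage, as described above:
    uuids = UUIDSentinels()
    assert uuids.foo == uuids.foo   # stable within this file's namespace
    assert uuids.foo != uuids.bar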

-efried

[1] https://review.openstack.org/#/c/594068/9
[2]
https://review.openstack.org/#/c/594068/9/oslotest/tests/unit/test_uuidsentinel.py@22

On 08/23/2018 12:41 PM, Jay Pipes wrote:
> On 08/23/2018 01:25 PM, Doug Hellmann wrote:
>> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:
>>> Do you mean an actual fixture, that would be used like:
>>>
>>>   class MyTestCase(testtools.TestCase):
>>>   def setUp(self):
>>>   self.uuids =
>>> self.useFixture(oslofx.UUIDSentinelFixture()).uuids
>>>
>>>   def test_foo(self):
>>>   do_a_thing_with(self.uuids.foo)
>>>
>>> ?
>>>
>>> That's... okay I guess, but the refactoring necessary to cut over to it
>>> will now entail adding 'self.' to every reference. Is there any way
>>> around that?
>>
>> That is what I had envisioned, yes.  In the absence of a global,
>> which we do not want, what other API would you propose?
> 
> As dansmith mentioned, the niceness and simplicity of being able to do:
> 
>  import nova.tests.uuidsentinel as uuids
> 
>  ..
> 
>  def test_something(self):
>  my_uuid = uuids.instance1
> 
> is remarkably powerful and is something I would want to keep.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-23 Thread Julia Kreger
Greetings everyone!

In our team meeting this week we stumbled across the subject of
promoting contributors to be sub-project's core reviewers.
Traditionally it is something we've only addressed as needed or
desired by consensus within those sub-projects, but we were past due
for a look at the entire picture, since not everything should
fall to ironic-core.

And so, I've taken a look at our various repositories and I'm
proposing the following additions:

For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
and virtualbmc this past cycle. I've found many of his reviews and
non-voting review comments insightful and demonstrating a willingness to understand. He
has taken on some of the effort that is needed to maintain and keep
these tools usable for the community, and as such adding him to the
core group for these repositories makes lots of sense.

For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
Kaifeng has taken on some hard problems in ironic and
ironic-inspector, as well as brought up insightful feedback in
ironic-specs. They are demonstrating a solid understanding that I only
see growing as time goes on.

For sushy-core: Debayan Ray[3]. Debayan has been involved with the
community for some time and has worked on sushy from early on in its
life. He has indicated it is near and dear to him, and he has been
actively reviewing and engaging in discussion on patchsets as his time
has permitted.

With any addition it is good to look at inactivity as well. It saddens
me to say that we've had some contributors move on as priorities have
shifted to where they are no longer involved with the ironic
community. Each person listed below has been inactive for a year or
more and is no longer active in the ironic community. As such I've
removed their group membership from the sub-project core reviewer
groups. Should they return, we will welcome them back to the community
with open arms.

bifrost-core: Stephanie Miller[4]
ironic-inspector-core: Anton Arefivev[5]
ironic-ui-core: Peter Peila[6], Beth Elwell[7]

Thanks,

-Julia

[1]: http://stackalytics.com/?user_id=etingof=marks
[2]: http://stackalytics.com/?user_id=kaifeng=marks
[3]: http://stackalytics.com/?user_id=deray=marks=all
[4]: http://stackalytics.com/?metric=marks=all_id=stephan
[5]: http://stackalytics.com/?user_id=aarefiev=marks
[6]: http://stackalytics.com/?metric=marks=all_id=ppiela
[7]: 
http://stackalytics.com/?metric=marks=all_id=bethelwell=ironic-ui

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
> >
> > We are still missing releases for the following tempest plugins. Some are
> > pending getting pypi and release jobs set up, but please try to prioritize
> > getting these done as soon as possible.
> >
> > barbican-tempest-plugin
> > blazar-tempest-plugin
> > cloudkitty-tempest-plugin
> > congress-tempest-plugin
> > ec2api-tempest-plugin
> > magnum-tempest-plugin
> > mistral-tempest-plugin
> > monasca-kibana-plugin
> > monasca-tempest-plugin
> > murano-tempest-plugin
> > networking-generic-switch-tempest-plugin
> > oswin-tempest-plugin
> > senlin-tempest-plugin
> > telemetry-tempest-plugin
> > tripleo-common-tempest-plugin
> > trove-tempest-plugin
> > watcher-tempest-plugin
> > zaqar-tempest-plugin
> 
> tempest-horizon is missing from the list. The horizon team needs to
> release tempest-horizon.
> It does not follow the naming convention, so it seems to have been missed from the 
> list.
> 
> Thanks,
> Akihiro Motoki (amotoki)
> 

Ah, good catch Akihiro, thanks!

If it can be done quickly, before the release might be a good time to
update the package name to match the convention used elsewhere. But we are
running short on time, and there's probably more involved in doing that than
just updating the package name.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Jay Pipes

On 08/23/2018 01:25 PM, Doug Hellmann wrote:

Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:

Do you mean an actual fixture, that would be used like:

  class MyTestCase(testtools.TestCase):
  def setUp(self):
  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids

  def test_foo(self):
  do_a_thing_with(self.uuids.foo)

?

That's... okay I guess, but the refactoring necessary to cut over to it
will now entail adding 'self.' to every reference. Is there any way
around that?


That is what I had envisioned, yes.  In the absence of a global,
which we do not want, what other API would you propose?


As dansmith mentioned, the niceness and simplicity of being able to do:

 import nova.tests.uuidsentinel as uuids

 ..

 def test_something(self):
 my_uuid = uuids.instance1

is remarkably powerful and is something I would want to keep.
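
For context, here is a minimal sketch of the module-import trick being
referred to; nova's real uuidsentinel differs in details, this just shows why
a plain "import ... as uuids" can behave like mock.sentinel.

    # uuidsentinel_sketch.py -- minimal sketch, not nova's actual implementation.
    import sys
    import uuid


    class _UUIDSentinelModule(object):
        """Generate a stable UUID string for every attribute name requested."""

        def __init__(self):
            self._registry = {}

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError(name)
            return self._registry.setdefault(name, str(uuid.uuid4()))


    # Replacing the module object in sys.modules is what makes
    # "import uuidsentinel_sketch as uuids; uuids.instance1" work process-wide.
    sys.modules[__name__] = _UUIDSentinelModule()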

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Doug Hellmann
Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:
> Do you mean an actual fixture, that would be used like:
> 
>  class MyTestCase(testtools.TestCase):
>  def setUp(self):
>  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids
> 
>  def test_foo(self):
>  do_a_thing_with(self.uuids.foo)
> 
> ?
> 
> That's... okay I guess, but the refactoring necessary to cut over to it
> will now entail adding 'self.' to every reference. Is there any way
> around that?

That is what I had envisioned, yes.  In the absence of a global,
which we do not want, what other API would you propose?

Doug

> 
> efried
> 
> On 08/23/2018 07:40 AM, Jay Pipes wrote:
> > On 08/23/2018 08:06 AM, Doug Hellmann wrote:
> >> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38
> >> -0400:
> >>> Where exactly Eric? I can't seem to find the import:
> >>>
> >>> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest=nope==oslo.utils
> >>>
> >>>
> >>> -- dims
> >>
> >> oslo.utils depends on oslotest via test-requirements.txt and oslotest is
> >> used within the test modules in oslo.utils.
> >>
> >> As I've said on both reviews, I think we do not want a global
> >> singleton instance of this sentinal class. We do want a formal test
> >> fixture.  Either library can export a test fixture and olso.utils
> >> already has oslo_utils.fixture.TimeFixture so there's precedent to
> >> adding it there, so I have a slight preference for just doing that.
> >>
> >> That said, oslo_utils.uuidutils.generate_uuid() is simply returning
> >> str(uuid.uuid4()). We have it wrapped up as a function so we can
> >> mock it out in other tests, but we hardly need to rely on that if
> >> we're making a test fixture for oslotest.
> >>
> >> My vote is to add a new fixture class to oslo_utils.fixture.
> > 
> > OK, thanks for the helpful explanation, Doug. Works for me.
> > 
> > -jay
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-23 Thread Matt Riedemann

On 8/23/2018 4:00 AM, Thierry Carrez wrote:
In the OpenStack governance model, contributors to a given piece of code 
control its destiny.


This is pretty damn fuzzy. So if someone wants to split out nova-compute 
into a new repo/project/governance with a REST API and all that, 
nova-core has no say in the matter?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-23 Thread Matt Riedemann

On 8/22/2018 1:25 PM, Jeremy Stanley wrote:

On 2018-08-22 11:03:43 -0700 (-0700), melanie witt wrote:
[...]

I think it's about context. If two separate projects do their own priority
and goal setting, separately, I think they will naturally be more different
than they would be if they were one project. Currently, we agree on goals
and priorities together, in the compute context. If placement has its own
separate context, the priority setting and goal planning will be done in the
context of placement. In two separate groups, someone who is a member of
both the Nova and Placement teams would have to persuade Placement-only
members to agree to prioritize a particular item. This may sound subtle, but
it's a notable difference in how things work when it's one team vs two
separate teams. I think having shared context and alignment, at this point
in time, when we have outstanding closely coupled nova/placement work to do,
is critical in delivering for operators and users who are depending on us.

[...]

I'm clearly missing some critical detail about the relationships in
the Nova team. Don't the Nova+Placement contributors already have to
convince the Placement-only contributors what to prioritize working
on? 


Yes. But it's not a huge gun to the head kind of situation. It's more 
like, "We (nova) need X (in Placement) otherwise we can't get to Y." 
There are people that clearly work more on placement than the rest of 
nova (Chris and Tetsuro come to mind). So what normally happens is 
Chris, or Eric, or Jay, or someone will work on the Placement side stuff 
and we'll be stacking the nova-side client bits on top. That's exactly 
how [1] worked. Chris did the placement stuff that Dan needed to do the 
nova stuff. For [2] Chris and Eric are both working on the placement 
stuff and Eric has done the framework stuff in nova for the virt drivers 
to interface with.


Despite what is coming up in the ML thread and the tc channel, I myself 
am not seeing a horde of feature requests breaking down the door and 
being ignored/rejected because they are placement-only things that nova 
doesn't itself need. Cyborg is probably as close to consuming/using 
placement as we have outside of nova. Apparently blazar and zun have 
thought about using placement, but I'm not aware of anything more than 
talk so far. If those projects (or other people) "feel" like their 
requests will be rejected because the mean old nova monsters don't like 
non-nova things, then I would say that feeling is unjustified until the 
specific technical feature requests are brought up.



Or are you saying that if they disagree that's fine because the
Nova+Placement contributors will get along just fine without the
Placement-only contributors helping them get it done?


It's a mixed team for the most part. As I said, Jay and Eric work on 
both nova and placement. Chris and Tetsuro are mostly Placement but the 
work they are doing is to enable things that nova needs. I would not say 
"get along just fine". The technical/talent gap would be felt, which is 
true of losing any strong contributors to a piece of a project - that's 
true of any time someone leaves the community, whether on their own 
choosing (e.g. danpb/sdague) or not (e.g. alaski/johnthetubaguy).


[1] 
https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-23 Thread Gorka Eguileor
On 23/08, Dan Smith wrote:
> > I think Nova should never have to rely on Cinder's hosts/backends
> > information to do migrations or any other operation.
> >
> > In this case even if Nova had that info, it wouldn't be the solution.
> > Cinder would reject migrations if there's an incompatibility on the
> > Volume Type (AZ, Referenced backend, capabilities...)
>
> I think I'm missing a bunch of cinder knowledge required to fully grok
> this situation and probably need to do some reading. Is there some
> reason that a volume type can't exist in multiple backends or something?
> I guess I think of volume type as flavor, and the same definition in two
> places would be interchangeable -- is that not the case?
>

Hi,

I just know the basics of flavors, and they are kind of similar, though
I'm sure there are quite a few differences.

Sure, multiple storage arrays can meet the requirements of a Volume
Type, but then when you create the volume you don't know where it's
going to land. If your volume type is too generic, your volume could land
somewhere your cell cannot reach.


> > I don't know anything about Nova cells, so I don't know the specifics of
> > how we could do the mapping between them and Cinder backends, but
> > considering the limited range of possibilities in Cinder I would say we
> > only have Volume Types and AZs to work a solution.
>
> I think the only mapping we need is affinity or distance. The point of
> needing to migrate the volume would purely be because moving cells
> likely means you moved physically farther away from where you were,
> potentially with different storage connections and networking. It
> doesn't *have* to mean that, but I think in reality it would. So the
> question I think Matt is looking to answer here is "how do we move an
> instance from a DC in building A to building C and make sure the
> volume gets moved to some storage local in the new building so we're
> not just transiting back to the original home for no reason?"
>
> Does that explanation help or are you saying that's fundamentally hard
> to do/orchestrate?
>
> Fundamentally, the cells thing doesn't even need to be part of the
> discussion, as the same rules would apply if we're just doing a normal
> migration but need to make sure that storage remains affined to compute.
>

We could probably work something out using the affinity filter, but
right now we don't have a way of doing what you need.

We could probably rework the migration to accept scheduler hints to be
used with the affinity filter, and to accept calls with either the host or
the hints; that way it could migrate a volume without knowing the
destination host and decide it based on affinity.

We may have to do more modifications, but it could be a way to do it.



> > I don't know how the Nova Placement works, but it could hold an
> > equivalency mapping of volume types to cells as in:
> >
> >  Cell#1      Cell#2
> >
> > VolTypeA <--> VolTypeD
> > VolTypeB <--> VolTypeE
> > VolTypeC <--> VolTypeF
> >
> > Then it could do volume retypes (allowing migration) and that would
> > properly move the volumes from one backend to another.
>
> The only way I can think that we could do this in placement would be if
> volume types were resource providers and we assigned them traits that
> had special meaning to nova indicating equivalence. Several of the words
> in that sentence are likely to freak out placement people, myself
> included :)
>
> So is the concern just that we need to know what volume types in one
> backend map to those in another so that when we do the migration we know
> what to ask for? Is "they are the same name" not enough? Going back to
> the flavor analogy, you could kinda compare two flavor definitions and
> have a good idea if they're equivalent or not...
>
> --Dan

In Cinder you don't get that from Volume Types, unless all your backends
have the same hardware and are configured exactly the same.

There can be some storage specific information there, which doesn't
correlate to anything on other hardware.  Volume types may refer to a
specific pool that has been configured in the array to use a specific type
of disk.  But even the info on the type of disks is unknown to the
volume type.
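
To make the equivalency-mapping idea quoted above concrete, here is a purely
illustrative sketch; the map would have to come from the deployer (or some new
Cinder/Nova configuration), since the volume types themselves don't carry
enough information to establish equivalence. All names are hypothetical.

    # Purely illustrative; the names and the data source are hypothetical.
    VOLUME_TYPE_EQUIVALENCE = {
        # (source_cell, source_volume_type): {target_cell: target_volume_type}
        ('cell1', 'VolTypeA'): {'cell2': 'VolTypeD'},
        ('cell1', 'VolTypeB'): {'cell2': 'VolTypeE'},
        ('cell1', 'VolTypeC'): {'cell2': 'VolTypeF'},
    }


    def equivalent_volume_type(source_cell, volume_type, target_cell):
        """Return the volume type to retype to in the target cell, if known."""
        targets = VOLUME_TYPE_EQUIVALENCE.get((source_cell, volume_type), {})
        return targets.get(target_cell)   # None means "no known equivalent"


    print(equivalent_volume_type('cell1', 'VolTypeB', 'cell2'))   # VolTypeE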

I haven't checked the PTG agenda yet, but is there a meeting on this?
Because we may want to have one to try to understand the requirements
and figure out if there's a way to do it with current Cinder
functionality or if we'd need something new.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Dan Smith
> Do you mean an actual fixture, that would be used like:
>
>  class MyTestCase(testtools.TestCase):
>  def setUp(self):
>  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids
>
>  def test_foo(self):
>  do_a_thing_with(self.uuids.foo)
>
> ?
>
> That's... okay I guess, but the refactoring necessary to cut over to it
> will now entail adding 'self.' to every reference. Is there any way
> around that?

I don't think it's okay. It makes it a lot more work to use it, where
merely importing it (exactly like mock.sentinel) is a large factor in
how incredibly convenient it is.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Dmitry Tantsur

On 08/17/2018 07:45 AM, Cédric Jeanneret wrote:



On 08/17/2018 12:25 AM, Steve Baker wrote:



On 15/08/18 21:32, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as a replacement for Docker is starting.

One of the issues with podman is the lack of a daemon, more precisely the lack
of a socket allowing us to send commands and get "computer formatted
output" (like JSON or YAML or...).

In order to work that out, Podman has added support for varlink¹, using
the "socket activation" feature in Systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)

I'm not sure this would be desirable. If we're going to all container
management via a socket I think we'd be better supported by using CRI-O.
One of the advantages I see of podman is being able to manage services
with systemd again.


Using the socket wouldn't prevent a "per service" systemd unit. Varlink
would just provide another way to manage the containers.
It's NOT like the docker daemon - it will not manage the containers on
startup for example. It's just an API endpoint, without any "automated
powers".

See it as an interesting complement to the CLI, allowing access to
container data easily from a computer-oriented language like python3.


# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume

# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure access to Podman management (we have to define
how varlink talks to Podman, maybe providing a dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)

Some of these cases might prove to be useful, but I do wonder if just
making podman calls would be just as simple without the complexity of
having another host-level service to manage. We can still do podman
operations inside containers by bind-mounting in the container state.


I wouldn't mount the container state as-is, mainly for security reasons.
I'd rather use the varlink abstraction than the plain `podman'
CLI - in addition, it is far, far easier for applications to get
proper JSON instead of some random plain text - even if `podman' seems
to have a "--format" option. I really dislike calling "subprocess" things
when there is a nice API interface - maybe that's just me ;).
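
As a rough illustration of the kind of structured output this buys us (a
sketch only: the socket path and the io.podman interface/method names follow
the projectatomic varlink blog post linked at the bottom of this thread and
may differ between podman versions; it shells out to the varlink CLI purely
for brevity, where a real integration would use the Python varlink bindings):

    # Rough sketch: ask the podman varlink socket for container data and get
    # JSON back. Socket path and interface/method names are assumptions taken
    # from the projectatomic varlink blog post and may vary per podman version.
    import json
    import subprocess

    PODMAN_ADDRESS = "unix:/run/podman/io.podman"


    def list_containers():
        output = subprocess.check_output(
            ["varlink", "call", PODMAN_ADDRESS + "/io.podman.ListContainers"])
        return json.loads(output)


    if __name__ == "__main__":
        print(json.dumps(list_containers(), indent=2))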

In addition, apparently the state is managed by some sqlite DB -
concurrent accesses to that DB aren't really a good idea; we really don't
want a corruption, do we?


IIRC sqlite handles concurrent accesses, it just does them slowly.
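
For what it's worth, "slowly" here mostly means writers serialize on the
database lock; a generic sqlite3 sketch of the knobs involved (nothing
podman-specific, just standard library behaviour):

    # Generic sqlite locking illustration, unrelated to podman's actual schema.
    import sqlite3

    # A connection that hits another writer's lock waits up to `timeout`
    # seconds before raising "database is locked".
    conn = sqlite3.connect("example.db", timeout=5.0)
    conn.execute("CREATE TABLE IF NOT EXISTS state (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("PRAGMA journal_mode=WAL")  # WAL lets readers proceed during a write
    with conn:  # the transaction takes the write lock; committed/released on exit
        conn.execute("INSERT OR REPLACE INTO state (k, v) VALUES (?, ?)", ("a", "1"))
    conn.close()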






That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?

I do worry a bit that it is advocating for a solution before we really
understand the problems. The biggest unknown for me is what we do about
healthchecks. Maybe varlink is part of the solution here, or maybe it's a
systemd timer which executes the healthcheck and restarts the service
when required.


Maybe. My main concern is: would it be interesting to compare both
solutions?
The healthchecks are clearly docker-specific; no interface exists atm in
libpod for that, so we have to mimic it as best we can.
Maybe the healthchecks' place is in systemd, and varlink would be used
only for external monitoring and metrics. That would also be a nice avenue
to explore.

I would not focus on only one of the possibilities I've listed. There
are probably even more possibilities I didn't see - once we get a proper
socket, anything is possible, the good and the bad ;).


Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [all][api] POST /api-sig/news

2018-08-23 Thread Michael McCune
Greetings OpenStack community,

This week's meeting brings the return of the full SIG core-quartet as
all core members were in attendance. The main topics were the agenda
[7] for the upcoming Denver PTG [8], and the API-SIG still being
listed as a TC working group in the governance repository reference
files. We also pushed a minor technical change related to the
reorganization of the project-config for the upcoming Python 3
transition [9].

On the topic of the PTG, there were no new items added or comments
about the current list [7]. There was brief talk about who will be
attending the gathering, but the details have not been finalized yet.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Add an api-design doc with design advice
  https://review.openstack.org/592003

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://etherpad.openstack.org/p/api-sig-stein-ptg
[8] https://www.openstack.org/ptg/
[9] https://review.openstack.org/#/c/593943/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-08-23 Thread Miguel Lavalle
Hi Gilles,

Ed pinged me earlier today in IRC in regards to this topic. After reading
your message, I assumed that you had patches up for review in Gerrit. I
looked for them, with the intent to list them in the agenda of the next
Neutron team meeting, to draw attention to them. I couldn't find any,
though:
https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22

So, how can we help? This is our meetings schedule:
http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you are
Down Under at UTC+10, the most convenient meeting for you is the one on
Monday (even weeks), which would be Tuesday at 7am for you. Please note
that we have an on demand section in our agenda:
https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel
free to add topics in that section when you have something to discuss with
the Neutron team.

Best regards

Miguel

On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil 
wrote:

>
>
> On 25/07/18 23:48, Ed Leafe wrote:
>
>> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil  wrote:
>>
>>> The branch is now available under feature/graphql on the neutron core
>>> repository [1].
>>>
>> I wanted to follow up with you on this effort. I haven’t seen any
>> activity on StoryBoard for several weeks now, and wanted to be sure that
>> there was nothing blocking you that we could help with.
>>
>>
>> -- Ed Leafe
>>
>>
>>
> Hi Ed,
>
> Thanks for following up.
>
> There have been two essential counterproductive factors to the effort.
>
> The first is that I've been busy attending to issues in other parts of my job.
> The second one is the lack of response/follow-up from the Neutron core
> team.
>
> We have all the plumbing in place but we need to layer the data through
> oslo policies.
>
> Cheers,
> Gilles
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin] Senlin Weekly Meeting Time Change

2018-08-23 Thread Duc Truong
Thanks everyone for replying. Since there were no objections, we will
move to the new meeting time. Our first meeting will be this week on
Friday August 24 at 5:30 UTC.

The meeting agenda has been posted:
https://wiki.openstack.org/wiki/Meetings/SenlinAgenda#Agenda_.282018-08-24_0530_UTC.29

Feel free to add any items you want to discuss.

Looking forward to seeing everyone at the meeting.

Duc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Bogdan Dobrelya

On 8/23/18 6:36 PM, Fox, Kevin M wrote:

Or use kubelet in standalone mode. It can be configured for either CRI-O or 
Docker. You can drive the static manifests from heat/ansible per host as normal 
and it would be a step in the greater direction of getting to Kubernetes 
without needing the whole thing at once, if that is the goal.


I like the idea of adopting k8s components early and deprecating paunch!
It's just that time has shown the plans for k8s integration in tripleo
look too distant now, and we need a solution today...




Thanks,
Kevin

From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, August 23, 2018 9:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Question. Rather than writing a middle layer to abstract both container 
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there 
is support already for Docker as well.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:

On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:


On 08/15/2018 04:01 PM, Emilien Macchi wrote:

On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

  More seriously here: there is an ongoing effort to converge the
  tools around containerization within Red Hat, and we, TripleO are
  interested to continue the containerization of our services (which
  was initially done with Docker & Docker-Distribution).
  We're looking at how these containers could be managed by k8s one
  day but way before that we plan to swap out Docker and join CRI-O
  efforts, which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained way better here:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our
tooling to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run
multiple container backends, and we're currently looking outside of
Docker to see how we could solve our current challenges with the new tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO have been facing
since we containerized our services.

We're doing all of this in the open, so feel free to ask any question.


I appreciate your response, Emilien, thank you. Alex' responses to
Jeremy on the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red Hat
centric. Which makes sense to me, considering Triple-O is a Red Hat product.


Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro specific implementation
details. (Kind of what I view podman as in the scope of this thread).



I don't know how much of the current reinvention of container runtimes
and various tooling around containers is the result of politics. I don't
know how much is the result of certain companies wanting to "own" the
container stack from top to bottom. Or how much is a result of technical
disagreements that simply cannot (or will not) be resolved among
contributors in the container development ecosystem.

Or is it some combination of the above? I don't know.

What I *do* know is that the current "NIH du jour" mentality currently
playing itself out in the container ecosystem -- reminding me very much
of the Javascript ecosystem -- makes it difficult for any potential
*consumers* of container libraries, runtimes or applications to be
confident that any choice they make towards one of the other will be the
*right* choice or even a *possible* choice next year -- or next week.
Perhaps this is why things like openstack/paunch exist -- to give you
options if something doesn't pan out.


This is exactly why paunch exists.

Re, the podman thing I look at it as an implementation detail. The
good news is that given it is almost a parity replacement for what we
already use we'll still contribute to the OpenStack community in
similar ways. Ultimately whether you 

Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Fox, Kevin M
Or use kubelet in standalone mode. It can be configured for either CRI-O or 
Docker. You can drive the static manifests from heat/ansible per host as normal 
and it would be a step in the greater direction of getting to Kubernetes 
without needing the whole thing at once, if that is the goal.

Thanks,
Kevin

From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, August 23, 2018 9:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Question. Rather than writing a middle layer to abstract both container 
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there 
is support already for Docker as well.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:
> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>>
>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:
>>>
>>>  More seriously here: there is an ongoing effort to converge the
>>>  tools around containerization within Red Hat, and we, TripleO are
>>>  interested to continue the containerization of our services (which
>>>  was initially done with Docker & Docker-Distribution).
>>>  We're looking at how these containers could be managed by k8s one
>>>  day but way before that we plan to swap out Docker and join CRI-O
>>>  efforts, which seem to be using Podman + Buildah (among other things).
>>>
>>> I guess my wording wasn't the best but Alex explained way better here:
>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>>
>>> If I may have a chance to rephrase, I guess our current intention is to
>>> continue our containerization and investigate how we can improve our
>>> tooling to better orchestrate the containers.
>>> We have a nice interface (openstack/paunch) that allows us to run
>>> multiple container backends, and we're currently looking outside of
>>> Docker to see how we could solve our current challenges with the new tools.
>>> We're looking at CRI-O because it happens to be a project with a great
>>> community, focusing on some problems that we, TripleO have been facing
>>> since we containerized our services.
>>>
>>> We're doing all of this in the open, so feel free to ask any question.
>>
>> I appreciate your response, Emilien, thank you. Alex' responses to
>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>
>> For now, it *seems* to me that all of the chosen tooling is very Red Hat
>> centric. Which makes sense to me, considering Triple-O is a Red Hat product.
>
> Perhaps a slight clarification here is needed. "Director" is a Red Hat
> product. TripleO is an upstream project that is now largely driven by
> Red Hat and is today marked as single vendor. We welcome others to
> contribute to the project upstream just like anybody else.
>
> And for those who don't know the history the TripleO project was once
> multi-vendor as well. So a lot of the abstractions we have in place
> could easily be extended to support distro specific implementation
> details. (Kind of what I view podman as in the scope of this thread).
>
>>
>> I don't know how much of the current reinvention of container runtimes
>> and various tooling around containers is the result of politics. I don't
>> know how much is the result of certain companies wanting to "own" the
>> container stack from top to bottom. Or how much is a result of technical
>> disagreements that simply cannot (or will not) be resolved among
>> contributors in the container development ecosystem.
>>
>> Or is it some combination of the above? I don't know.
>>
>> What I *do* know is that the current "NIH du jour" mentality currently
>> playing itself out in the container ecosystem -- reminding me very much
>> of the Javascript ecosystem -- makes it difficult for any potential
>> *consumers* of container libraries, runtimes or applications to be
>> confident that any choice they make towards one of the other will be the
>> *right* choice or even a *possible* choice next year -- or next week.
>> Perhaps this is why things like openstack/paunch exist -- to give you
>> options if something doesn't pan out.
>
> This is exactly why paunch exists.
>
> Re, the podman thing I look at it as an implementation detail. The
> good news is that given it is almost a parity replacement for what we
> already use we'll still contribute to the OpenStack community in
> similar ways. Ultimately whether you run 'docker run' or 'podman run'
> you end up with the 

Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Akihiro Motoki
On Fri, Aug 24, 2018 at 1:12, Sean McGinnis  wrote:
>
> This is the final countdown email for the Rocky development cycle. Thanks to
> everyone involved in the Rocky release!
>
> Development Focus
> -
>
> Teams attending the PTG should be preparing for those discussions and 
> capturing
> information in the etherpads:
>
> https://wiki.openstack.org/wiki/PTG/Stein/Etherpads
>
> General Information
> ---
>
> The release team plans on doing the final Rocky release on 29 August. We will
> re-tag the last commit used for the final RC using the final version number.
>
> If you have not already done so, now would be a good time to take a look at 
> the
> Stein schedule and start planning team activities:
>
> https://releases.openstack.org/stein/schedule.html
>
> Actions
> -
>
> PTLs and release liaisons should watch for the final release patch from the
> release team. While not required, we would appreciate having an ack from each
> team before we approve it on the 29th.
>
> We are still missing releases for the following tempest plugins. Some are
> pending getting pypi and release jobs set up, but please try to prioritize
> getting these done as soon as possible.
>
> barbican-tempest-plugin
> blazar-tempest-plugin
> cloudkitty-tempest-plugin
> congress-tempest-plugin
> ec2api-tempest-plugin
> magnum-tempest-plugin
> mistral-tempest-plugin
> monasca-kibana-plugin
> monasca-tempest-plugin
> murano-tempest-plugin
> networking-generic-switch-tempest-plugin
> oswin-tempest-plugin
> senlin-tempest-plugin
> telemetry-tempest-plugin
> tripleo-common-tempest-plugin
> trove-tempest-plugin
> watcher-tempest-plugin
> zaqar-tempest-plugin

tempest-horizon is missing from the list. The horizon team needs to
release tempest-horizon.
It does not follow the naming convention, so it seems to have been missed from the list.

Thanks,
Akihiro Motoki (amotoki)

>
> Upcoming Deadlines & Dates
> --
>
> Final RC deadline: August 23
> Rocky Release: August 29
> Cycle trailing RC deadline: August 30
> Stein PTG: September 10-14
> Cycle trailing Rocky release: November 28
>
> --
> Sean McGinnis (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Bogdan Dobrelya

On 8/23/18 6:22 PM, Fox, Kevin M wrote:

Question. Rather than writing a middle layer to abstract both container 
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there 
is support already for Docker as well.


I may be messing up abstraction levels, but IMO when it's time to 
support CRI-O as well, paunch should handle that just like docker or 
podman. So nothing changes in the moving layers of tripleo components.
It's nice that CRI-O also supports docker and other runtimes, but I'm not 
sure we want anything in tripleo's moving parts to become bound to docker, 
podman, or CRI-O.




Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:

On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:


On 08/15/2018 04:01 PM, Emilien Macchi wrote:

On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

  More seriously here: there is an ongoing effort to converge the
  tools around containerization within Red Hat, and we, TripleO are
  interested to continue the containerization of our services (which
  was initially done with Docker & Docker-Distribution).
  We're looking at how these containers could be managed by k8s one
  day but way before that we plan to swap out Docker and join CRI-O
  efforts, which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained way better here:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our
tooling to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run
multiple container backends, and we're currently looking outside of
Docker to see how we could solve our current challenges with the new tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO have been facing
since we containerized our services.

We're doing all of this in the open, so feel free to ask any question.


I appreciate your response, Emilien, thank you. Alex' responses to
Jeremy on the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red Hat
centric. Which makes sense to me, considering Triple-O is a Red Hat product.


Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro specific implementation
details. (Kind of what I view podman as in the scope of this thread).



I don't know how much of the current reinvention of container runtimes
and various tooling around containers is the result of politics. I don't
know how much is the result of certain companies wanting to "own" the
container stack from top to bottom. Or how much is a result of technical
disagreements that simply cannot (or will not) be resolved among
contributors in the container development ecosystem.

Or is it some combination of the above? I don't know.

What I *do* know is that the current "NIH du jour" mentality currently
playing itself out in the container ecosystem -- reminding me very much
of the Javascript ecosystem -- makes it difficult for any potential
*consumers* of container libraries, runtimes or applications to be
confident that any choice they make towards one of the other will be the
*right* choice or even a *possible* choice next year -- or next week.
Perhaps this is why things like openstack/paunch exist -- to give you
options if something doesn't pan out.


This is exactly why paunch exists.

Re, the podman thing I look at it as an implementation detail. The
good news is that given it is almost a parity replacement for what we
already use we'll still contribute to the OpenStack community in
similar ways. Ultimately whether you run 'docker run' or 'podman run'
you end up with the same thing as far as the existing TripleO
architecture goes.

Dan



You have a tough job. I wish you all the luck in the world in making
these decisions and hope politics and internal corporate management
decisions play as little a role in them as possible.

Best,
-jay


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Fox, Kevin M
Question. Rather than writing a middle layer to abstract both container 
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there 
is support already for Docker as well.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:
> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>>
>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:
>>>
>>>  More seriously here: there is an ongoing effort to converge the
>>>  tools around containerization within Red Hat, and we, TripleO are
>>>  interested to continue the containerization of our services (which
>>>  was initially done with Docker & Docker-Distribution).
>>>  We're looking at how these containers could be managed by k8s one
>>>  day but way before that we plan to swap out Docker and join CRI-O
>>>  efforts, which seem to be using Podman + Buildah (among other things).
>>>
>>> I guess my wording wasn't the best but Alex explained way better here:
>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>>
>>> If I may have a chance to rephrase, I guess our current intention is to
>>> continue our containerization and investigate how we can improve our
>>> tooling to better orchestrate the containers.
>>> We have a nice interface (openstack/paunch) that allows us to run
>>> multiple container backends, and we're currently looking outside of
>>> Docker to see how we could solve our current challenges with the new tools.
>>> We're looking at CRI-O because it happens to be a project with a great
>>> community, focusing on some problems that we, TripleO have been facing
>>> since we containerized our services.
>>>
>>> We're doing all of this in the open, so feel free to ask any question.
>>
>> I appreciate your response, Emilien, thank you. Alex' responses to
>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>
>> For now, it *seems* to me that all of the chosen tooling is very Red Hat
>> centric. Which makes sense to me, considering Triple-O is a Red Hat product.
>
> Perhaps a slight clarification here is needed. "Director" is a Red Hat
> product. TripleO is an upstream project that is now largely driven by
> Red Hat and is today marked as single vendor. We welcome others to
> contribute to the project upstream just like anybody else.
>
> And for those who don't know the history the TripleO project was once
> multi-vendor as well. So a lot of the abstractions we have in place
> could easily be extended to support distro specific implementation
> details. (Kind of what I view podman as in the scope of this thread).
>
>>
>> I don't know how much of the current reinvention of container runtimes
>> and various tooling around containers is the result of politics. I don't
>> know how much is the result of certain companies wanting to "own" the
>> container stack from top to bottom. Or how much is a result of technical
>> disagreements that simply cannot (or will not) be resolved among
>> contributors in the container development ecosystem.
>>
>> Or is it some combination of the above? I don't know.
>>
>> What I *do* know is that the current "NIH du jour" mentality currently
>> playing itself out in the container ecosystem -- reminding me very much
>> of the Javascript ecosystem -- makes it difficult for any potential
>> *consumers* of container libraries, runtimes or applications to be
>> confident that any choice they make towards one of the other will be the
>> *right* choice or even a *possible* choice next year -- or next week.
>> Perhaps this is why things like openstack/paunch exist -- to give you
>> options if something doesn't pan out.
>
> This is exactly why paunch exists.
>
> Re, the podman thing I look at it as an implementation detail. The
> good news is that given it is almost a parity replacement for what we
> already use we'll still contribute to the OpenStack community in
> similar ways. Ultimately whether you run 'docker run' or 'podman run'
> you end up with the same thing as far as the existing TripleO
> architecture goes.
>
> Dan
>
>>
>> You have a tough job. I wish you all the luck in the world in making
>> these decisions and hope politics and internal corporate management
>> decisions play as little a role in them as possible.
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

[openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Sean McGinnis
This is the final countdown email for the Rocky development cycle. Thanks to
everyone involved in the Rocky release! 

Development Focus
-

Teams attending the PTG should be preparing for those discussions and capturing
information in the etherpads:

https://wiki.openstack.org/wiki/PTG/Stein/Etherpads

General Information
---

The release team plans on doing the final Rocky release on 29 August. We will
re-tag the last commit used for the final RC using the final version number.

If you have not already done so, now would be a good time to take a look at the
Stein schedule and start planning team activities:

https://releases.openstack.org/stein/schedule.html

Actions
-

PTLs and release liaisons should watch for the final release patch from the
release team. While not required, we would appreciate having an ack from each
team before we approve it on the 29th.

We are still missing releases for the following tempest plugins. Some are
pending getting pypi and release jobs set up, but please try to prioritize
getting these done as soon as possible.

barbican-tempest-plugin
blazar-tempest-plugin
cloudkitty-tempest-plugin
congress-tempest-plugin
ec2api-tempest-plugin
magnum-tempest-plugin
mistral-tempest-plugin
monasca-kibana-plugin
monasca-tempest-plugin
murano-tempest-plugin
networking-generic-switch-tempest-plugin
oswin-tempest-plugin
senlin-tempest-plugin
telemetry-tempest-plugin
tripleo-common-tempest-plugin
trove-tempest-plugin
watcher-tempest-plugin
zaqar-tempest-plugin

Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Cycle trailing RC deadline: August 30
Stein PTG: September 10-14
Cycle trailing Rocky release: November 28

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan

2018-08-23 Thread Sean McGinnis
> > > 
> > > I've approved it for a UC only bump
> > > 
> 
> We are still waiting on https://review.openstack.org/594541 to merge,
> but I already voted and noted that it was FFE approved.
> 
> -- 
> Matthew Thode (prometheanfire)

And I have now approved the u-c update. We should be all set now.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Jay Pipes

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:

On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:


On 08/15/2018 04:01 PM, Emilien Macchi wrote:

On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

 More seriously here: there is an ongoing effort to converge the
 tools around containerization within Red Hat, and we, TripleO are
 interested to continue the containerization of our services (which
 was initially done with Docker & Docker-Distribution).
 We're looking at how these containers could be managed by k8s one
 day but way before that we plan to swap out Docker and join CRI-O
 efforts, which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained way better here:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our
tooling to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run
multiple container backends, and we're currently looking outside of
Docker to see how we could solve our current challenges with the new tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO have been facing
since we containerized our services.

We're doing all of this in the open, so feel free to ask any question.


I appreciate your response, Emilien, thank you. Alex' responses to
Jeremy on the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red Hat
centric. Which makes sense to me, considering Triple-O is a Red Hat product.


Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro specific implementation
details. (Kind of what I view podman as in the scope of this thread).



I don't know how much of the current reinvention of container runtimes
and various tooling around containers is the result of politics. I don't
know how much is the result of certain companies wanting to "own" the
container stack from top to bottom. Or how much is a result of technical
disagreements that simply cannot (or will not) be resolved among
contributors in the container development ecosystem.

Or is it some combination of the above? I don't know.

What I *do* know is that the current "NIH du jour" mentality currently
playing itself out in the container ecosystem -- reminding me very much
of the Javascript ecosystem -- makes it difficult for any potential
*consumers* of container libraries, runtimes or applications to be
confident that any choice they make towards one of the other will be the
*right* choice or even a *possible* choice next year -- or next week.
Perhaps this is why things like openstack/paunch exist -- to give you
options if something doesn't pan out.


This is exactly why paunch exists.

Re, the podman thing I look at it as an implementation detail. The
good news is that given it is almost a parity replacement for what we
already use we'll still contribute to the OpenStack community in
similar ways. Ultimately whether you run 'docker run' or 'podman run'
you end up with the same thing as far as the existing TripleO
architecture goes.

Dan



You have a tough job. I wish you all the luck in the world in making
these decisions and hope politics and internal corporate management
decisions play as little a role in them as possible.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-23 Thread Sean McGinnis
On Wed, Aug 22, 2018 at 08:23:41PM -0500, Matt Riedemann wrote:
> Hi everyone,
> 
> I have started an etherpad for cells topics at the Stein PTG [1]. The main
> issue in there right now is dealing with cross-cell cold migration in nova.
> 
> At a high level, I am going off these requirements:
> 
> * Cells can shard across flavors (and hardware type) so operators would like
> to move users off the old flavors/hardware (old cell) to new flavors in a
> new cell.
> 
> * There is network isolation between compute hosts in different cells, so no
> ssh'ing the disk around like we do today. But the image service is global to
> all cells.
> 
> Based on this, for the initial support for cross-cell cold migration, I am
> proposing that we leverage something like shelve offload/unshelve
> masquerading as resize. We shelve offload from the source cell and unshelve
> in the target cell. This should work for both volume-backed and
> non-volume-backed servers (we use snapshots for shelved offloaded
> non-volume-backed servers).
> 
> There are, of course, some complications. The main ones that I need help
> with right now are what happens with volumes and ports attached to the
> server. Today we detach from the source and attach at the target, but that's
> assuming the storage backend and network are available to both hosts
> involved in the move of the server. Will that be the case across cells? I am
> assuming that depends on the network topology (are routed networks being
> used?) and storage backend (routed storage?). If the network and/or storage
> backend are not available across cells, how do we migrate volumes and ports?
> Cinder has a volume migrate API for admins but I do not know how nova would
> know the proper affinity per-cell to migrate the volume to the proper host
> (cinder does not have a routed storage concept like routed provider networks
> in neutron, correct?). And as far as I know, there is no such thing as port
> migration in Neutron.
> 

Just speaking to iSCSI storage, I know some deployments do not route their
storage traffic. If this is the case, then both cells would need to have access
to the same subnet to still access the volume.

I'm also referring to the case where the migration is from one compute host to
another compute host, and not from one storage backend to another storage
backend.

I haven't gone through the workflow, but I thought shelve/unshelve could detach
the volume on shelving and reattach it on unshelve. In that workflow, assuming
the networking is in place to provide the connectivity, the nova compute host
would be connecting to the volume just like any other attach and should work
fine. The unknown or tricky part is making sure that there is the network
connectivity or routing in place for the compute host to be able to log in to
the storage target.

If it's the other scenario mentioned where the volume needs to be migrated from
one storage backend to another storage backend, then that may require a little
more work. The volume would need to be retype'd or migrated (storage migration)
from the original backend to the new backend.
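
For reference, a minimal sketch of driving that retype-with-migration from
the API side with python-cinderclient (assuming an existing keystoneauth1
session; the 'on-demand' policy is what allows Cinder to move the data to a
new backend):

    # Sketch: retype a volume and allow Cinder to migrate its data if the new
    # type lands on a different backend. Assumes `session` is a keystoneauth1
    # session already set up for the target cloud.
    from cinderclient import client as cinder_client


    def retype_with_migration(session, volume_id, new_volume_type):
        cinder = cinder_client.Client('3', session=session)
        volume = cinder.volumes.get(volume_id)
        # policy='on-demand' permits a (host-copied) data migration when the
        # backend changes; 'never' would reject the retype instead.
        cinder.volumes.retype(volume, new_volume_type, 'on-demand')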

Again, in this scenario at some point there needs to be network connectivity
between cells to copy over that data.

There is no storage-offloaded migration in this situation, so Cinder can't
currently optimize how that data gets from the original volume backend to the
new one. It would require a host copy of all the data on the volume (an often
slow and expensive operation) and it would require that the host doing the data
copy has access to both the original backend and the new backend.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Eric Fried
Do you mean an actual fixture, that would be used like:

 class MyTestCase(testtools.TestCase):
 def setUp(self):
 self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids

 def test_foo(self):
 do_a_thing_with(self.uuids.foo)

?

That's... okay I guess, but the refactoring necessary to cut over to it
will now entail adding 'self.' to every reference. Is there any way
around that?

efried
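
For context, a self-contained sketch of what such a fixture could look like
(assuming the fixtures library; the names are illustrative, not taken from
any of the patches under review):

    import uuid

    import fixtures


    class UUIDSentinelFixture(fixtures.Fixture):
        """Per-test uuid sentinel: each attribute returns a cached random UUID."""

        def _setUp(self):
            registry = {}

            class _Sentinels(object):
                def __getattr__(self, name):
                    if name.startswith('_'):
                        raise AttributeError(name)
                    return registry.setdefault(name, str(uuid.uuid4()))

            self.uuids = _Sentinels()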

On 08/23/2018 07:40 AM, Jay Pipes wrote:
> On 08/23/2018 08:06 AM, Doug Hellmann wrote:
>> Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38
>> -0400:
>>> Where exactly Eric? I can't seem to find the import:
>>>
>>> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest=nope==oslo.utils
>>>
>>>
>>> -- dims
>>
>> oslo.utils depends on oslotest via test-requirements.txt and oslotest is
>> used within the test modules in oslo.utils.
>>
>> As I've said on both reviews, I think we do not want a global
>> singleton instance of this sentinel class. We do want a formal test
>> fixture.  Either library can export a test fixture and oslo.utils
>> already has oslo_utils.fixture.TimeFixture so there's precedent to
>> adding it there, so I have a slight preference for just doing that.
>>
>> That said, oslo_utils.uuidutils.generate_uuid() is simply returning
>> str(uuid.uuid4()). We have it wrapped up as a function so we can
>> mock it out in other tests, but we hardly need to rely on that if
>> we're making a test fixture for oslotest.
>>
>> My vote is to add a new fixture class to oslo_utils.fixture.
> 
> OK, thanks for the helpful explanation, Doug. Works for me.
> 
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Bumping eventlet to 0.24.1

2018-08-23 Thread Matthew Thode
This is your warning: if you have concerns, please comment on
https://review.openstack.org/589382 .  Cross tests pass, so that's a
good sign... atm this is only for Stein.

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Dan Prince
On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>
> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
> > On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:
> >
> > More seriously here: there is an ongoing effort to converge the
> > tools around containerization within Red Hat, and we, TripleO are
> > interested to continue the containerization of our services (which
> > was initially done with Docker & Docker-Distribution).
> > We're looking at how these containers could be managed by k8s one
> > day but way before that we plan to swap out Docker and join CRI-O
> > efforts, which seem to be using Podman + Buildah (among other things).
> >
> > I guess my wording wasn't the best but Alex explained way better here:
> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
> >
> > If I may have a chance to rephrase, I guess our current intention is to
> > continue our containerization and investigate how we can improve our
> > tooling to better orchestrate the containers.
> > We have a nice interface (openstack/paunch) that allows us to run
> > multiple container backends, and we're currently looking outside of
> > Docker to see how we could solve our current challenges with the new tools.
> > We're looking at CRI-O because it happens to be a project with a great
> > community, focusing on some problems that we, TripleO have been facing
> > since we containerized our services.
> >
> > We're doing all of this in the open, so feel free to ask any question.
>
> I appreciate your response, Emilien, thank you. Alex' responses to
> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>
> For now, it *seems* to me that all of the chosen tooling is very Red Hat
> centric. Which makes sense to me, considering Triple-O is a Red Hat product.

Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro specific implementation
details. (Kind of what I view podman as in the scope of this thread).

>
> I don't know how much of the current reinvention of container runtimes
> and various tooling around containers is the result of politics. I don't
> know how much is the result of certain companies wanting to "own" the
> container stack from top to bottom. Or how much is a result of technical
> disagreements that simply cannot (or will not) be resolved among
> contributors in the container development ecosystem.
>
> Or is it some combination of the above? I don't know.
>
> What I *do* know is that the current "NIH du jour" mentality currently
> playing itself out in the container ecosystem -- reminding me very much
> of the Javascript ecosystem -- makes it difficult for any potential
> *consumers* of container libraries, runtimes or applications to be
> confident that any choice they make towards one of the other will be the
> *right* choice or even a *possible* choice next year -- or next week.
> Perhaps this is why things like openstack/paunch exist -- to give you
> options if something doesn't pan out.

This is exactly why paunch exists.

Re, the podman thing I look at it as an implementation detail. The
good news is that given it is almost a parity replacement for what we
already use we'll still contribute to the OpenStack community in
similar ways. Ultimately whether you run 'docker run' or 'podman run'
you end up with the same thing as far as the existing TripleO
architecture goes.

Dan

>
> You have a tough job. I wish you all the luck in the world in making
> these decisions and hope politics and internal corporate management
> decisions play as little a role in them as possible.
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin

Now, with Fedora Atomic 26, etcd is available, but it fails to start.

[root@swarm-u2rnie4d4ik6-master-0 ~]# /usr/bin/etcd 
--name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" 
--listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --debug
2018-08-23 14:34:15.596516 E | etcdmain: error verifying flags, 
--advertise-client-urls is required when --listen-client-urls is set 
explicitly. See 'etcd --help'.
2018-08-23 14:34:15.596611 E | etcdmain: When listening on specific 
address(es), this etcd process must advertise accessible url(s) to each 
connected client.


There is an issue where --advertise-client-urls and the TLS --cert-file 
and --key-file options are not passed in the systemd unit file; changing this to:
/usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" 
--listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" 
--advertise-client-urls="${ETCD_ADVERTISE_CLIENT_URLS}" 
--cert-file="${ETCD_PEER_CERT_FILE}" --key-file="${ETCD_PEER_KEY_FILE}"


makes it work. Any thoughts?

Best regards
Tobias

On 08/23/2018 03:54 PM, Tobias Urdin wrote:
Found the issue; I assume I have to use Fedora Atomic 26 until Rocky, 
where I can start using Fedora Atomic 27.

Will Fedora Atomic 28 be supported for Rocky?

https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld 
in system containers, In Fedora Atomic 27 etcd and flanneld are 
removed from the base image.)
https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a 
system container)


I'm still wondering about the "The Parameter (nodes_affinity_policy) was 
not provided" error when using Mesos + Ubuntu.


Best regards
Tobias

On 08/23/2018 02:56 PM, Tobias Urdin wrote:

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left 
off with Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried 
with both Docker and flannel as the networking layer, but 
neither of these works. After investigating, the issue seems to be that 
etcd.service is not installed (the unit file doesn't exist), so the master 
doesn't work; the minion swarm node is provisioned but cannot join 
the cluster because there is no etcd.


Has anybody seen this issue before? I've been digging through all 
cloud-init logs and cannot see anything that would cause this.


I also have another, separate issue: when provisioning using the 
magnum-ui in Horizon and selecting Ubuntu with Mesos, I get the error 
"The Parameter (nodes_affinity_policy) was not provided". The 
nodes_affinity_policy does have a default value in magnum.conf, so I'm 
starting to think this might be an issue with the magnum-ui dashboard.

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd 
share them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on 
the individual instances to see if any of the "parts" have failed to 
run. Manually re-run the parts on the command-line to get a better 
idea of why they failed. Review the actual script, figure out the 
variable interpolation and how it relates to the Cluster Template 
being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until 
you reach a point where the stock drivers won't meet your needs. But 
I mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum still maintained to support Fedora Atomic?
What is in charge of populating the certificates inside the
instances, because this seems to be the root of all issues, I'm
not using Barbican 

Re: [openstack-dev] [tripleo] ansible roles in tripleo

2018-08-23 Thread Dan Prince
On Tue, Aug 14, 2018 at 1:53 PM Jill Rouleau  wrote:
>
> Hey folks,
>
> Like Alex mentioned[0] earlier, we've created a bunch of ansible roles
> for tripleo specific bits.  The idea is to start putting some basic
> cookiecutter type things in them to get things started, then move some
> low-hanging fruit out of tripleo-heat-templates and into the appropriate
> roles.  For example, docker/services/keystone.yaml could have
> upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role-
> tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the
> t-h-t updated to
> include_role: ansible-role-tripleo-keystone
>   tasks_from: upgrade.yml
> without having to modify any puppet or heat directives.
>
> This would let us define some patterns for implementing these tripleo
> roles during Stein while looking at how we can make use of ansible for
> things like core config.

I like the idea of consolidating the Ansible stuff and getting out of
the practice of inlining it into t-h-t. Especially the "core config"
which I take to mean moving away from Puppet and towards Ansible for
service level configuration. But presumably we are going to rely on
the upstream Openstack ansible-os_* projects to do the heavy config
lifting for us here though right? We won't have to do much on our side
to leverage that I hope other than translating old hiera to equivalent
settings for the config files to ensure some backwards compatibility.
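
For what it's worth, the include_role pattern quoted above would look
roughly like this as a well-formed task (just a sketch; the role and file
names are the proposed ones, nothing is merged yet):

    upgrade_tasks:
      - name: Run keystone upgrade steps from the tripleo role
        include_role:
          name: ansible-role-tripleo-keystone
          tasks_from: upgrade.yml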

While I agree with the goals I do wonder if the sheer number of git
repos we've created here is needed. Like with puppet-tripleo we were
able to combine a set of "small lightweight" manifests in a way to
wrap them around the upstream Puppet modules. Why not do the same with
ansible-role-tripleo? My concern is that we've created so many cookie
cutter repos with boilerplate code in them that ends up being much
heavier than the files which will actually reside in many of these
repos. This in addition to the extra review work and RPM packages we
need to constantly maintain.

Dan

>
> t-h-t and config-download will still drive the vast majority of playbook
> creation for now, but for new playbooks (such as for operations tasks)
> tripleo-ansible[1] would be our project directory.
>
> So in addition to the larger conversation about how deployers can start
> to standardize how we're all using ansible, I'd like to also have a
> tripleo-specific conversation at PTG on how we can break out some of our
> ansible that's currently embedded in t-h-t into more modular and
> flexible roles.
>
> Cheers,
> Jill
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/13311
> 9.html
> [1] 
> https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete

2018-08-23 Thread Doug Szumski



On 23/08/18 13:34, Doug Hellmann wrote:

Excerpts from Doug Szumski's message of 2018-08-23 09:53:35 +0100:


Thanks Doug, we had a discussion and we agreed that the best way to
proceed is for you to submit your patches and we will carefully review them.

I proposed those patches this morning. With the aid of your exemplary
repository naming conventions, you can find them all at:

https://review.openstack.org/#/q/topic:python3-first+project:%255E.*monasca.*+is:open

Thanks, we will start going through them.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-23 Thread Jeremy Freudberg
Hi Andy,

Can you comment more on what needs to be updated in Sahara? Are they
simply issues in the UI (sahara-dashboard) or is there a problem
consuming community images on the server side?

I'm happy to pitch in, just would like to do it efficiently.

Thanks,
Jeremy

On Wed, Aug 22, 2018 at 5:31 PM, Andy Botting  wrote:
> Hi all,
>
> We've recently moved to using Glance's community visibility on the Nectar
> Research Cloud. We had lots of public images (12255), and we found it was
> becoming slow to list them all and the community image visibility seems to
> fit our use-case nicely.
>
> We moved all of our user's images over to become community images, and left
> our 'official' images as the only public ones.
>
> We found a few issues, which I wanted to document, if anyone else is looking
> at doing the same thing.
>
> -> Glance API has no way of returning all images available to me in a single
> API request (https://bugs.launchpad.net/glance/+bug/1779251)
> The default list of images is perfect (all available to me, except
> community), but there's a heap of cases where you need to fetch all images
> including community. If we did have this, my next points would be a whole
> lot easier to solve.
>
> -> Horizon's support for Community images is very lacking
> (https://bugs.launchpad.net/horizon/+bug/1779250)
> On the surface, it looks like Community images are supported in Horizon, but
> it's only as far as listing images in the Images tab. Trying to boot a
> Community image from the Launch Instance wizard is actually impossible, as
> community images don't appear in that list at all. The images tab in Horizon
> dynamically builds the list of images on the Images tab through new Glance
> API calls when you use any filters (good).
> In contrast, the source tab on the Launch Images wizard loads all images at
> the start (slow with lots of images), then relies on javascript client-side
> filtering of the list. I've got a dirty patch to fix this for us by
> basically making two Glance API requests (one without specifying visibility,
> and another with visibility=community), then merging the data. This would be
> better handled the same way as the Images tab, with new Glance API requests
> when filtering.
>
> -> Users can't set their own images as Community from the dashboard
> Should be relatively easy to add this. I'm hoping to look into fixing this
> soon.
>
> -> Murano / Sahara image discovery
> These projects rely on images to be chosen when creating new environments,
> and it looks like they use a glance list for their discovery. They both
> suffer from the same issue and require their images to be non-community for
> them to find their images.
>
> -> Openstack Client didn't support listing community images at all
> (https://storyboard.openstack.org/#!/story/2001925)
> It did support setting images to community, but support for actually listing
> them was missing.  Support has  now been added, but not sure if it's made it
> to a release yet.
>
> Apart from these issues, our migration was pretty successful with minimal
> user complaints.
>
> cheers,
> Andy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][python3] please check with me before submitting any zuul migration patches

2018-08-23 Thread Nguyễn Trí Hải
Hi,

There is a conflict appearing on karbor projects:
https://review.openstack.org/#/q/project:%255E.*karbor.*+topic:python3-first+status:open

Please check the storyboard to see who is working on the target project
if you want to help.
https://storyboard.openstack.org/#!/story/2002586



On Wed, Aug 22, 2018 at 2:42 PM Nguyễn Trí Hải 
wrote:

> Please add yourself to the storyboard so everyone knows who is working on the
> project.
>
> https://storyboard.openstack.org/#!/story/2002586
>
> On Wed, Aug 22, 2018 at 3:31 AM Doug Hellmann 
> wrote:
>
>> We have a few folks eager to join in and contribute to the python3 goal
>> by helping with the patches to migrate zuul settings. That's great!
>> However, many of the patches being proposed are incorrect, which means
>> there is either something wrong with the tool or the way it is used.
>>
>> The intent was to have a very small group, 3-4 people, who knew how
>> the tools worked to propose all of those patches.  Having incorrect
>> patches can break the CI for a project, so we need to be especially
>> careful with them.  We do not want every team writing the patches
>> for themselves, and we do not want lots and lots of people who we
>> have to train to use the tools.
>>
>> If you are not one of the people already listed as a goal champion
>> on [1], please PLEASE stop writing patches and get in touch with
>> me personally and directly (via IRC or email) BEFORE doing any more
>> work on the goal.
>>
>> Thanks,
>> Doug
>>
>> [1] https://governance.openstack.org/tc/goals/stein/python3-first.html
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
>
> Nguyen Tri Hai / Ph.D. Student
>
> ANDA Lab., Soongsil Univ., Seoul, South Korea
>
> 
> 
>


-- 

Nguyen Tri Hai / Ph.D. Student

ANDA Lab., Soongsil Univ., Seoul, South Korea



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron 13.0.0.0rc2 (rocky)

2018-08-23 Thread no-reply

Hello everyone,

A new release candidate for neutron for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/neutron/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/neutron/log/?h=stable/rocky

Release notes for neutron can be found at:

https://docs.openstack.org/releasenotes/neutron/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin
Found the issue. I assume I have to use Fedora Atomic 26 until Rocky,
where I can start using Fedora Atomic 27.

Will Fedora Atomic 28 be supported for Rocky?

https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld in 
system containers, In Fedora Atomic 27 etcd and flanneld are removed 
from the base image.)
https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a 
system container)


Still wondering about the "The Parameter (nodes_affinity_policy) was not
provided" error when using Mesos + Ubuntu, though.


Best regards
Tobias

On 08/23/2018 02:56 PM, Tobias Urdin wrote:

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off
regarding Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried with both
Docker and flannel as the networking layer, but neither of them works. After
investigating, the issue seems to be that etcd.service is not installed (the
unit file doesn't exist), so the master doesn't work; the minion swarm node
is provisioned but cannot join the cluster because there is no etcd.


Anybody seen this issue before? I've been digging through all 
cloud-init logs and cannot see anything that would cause this.


I also have another, separate issue: when provisioning using the magnum-ui
in Horizon and selecting Ubuntu with Mesos, I get the error "The Parameter
(nodes_affinity_policy) was not provided". The nodes_affinity_policy option
does have a default value in magnum.conf, so I'm starting to think this
might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd 
share them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on 
the individual instances to see if any of the "parts" have failed to 
run. Manually re-run the parts on the command-line to get a better 
idea of why they failed. Review the actual script, figure out the 
variable interpolation and how it relates to the Cluster Template 
being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until 
you reach a point where the stock drivers won't meet your needs. But 
I mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the instances?
This seems to be the root of all issues. I'm not using Barbican but the
x509keypair driver; is that the reason?

Perhaps I missed some documentation that x509keypair does not
support what I'm trying to do?

I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate
issues
dockerd-current[1909]: Could not load X509 key pair (cert:
"/etc/docker/server.crt", key: "/etc/docker/server.key")

* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result
'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open
/etc/etcd/certs/server.crt: no such file or directory

* Master kube-apiserver does not start because of certificate
unable to load server certificate: open
/etc/kubernetes/certs/server.crt: no such file or directory

* Master 

Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-23 Thread Dan Smith
> I think Nova should never have to rely on Cinder's hosts/backends
> information to do migrations or any other operation.
>
> In this case even if Nova had that info, it wouldn't be the solution.
> Cinder would reject migrations if there's an incompatibility on the
> Volume Type (AZ, Referenced backend, capabilities...)

I think I'm missing a bunch of cinder knowledge required to fully grok
this situation and probably need to do some reading. Is there some
reason that a volume type can't exist in multiple backends or something?
I guess I think of volume type as flavor, and the same definition in two
places would be interchangeable -- is that not the case?

> I don't know anything about Nova cells, so I don't know the specifics of
> how we could do the mapping between them and Cinder backends, but
> considering the limited range of possibilities in Cinder I would say we
> only have Volume Types and AZs to work out a solution.

I think the only mapping we need is affinity or distance. The point of
needing to migrate the volume would purely be because moving cells
likely means you moved physically farther away from where you were,
potentially with different storage connections and networking. It
doesn't *have* to mean that, but I think in reality it would. So the
question I think Matt is looking to answer here is "how do we move an
instance from a DC in building A to building C and make sure the
volume gets moved to some storage local in the new building so we're
not just transiting back to the original home for no reason?"

Does that explanation help or are you saying that's fundamentally hard
to do/orchestrate?

Fundamentally, the cells thing doesn't even need to be part of the
discussion, as the same rules would apply if we're just doing a normal
migration but need to make sure that storage remains affined to compute.

> I don't know how the Nova Placement works, but it could hold an
> equivalency mapping of volume types to cells as in:
>
>  Cell#1Cell#2
>
> VolTypeA <--> VolTypeD
> VolTypeB <--> VolTypeE
> VolTypeC <--> VolTypeF
>
> Then it could do volume retypes (allowing migration) and that would
> properly move the volumes from one backend to another.

The only way I can think that we could do this in placement would be if
volume types were resource providers and we assigned them traits that
had special meaning to nova indicating equivalence. Several of the words
in that sentence are likely to freak out placement people, myself
included :)

So is the concern just that we need to know what volume types in one
backend map to those in another so that when we do the migration we know
what to ask for? Is "they are the same name" not enough? Going back to
the flavor analogy, you could kinda compare two flavor definitions and
have a good idea if they're equivalent or not...

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee status for 23 August

2018-08-23 Thread Doug Hellmann
This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

- The RefStack team was dissolved, and the repositories transferred
  to the interop working group.
- Added the qinling-dashboard repository to Qinling project:
  https://review.openstack.org/#/c/591559/
- The rst2bash repository has been retired:
  https://review.openstack.org/#/c/592293/
- Added the os-ken repository to the Neutron project:
  https://review.openstack.org/#/c/588358/

== PTG Planning ==

The TC is soon going to finalize the topics for presentations to
be given around lunch time at the PTG. If you have suggestions,
please add them to the etherpad.

- https://etherpad.openstack.org/p/PTG4-postlunch

There will be 2 TC meetings during the PTG week. See
http://lists.openstack.org/pipermail/openstack-tc/2018-August/001544.html
for details.

== Leaderless teams after PTL elections ==

We approved all of the volunteers as appointed PTLs and rejected
the proposals to drop Freezer and Searchlight from governance. Thank
you to all of the folks who have stepped up to serve as PTL for
Stein!

We also formalized the process for appointing PTLs to avoid the
confusion we had this time around.

- https://review.openstack.org/590790

== Ongoing Discussions ==

The draft technical vision has gathered a good bit of feedback.
This will be a major topic of discussion for us before and during
the PTG.

- https://review.openstack.org/#/c/592205/

We have spent a lot of time this week thinking about and discussing
the nova/placement split. As things stand, it seems the nova team
considers it too early to spin placement out of the team's purview,
but it is likely that it will be moved to its own repository during
Stein. This leaves some of us concerned about issues like contributors'
self-determination and trust between teams within the community,
so I expect more discussion to occur before a conclusion is reached.

- 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T15:27:57
- http://lists.openstack.org/pipermail/openstack-dev/2018-August/133445.html

== TC member actions/focus/discussions for the coming week(s) ==

The PTG is approaching quickly. Please complete any remaining team
health checks.

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time
slots for synchronous communication. We hope that by having several
such times scheduled, we will have more opportunities to engage
with members of the community from different timezones.

Office hour times in #openstack-tc:

- 09:00 UTC on Tuesdays
- 01:00 UTC on Wednesdays
- 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add
it to our office hour conversation starter etherpad at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most
of the time, so please do not feel that you need to wait for an
office hour time to pose a question or offer a suggestion. You can
use the string "tc-members" to alert the members to your question.

You will find channel logs with past conversations at
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/

If you expect your topic to require significant discussion or to
need input from members of the community other than the TC, please
start a mailing list discussion on openstack-dev at lists.openstack.org
and use the subject tag "[tc]" to bring it to the attention of TC
members.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV

2018-08-23 Thread Matt Riedemann

On 8/23/2018 7:10 AM, Matt Riedemann wrote:

On 8/23/2018 6:34 AM, Moshe Levi wrote:
A recent change in TripleO [1] configures the nova rx_queue_size and
tx_queue_size config options by default.

It seems that this configuration breaks booting a VM with SR-IOV. See [2].

The issue is because of this code [3], which configures the virtio queue
sizes when the driver in the interface XML is vhost or None.

In the SR-IOV case the driver is also None, and that is why we get the error.

A quick fix would be adding driver=vfio to [4].

I just wonder if there are other interfaces in the libvirt XML that could
have the same issue.


[1] - 
https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5 



[2] - http://paste.openstack.org/show/728666/

[3] - https://review.openstack.org/#/c/595592/

[4] - 
https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123 





Quick note, your [3] and [4] references are reversed.

Nice find on this, it's a regression in Rocky. As such, please report a 
bug so we can track it as an RC3 potential issue. Note that RC3 is *today*.


Moshe had to leave for the day. The IRC conversation about this bug was 
confusing at best, and it sounds like we don't know what the correct 
solution is to make the rx/tx queues work with vnic_type direct interfaces. 
Given that, I would like to know:


* What do we know actually does work with rx/tx queue sizes? Is it just 
macvtap ports? Is that what the feature was tested with?


* If we have a known good tested vnic_type with the rx/tx queue config 
options in Rocky, let's put out a known limitations release note and 
update the help text for those config options to mention that only known 
types of interfaces work with them.


Then people can work on fixing the configs to work with other types of 
vnics in Stein when there is actually time to test the changes other 
than unit tests.
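
For anyone following along, the options in question are the per-compute
[libvirt] options in nova.conf; roughly (the value below is only an example,
not a recommendation):

    [libvirt]
    rx_queue_size = 512
    tx_queue_size = 512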


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off
regarding Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried with both
Docker and flannel as the networking layer, but neither of them works. After
investigating, the issue seems to be that etcd.service is not installed (the
unit file doesn't exist), so the master doesn't work; the minion swarm node
is provisioned but cannot join the cluster because there is no etcd.


Anybody seen this issue before? I've been digging through all cloud-init 
logs and cannot see anything that would cause this.


I also have another, separate issue: when provisioning using the magnum-ui
in Horizon and selecting Ubuntu with Mesos, I get the error "The Parameter
(nodes_affinity_policy) was not provided". The nodes_affinity_policy option
does have a default value in magnum.conf, so I'm starting to think this
might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd share 
them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on the 
individual instances to see if any of the "parts" have failed to run. 
Manually re-run the parts on the command-line to get a better idea of 
why they failed. Review the actual script, figure out the variable 
interpolation and how it relates to the Cluster Template being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until you 
reach a point where the stock drivers won't meet your needs. But I 
mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the instances?
This seems to be the root of all issues. I'm not using Barbican but the
x509keypair driver; is that the reason?

Perhaps I missed some documentation that x509keypair does not
support what I'm trying to do?

I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate
issues
dockerd-current[1909]: Could not load X509 key pair (cert:
"/etc/docker/server.crt", key: "/etc/docker/server.key")

* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result
'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open
/etc/etcd/certs/server.crt: no such file or directory

* Master kube-apiserver does not start because of certificate
unable to load server certificate: open
/etc/kubernetes/certs/server.crt: no such file or directory

* Master heat script just sleeps forever waiting for port 8080 to
become available (kube-apiserver) so it can never kubectl apply
the final steps.

* Node does not even start and times out when Heat deploys it,
probably because master never finishes

Any help is appreciated perhaps I've missed something crucial,
I've not tested Kubernetes on CoreOS yet.

Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Jay Pipes

On 08/23/2018 08:06 AM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 -0400:

Where exactly Eric? I can't seem to find the import:

http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest=nope==oslo.utils

-- dims


oslo.utils depends on oslotest via test-requirements.txt and oslotest is
used within the test modules in oslo.utils.

As I've said on both reviews, I think we do not want a global
singleton instance of this sentinel class. We do want a formal test
fixture.  Either library can export a test fixture and oslo.utils
already has oslo_utils.fixture.TimeFixture so there's precedent to
adding it there, so I have a slight preference for just doing that.

That said, oslo_utils.uuidutils.generate_uuid() is simply returning
str(uuid.uuid4()). We have it wrapped up as a function so we can
mock it out in other tests, but we hardly need to rely on that if
we're making a test fixture for oslotest.

My vote is to add a new fixture class to oslo_utils.fixture.


OK, thanks for the helpful explanation, Doug. Works for me.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete

2018-08-23 Thread Doug Hellmann
Excerpts from Doug Szumski's message of 2018-08-23 09:53:35 +0100:

> Thanks Doug, we had a discussion and we agreed that the best way to 
> proceed is for you to submit your patches and we will carefully review them.

I proposed those patches this morning. With the aid of your exemplary
repository naming conventions, you can find them all at:

https://review.openstack.org/#/q/topic:python3-first+project:%255E.*monasca.*+is:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV

2018-08-23 Thread Matt Riedemann

On 8/23/2018 6:34 AM, Moshe Levi wrote:
A recent change in TripleO [1] configures the nova rx_queue_size and
tx_queue_size config options by default.

It seems that this configuration breaks booting a VM with SR-IOV. See [2].

The issue is because of this code [3], which configures the virtio queue
sizes when the driver in the interface XML is vhost or None.

In the SR-IOV case the driver is also None, and that is why we get the error.

A quick fix would be adding driver=vfio to [4].

I just wonder if there are other interfaces in the libvirt XML that could
have the same issue.


[1] - 
https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5 



[2] - http://paste.openstack.org/show/728666/

[3] - https://review.openstack.org/#/c/595592/

[4] - 
https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123 





Quick note, your [3] and [4] references are reversed.

Nice find on this, it's a regression in Rocky. As such, please report a 
bug so we can track it as an RC3 potential issue. Note that RC3 is *today*.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2018-08-23 06:46:38 -0400:
> Where exactly Eric? I can't seem to find the import:
> 
> http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest=nope==oslo.utils
> 
> -- dims

oslo.utils depends on oslotest via test-requirements.txt and oslotest is
used within the test modules in oslo.utils.

As I've said on both reviews, I think we do not want a global
singleton instance of this sentinel class. We do want a formal test
fixture.  Either library can export a test fixture and oslo.utils
already has oslo_utils.fixture.TimeFixture so there's precedent to
adding it there, so I have a slight preference for just doing that.

That said, oslo_utils.uuidutils.generate_uuid() is simply returning
str(uuid.uuid4()). We have it wrapped up as a function so we can
mock it out in other tests, but we hardly need to rely on that if
we're making a test fixture for oslotest.

My vote is to add a new fixture class to oslo_utils.fixture.
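
To make that concrete, a very rough sketch of what such a fixture could look
like (nothing like this exists yet, and the class name is made up):

    import fixtures

    from oslo_utils import uuidutils


    class UUIDSentinels(fixtures.Fixture):
        """Attribute access returns a random UUID that is stable per name."""

        def __init__(self):
            super(UUIDSentinels, self).__init__()
            self._sentinels = {}

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError(name)
            # Same name -> same UUID for the life of this fixture instance.
            return self._sentinels.setdefault(name, uuidutils.generate_uuid())

In a testtools-based test that would be used as
self.uuids = self.useFixture(UUIDSentinels()) and then self.uuids.foo, with
self.uuids.foo returning the same value for the rest of that test.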

Doug

> 
> On Wed, Aug 22, 2018 at 11:24 PM Jay Pipes  wrote:
> 
> >
> > On Wed, Aug 22, 2018, 10:13 AM Eric Fried  wrote:
> >
> >> For some time, nova has been using uuidsentinel [1] which conveniently
> >> allows you to get a random UUID in a single LOC with a readable name
> >> that's the same every time you reference it within that process (but not
> >> across processes). Example usage: [2].
> >>
> >> We would like other projects (notably the soon-to-be-split-out placement
> >> project) to be able to use uuidsentinel without duplicating the code. So
> >> we would like to stuff it in an oslo lib.
> >>
> >> The question is whether it should live in oslotest [3] or in
> >> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
> >> The issues we've thought of so far:
> >>
> >> - If this thing is used only for test, oslotest makes sense. We haven't
> >> thought of a non-test use, but somebody surely will.
> >> - Conversely, if we put it in oslo_utils, we're kinda saying we support
> >> it for non-test too. (This is why the oslo_utils version does some extra
> >> work for thread safety and collision avoidance.)
> >> - In oslotest, awkwardness is necessary to avoid circular importing:
> >> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
> >> oslo_utils.uuidutils, everything is right there.
> >>
> >
> > My preference is to put it in oslotest. Why does oslo_utils.uuidutils
> > import oslotest? That makes zero sense to me...
> >
> > -jay
> >
> > - It's a... UUID util. If I didn't know anything and I was looking for a
> >> UUID util like uuidsentinel, I would look in a module called uuidutils
> >> first.
> >>
> >> We hereby solicit your opinions, either by further discussion here or as
> >> votes on the respective patches.
> >>
> >> Thanks,
> >> efried
> >>
> >> [1]
> >>
> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
> >> [2]
> >>
> >> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
> >> [3] https://review.openstack.org/594068
> >> [4] https://review.openstack.org/594179
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][nova] nova rx_queue_size tx_queue_size config options breaks booting vm with SR-IOV

2018-08-23 Thread Moshe Levi
Hi all,

A recent change in TripleO [1] configures the nova rx_queue_size and
tx_queue_size config options by default.
It seems that this configuration breaks booting a VM with SR-IOV. See [2].
The issue is because of this code [3], which configures the virtio queue
sizes when the driver in the interface XML is vhost or None.
In the SR-IOV case the driver is also None, and that is why we get the error.
A quick fix would be adding driver=vfio to [4].
I just wonder if there are other interfaces in the libvirt XML that could
have the same issue.
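
To illustrate what I mean (hand-written XML, not taken from the failing
instance; attribute placement per my reading of the libvirt docs):

    <!-- virtio VIF: the queue-size attributes go on the driver element -->
    <interface type='bridge'>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='512' tx_queue_size='512'/>
    </interface>

    <!-- SR-IOV direct VIF: rendered as a hostdev interface, where no
         driver name is set today, so the same code path matches it too -->
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x1'/>
      </source>
    </interface>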


[1] - 
https://github.com/openstack/tripleo-heat-templates/commit/444fc042dca3f9a85e8f7076ce68114ac45478c7#diff-99a22d37b829681d157f41d35c38e4c5
[2] - http://paste.openstack.org/show/728666/
[3] -  https://review.openstack.org/#/c/595592/
[4] - 
https://github.com/openstack/nova/blob/34956bea4beb8e5ba474b42ba777eb88a5eadd76/nova/virt/libvirt/designer.py#L123
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ovs][TripleO] Enabling IPv6 address for tunnel endpoints

2018-08-23 Thread Janki Chhatbar
Hi

I understand that tunnel endpoints are currently supported only on IPv4
addresses. I have a requirement for them to be on IPv6 endpoints as well,
with OpenDaylight, so the deployment will have IPv6 addresses for the
tenant network.

I know OVS now supports IPv6 tunnel endpoints. I want to know whether there
are any gaps on the Neutron side and whether it is safe to enable tenant
tunnel endpoints on IPv6 addresses in TripleO.
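
For context, the OVS-level configuration I am after is roughly the following
(bridge, port and addresses are made up):

    ovs-vsctl add-port br-tun vxlan-v6 -- set interface vxlan-v6 \
        type=vxlan options:local_ip=fd00:100::10 options:remote_ip=fd00:100::11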

-- 
Thanking you

Janki Chhatbar
OpenStack | Docker | SDN
simplyexplainedblog.wordpress.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Davanum Srinivas
Where exactly Eric? I can't seem to find the import:

http://codesearch.openstack.org/?q=(from%7Cimport).*oslotest=nope==oslo.utils

-- dims

On Wed, Aug 22, 2018 at 11:24 PM Jay Pipes  wrote:

>
> On Wed, Aug 22, 2018, 10:13 AM Eric Fried  wrote:
>
>> For some time, nova has been using uuidsentinel [1] which conveniently
>> allows you to get a random UUID in a single LOC with a readable name
>> that's the same every time you reference it within that process (but not
>> across processes). Example usage: [2].
>>
>> We would like other projects (notably the soon-to-be-split-out placement
>> project) to be able to use uuidsentinel without duplicating the code. So
>> we would like to stuff it in an oslo lib.
>>
>> The question is whether it should live in oslotest [3] or in
>> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
>> The issues we've thought of so far:
>>
>> - If this thing is used only for test, oslotest makes sense. We haven't
>> thought of a non-test use, but somebody surely will.
>> - Conversely, if we put it in oslo_utils, we're kinda saying we support
>> it for non-test too. (This is why the oslo_utils version does some extra
>> work for thread safety and collision avoidance.)
>> - In oslotest, awkwardness is necessary to avoid circular importing:
>> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
>> oslo_utils.uuidutils, everything is right there.
>>
>
> My preference is to put it in oslotest. Why does oslo_utils.uuidutils
> import oslotest? That makes zero sense to me...
>
> -jay
>
> - It's a... UUID util. If I didn't know anything and I was looking for a
>> UUID util like uuidsentinel, I would look in a module called uuidutils
>> first.
>>
>> We hereby solicit your opinions, either by further discussion here or as
>> votes on the respective patches.
>>
>> Thanks,
>> efried
>>
>> [1]
>>
>> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
>> [2]
>>
>> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
>> [3] https://review.openstack.org/594068
>> [4] https://review.openstack.org/594179
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-23 Thread Gorka Eguileor
On 22/08, Matt Riedemann wrote:
> Hi everyone,
>
> I have started an etherpad for cells topics at the Stein PTG [1]. The main
> issue in there right now is dealing with cross-cell cold migration in nova.
>
> At a high level, I am going off these requirements:
>
> * Cells can shard across flavors (and hardware type) so operators would like
> to move users off the old flavors/hardware (old cell) to new flavors in a
> new cell.
>
> * There is network isolation between compute hosts in different cells, so no
> ssh'ing the disk around like we do today. But the image service is global to
> all cells.
>
> Based on this, for the initial support for cross-cell cold migration, I am
> proposing that we leverage something like shelve offload/unshelve
> masquerading as resize. We shelve offload from the source cell and unshelve
> in the target cell. This should work for both volume-backed and
> non-volume-backed servers (we use snapshots for shelved offloaded
> non-volume-backed servers).
>
> There are, of course, some complications. The main ones that I need help
> with right now are what happens with volumes and ports attached to the
> server. Today we detach from the source and attach at the target, but that's
> assuming the storage backend and network are available to both hosts
> involved in the move of the server. Will that be the case across cells? I am
> assuming that depends on the network topology (are routed networks being
> used?) and storage backend (routed storage?). If the network and/or storage
> backend are not available across cells, how do we migrate volumes and ports?
> Cinder has a volume migrate API for admins but I do not know how nova would
> know the proper affinity per-cell to migrate the volume to the proper host
> (cinder does not have a routed storage concept like routed provider networks
> in neutron, correct?). And as far as I know, there is no such thing as port
> migration in Neutron.

Hi Matt,

I think Nova should never have to rely on Cinder's hosts/backends
information to do migrations or any other operation.

In this case even if Nova had that info, it wouldn't be the solution.
Cinder would reject migrations if there's an incompatibility on the
Volume Type (AZ, Referenced backend, capabilities...)

I don't know anything about Nova cells, so I don't know the specifics of
how we could do the mapping between them and Cinder backends, but
considering the limited range of possibilities in Cinder I would say we
only have Volume Types and AZs to work out a solution.

>
> Could Placement help with the volume/port migration stuff? Neutron routed
> provider networks rely on placement aggregates to schedule the VM to a
> compute host in the same network segment as the port used to create the VM,
> however, if that segment does not span cells we are kind of stuck, correct?
>

I don't know how the Nova Placement works, but it could hold an
equivalency mapping of volume types to cells as in:

 Cell#1        Cell#2

VolTypeA <--> VolTypeD
VolTypeB <--> VolTypeE
VolTypeC <--> VolTypeF

Then it could do volume retypes (allowing migration) and that would
properly move the volumes from one backend to another.
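
For illustration, the per-volume operation behind that would be the existing
retype call, e.g. for a volume that has to land on Cell#2 (type names as in
the mapping above):

    cinder retype --migration-policy on-demand <volume-id> VolTypeD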

Cheers,
Gorka.


> To summarize the issues as I see them (today):
>
> * How to deal with the targeted cell during scheduling? This is so we can
> even get out of the source cell in nova.
>
> * How does the API deal with the same instance being in two DBs at the same
> time during the move?
>
> * How to handle revert resize?
>
> * How are volumes and ports handled?
>
> I can get feedback from my company's operators based on what their
> deployment will look like for this, but that does not mean it will work for
> others, so I need as much feedback from operators, especially those running
> with multiple cells today, as possible. Thanks in advance.
>
> [1] https://etherpad.openstack.org/p/nova-ptg-stein-cells
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-23 Thread Gorka Eguileor
On 22/08, Matthew Booth wrote:
> On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor  wrote:
> >
> > On 20/08, Matthew Booth wrote:
> > > For those who aren't familiar with it, nova's volume-update (also
> > > called swap volume by nova devs) is the nova part of the
> > > implementation of cinder's live migration (also called retype).
> > > Volume-update is essentially an internal cinder<->nova api, but as
> > > that's not a thing it's also unfortunately exposed to users. Some
> > > users have found it and are using it, but because it's essentially an
> > > internal cinder<->nova api it breaks pretty easily if you don't treat
> > > it like a special snowflake. It looks like we've finally found a way
> > > it's broken for non-cinder callers that we can't fix, even with a
> > > dirty hack.
> > >
> > > volume-update essentially does a live copy of the data on the old
> > > volume to the new volume, then seamlessly swaps the attachment on the
> > > server from the old volume to the new one. The guest OS on the server
> > > will not notice anything at all as the hypervisor swaps the storage
> > > backing an attached volume underneath it.
> > >
> > > When called by cinder, as intended, cinder does some post-operation
> > > cleanup such that the old volume is deleted and the new volume inherits
> > > the same volume_id; that is, the new volume effectively becomes the old
> > > one. When called any
> > > other way, however, this cleanup doesn't happen, which breaks a bunch
> > > of assumptions. One of these is that a disk's serial number is the
> > > same as the attached volume_id. Disk serial number, in KVM at least,
> > > is immutable, so can't be updated during volume-update. This is fine
> > > if we were called via cinder, because the cinder cleanup means the
> > > volume_id stays the same. If called any other way, however, they no
> > > longer match, at least until a hard reboot when it will be reset to
> > > the new volume_id. It turns out this breaks live migration, but
> > > probably other things too. We can't think of a workaround.
> > >
> > > I wondered why users would want to do this anyway. It turns out that
> > > sometimes cinder won't let you migrate a volume, but nova
> > > volume-update doesn't do those checks (as they're specific to cinder
> > > internals, none of nova's business, and duplicating them would be
> > > fragile, so we're not adding them!). Specifically we know that cinder
> > > won't let you migrate a volume with snapshots. There may be other
> > > reasons. If cinder won't let you migrate your volume, you can still
> > > move your data by using nova's volume-update, even though you'll end
> > > up with a new volume on the destination, and a slightly broken
> > > instance. Apparently the former is a trade-off worth making, but the
> > > latter has been reported as a bug.
> > >
> >
> > Hi Matt,
> >
> > As you know, I'm in favor of making this REST API call only authorized
> > for Cinder to avoid messing the cloud.
> >
> > I know you wanted Cinder to have a solution to do live migrations of
> > volumes with snapshots, and while this is not possible to do in a
> > reasonable fashion, I kept thinking about it given your strong feelings
> > to provide a solution for users that really need this, and I think we
> > may have a "reasonable" compromise.
> >
> > The solution is conceptually simple.  We add a new API microversion in
> > Cinder that adds and optional parameter called "generic_keep_source"
> > (defaults to False) to both migrate and retype operations.
> >
> > This means that if the driver optimized migration cannot do the
> > migration and the generic migration code is the one doing the migration,
> > then, instead of our final step being to swap the volume id's and
> > deleting the source volume, what we would do is to swap the volume id's
> > and move all the snapshots to reference the new volume.  Then we would
> > create a user message with the new ID of the volume.
> >
> > This way we can preserve the old volume with all its snapshots and do
> > the live migration.
> >
> > The implementation is a little bit tricky, as we'll have to add a new
> > "update_migrated_volume" mechanism to support the renaming of both
> > volumes, since the old one wouldn't work with this among other things,
> > but it's doable.
> >
> > Unfortunately I don't have the time right now to work on this...
>
> Sounds promising, and honestly more than I'd have hoped for.
>
> Matt
>

Hi Matt,

Reading Sean's reply I notice that I phrased that wrong.  The volume on
the new storage backend wouldn't have any snapshots.

The result of the operation would be a new volume with the old ID and no
snapshots (this would be the one in use by Nova), and the old volume
with all the snapshots having a new ID on the DB.

Due to Cinder's mechanism to create this new volume we wouldn't be
returning it on the REST API call, but as a user message instead.

Sorry for the confusion.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-23 Thread Gorka Eguileor
On 22/08, Sean McGinnis wrote:
> >
> > The solution is conceptually simple.  We add a new API microversion in
> > Cinder that adds an optional parameter called "generic_keep_source"
> > (defaults to False) to both migrate and retype operations.
> >
> > This means that if the driver optimized migration cannot do the
> > migration and the generic migration code is the one doing the migration,
> > then, instead of our final step being to swap the volume id's and
> > deleting the source volume, what we would do is to swap the volume id's
> > and move all the snapshots to reference the new volume.  Then we would
> > create a user message with the new ID of the volume.
> >
>
> How would you propose to "move all the snapshots to reference the new volume"?
> Most storage does not allow a snapshot to be moved from one volume to another.
> really the only way a migration of a snapshot can work across all storage 
> types
> would be to incrementally copy the data from a source to a destination up to
> the point of the oldest snapshot, create a new snapshot on the new volume, 
> then
> proceed through until all snapshots have been rebuilt on the new volume.
>

Hi Sean,

Sorry, I phrased that wrong. When I say move the snapshots to the new
volume I mean to the "New Volume DB entry", which is now pointing to the
old volume.

So we wouldn't really be moving the snapshots, we would just be leaving
the old volume with its snapshots under a new UUID, and the old UUID
that the user had attached to Nova will be referencing the new volume.

Again, sorry for the confusion.

Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread ChangBo Guo
+1 for oslotest

Jay Pipes wrote on Thu, Aug 23, 2018 at 11:24 AM:

>
> On Wed, Aug 22, 2018, 10:13 AM Eric Fried  wrote:
>
>> For some time, nova has been using uuidsentinel [1] which conveniently
>> allows you to get a random UUID in a single LOC with a readable name
>> that's the same every time you reference it within that process (but not
>> across processes). Example usage: [2].
>>
>> We would like other projects (notably the soon-to-be-split-out placement
>> project) to be able to use uuidsentinel without duplicating the code. So
>> we would like to stuff it in an oslo lib.
>>
>> The question is whether it should live in oslotest [3] or in
>> oslo_utils.uuidutils [4]. The proposed patches are (almost) the same.
>> The issues we've thought of so far:
>>
>> - If this thing is used only for test, oslotest makes sense. We haven't
>> thought of a non-test use, but somebody surely will.
>> - Conversely, if we put it in oslo_utils, we're kinda saying we support
>> it for non-test too. (This is why the oslo_utils version does some extra
>> work for thread safety and collision avoidance.)
>> - In oslotest, awkwardness is necessary to avoid circular importing:
>> uuidsentinel uses oslo_utils.uuidutils, which requires oslotest. In
>> oslo_utils.uuidutils, everything is right there.
>>
>
> My preference is to put it in oslotest. Why does oslo_utils.uuidutils
> import oslotest? That makes zero sense to me...
>
> -jay
>
> - It's a... UUID util. If I didn't know anything and I was looking for a
>> UUID util like uuidsentinel, I would look in a module called uuidutils
>> first.
>>
>> We hereby solicit your opinions, either by further discussion here or as
>> votes on the respective patches.
>>
>> Thanks,
>> efried
>>
>> [1]
>>
>> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/uuidsentinel.py
>> [2]
>>
>> https://github.com/openstack/nova/blob/17b69575bc240ca1dd8b7a681de846d90f3b642c/nova/tests/functional/api/openstack/placement/db/test_resource_provider.py#L109-L115
>> [3] https://review.openstack.org/594068
>> [4] https://review.openstack.org/594179
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-23 Thread Thierry Carrez

melanie witt wrote:
[...] 
I have been trying to explain why over several replies to this thread. 
Fracturing a group is not something anyone does to foster cooperation 
and shared priorities and goals. 
[...]


I would argue that the group is already fractured; otherwise we would
not even be having this discussion.


In the OpenStack governance model, contributors to a given piece of code 
control its destiny. We have two safety valves: disagreements between
contributors on that specific piece of code are escalated at the PTL
level, and disagreements between teams handling different pieces of code
that need to interoperate are escalated at the TC level. In reality, in 
OpenStack history most disagreements were discussed and solved directly 
between contributors or teams, since nobody likes to appeal to the 
safety valves.


That model implies at the base that contributors to a given piece of 
code are in control: project team boundaries need to be aligned on
those discrete groups. We dropped the concept of "Programs" a while ago 
specifically to avoid creating subgroups ruled by larger groups, or 
artificial domains of ownership.


The key issue here is that there is a distinct subgroup within the 
group. It should be its own team, but it's not. You are saying that 
keeping the subgroup governed inside the larger group ensures that 
features that operators and users need get delivered to them. But having
one group retain control over another is not how we ensure that in
OpenStack; we ensure it by using the model above.


Are you saying that you don't think the OpenStack governance model,
where each team talks to its peers in terms of requirements and where
conflicts between teams may be escalated to the TC if they ever arise,
will ultimately ensure that features that operators and users need get
delivered to them? That keeping placement inside Nova governance will
yield better results?


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][goal][python3] monasca's zuul migration is only partially complete

2018-08-23 Thread Doug Szumski

Reply in-line.


On 23/08/18 00:32, Doug Hellmann wrote:

Monasca team,

It looks like you have self-proposed some, but not all, of the
patches to import the zuul settings into monasca repositories.

I found these:

+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+
| Subject                                             | Repo                           | Tests        | Workflow   | URL                                 | Branch        |
+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+
| Removed dependency on supervisor                    | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/554304 | master        |
| fix tox python3 overrides                           | openstack/monasca-agent        | VERIFIED     | MERGED     | https://review.openstack.org/574693 | master        |
| fix tox python3 overrides                           | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/572970 | master        |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590698 | stable/ocata  |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/590355 | stable/pike   |
| import zuul job settings from project-config        | openstack/monasca-api          | VERIFIED     | MERGED     | https://review.openstack.org/589928 | stable/queens |
| fix tox python3 overrides                           | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/572910 | master        |
| ignore python2-specific code under python3 for pep8 | openstack/monasca-common       | VERIFIED     | MERGED     | https://review.openstack.org/573002 | master        |
| fix tox python3 overrides                           | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/572971 | master        |
| replace use of 'unicode' builtin                    | openstack/monasca-log-api      | VERIFIED     | MERGED     | https://review.openstack.org/573015 | master        |
| fix tox python3 overrides                           | openstack/monasca-statsd       | VERIFIED     | MERGED     | https://review.openstack.org/572911 | master        |
| fix tox python3 overrides                           | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/573344 | master        |
| replace unicode with six.text_type                  | openstack/python-monascaclient | VERIFIED     | MERGED     | https://review.openstack.org/575212 | master        |
|                                                     |                                |              |            |                                     |               |
|                                                     |                                | VERIFIED: 13 | MERGED: 13 |                                     |               |
+-----------------------------------------------------+--------------------------------+--------------+------------+-------------------------------------+---------------+

They do not include the monasca-events-api, monasca-specs,
monasca-persister, monasca-tempest-plugin, monasca-thresh, monasca-ui,
monasca-ceilometer, monasca-transform, monasca-analytics,
monasca-grafana-datasource, and monasca-kibana-plugin repositories.

It also looks like they don’t include some necessary changes for
some branches in some of the other repos, although I haven’t checked
if those branches actually exist so maybe they’re fine.

We also need a patch to project-config to remove the settings for
all of the monasca team’s repositories.

I can generate the missing patches, but doing that now is likely
to introduce some bad patches into the repositories that have had
some work done, so you’ll need to review everything carefully.

In all, it looks like we’re missing around 80+ patches, although
some of the ones I have generated locally may be bogus because of
the existing changes.

I realize Witold is OOO for a while, so I'm emailing the list to
ask the team how you want to proceed. Should I go ahead and propose
the patches I have?

Thanks Doug, we had a discussion and we agreed that the best way to
proceed is for you to submit your patches and we will carefully review
them.


Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Redis licensing terms changes

2018-08-23 Thread Thierry Carrez

Jimmy McArthur wrote:

Hmm...

http://antirez.com/news/120

Today a page about the new Creative Common license in the Redis Labs web 
site was interpreted as if Redis itself switched license. This is not 
the case, Redis is, and will remain, BSD licensed. However in the fake 
news era my attempts to provide the correct information failed, and I’m 
still seeing everywhere “Redis is no longer open source”. The reality is 
that Redis remains BSD, and actually Redis Labs did the right thing 
supporting my effort to keep the Redis core open as usually.


What is happening instead is that certain Redis modules, developed 
inside Redis Labs, are now released under the Common Clause (using 
Apache license as a base license). This means that basically certain 
enterprise add-ons, instead of being completely closed source as they 
could be, will be available with a more permissive license.


Right, they switched to an open core model, with "enterprise" features 
moving from open source (AGPL) to proprietary (the so-called Commons 
clause). So we need to evaluate our use of Redis since:


1/ We generally prefer our default drivers to use truly open source 
backends (not open core nor proprietary)


2/ I have no idea how usable Redis core is in our use case without the 
now-proprietary modules (or how usable Redis core will stay in the 
future now that Redis Labs has an incentive to land any "serious"
features in the proprietary modules rather than in core).


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] [magnum-ui] show certificate button bug requesting reviews

2018-08-23 Thread Tobias Urdin

Hello,

Requesting reviews from the magnum-ui core team for
https://review.openstack.org/#/c/595245/

I'm hoping we can make quick work of this and be able to backport it to
the stable/rocky release; it would be ideal to backport it to
stable/queens as well.


Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cyborg] Zoom URL for Aug 29 meeting

2018-08-23 Thread Nadathur, Sundar
For the August 29 weekly meeting [1], the main agenda is the discussion 
of Cyborg device/data models.


We will use this meeting invite to present slides:

Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/189707867

Or iPhone one-tap :
    US: +16465588665,,189707867#  or +14086380986,,189707867#
Or Telephone:
    Dial(for higher quality, dial a number based on your current 
location):

    US: +1 646 558 8665  or +1 408 638 0986
    Meeting ID: 189 707 867
    International numbers available: https://zoom.us/u/dnYoZcYYJ


[1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting

Regards,
Sundar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release][requirements] FFE request for castellan

2018-08-23 Thread Matthew Thode
On 18-08-22 23:06:36, Ade Lee wrote:
> Thanks guys, 
> 
> Sorry - it was not clear to me if I was supposed to do anything
> further.  It seems like the requirements team has approved the FFE and
> the release has merged.  Is there anything further I need to do?
> 
> Thanks,
> Ade
> 
> On Tue, 2018-08-21 at 14:16 -0500, Matthew Thode wrote:
> > On 18-08-21 14:00:41, Ben Nemec wrote:
> > > Because castellan is in global-requirements, we need an FFE from
> > > requirements too.  Can someone from the requirements team respond
> > > to the
> > > review?  Thanks.
> > > 
> > > On 08/16/2018 04:34 PM, Ben Nemec wrote:
> > > > The backport has merged and I've proposed the release here:
> > > > https://review.openstack.org/592746
> > > > 
> > > > On 08/15/2018 11:58 AM, Ade Lee wrote:
> > > > > Done.
> > > > > 
> > > > > https://review.openstack.org/#/c/592154/
> > > > > 
> > > > > Thanks,
> > > > > Ade
> > > > > 
> > > > > On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote:
> > > > > > 
> > > > > > On 08/14/2018 01:56 PM, Sean McGinnis wrote:
> > > > > > > > On 08/10/2018 10:15 AM, Ade Lee wrote:
> > > > > > > > > Hi all,
> > > > > > > > > 
> > > > > > > > > I'd like to request a feature freeze exception to get
> > > > > > > > > the
> > > > > > > > > following
> > > > > > > > > change in for castellan.
> > > > > > > > > 
> > > > > > > > > https://review.openstack.org/#/c/575800/
> > > > > > > > > 
> > > > > > > > > This extends the functionality of the vault backend to
> > > > > > > > > provide
> > > > > > > > > previously uninmplemented functionality, so it should
> > > > > > > > > not break
> > > > > > > > > anyone.
> > > > > > > > > 
> > > > > > > > > The castellan vault plugin is used behind barbican in
> > > > > > > > > the
> > > > > > > > > barbican-
> > > > > > > > > vault plugin.  We'd like to get this change into Rocky
> > > > > > > > > so that
> > > > > > > > > we can
> > > > > > > > > release Barbican with complete functionality on this
> > > > > > > > > backend
> > > > > > > > > (along
> > > > > > > > > with a complete set of passing functional tests).
> > > > > > > > 
> > > > > > > > This does seem fairly low risk since it's just
> > > > > > > > implementing a
> > > > > > > > function that
> > > > > > > > previously raised a NotImplemented exception.  However,
> > > > > > > > with it
> > > > > > > > being so
> > > > > > > > late in the cycle I think we need the release team's
> > > > > > > > input on
> > > > > > > > whether this
> > > > > > > > is possible.  Most of the release FFE's I've seen have
> > > > > > > > been for
> > > > > > > > critical
> > > > > > > > bugs, not actual new features.  I've added that tag to
> > > > > > > > this
> > > > > > > > thread so
> > > > > > > > hopefully they can weigh in.
> > > > > > > > 
> > > > > > > 
> > > > > > > As far as releases go, this should be fine. If this doesn't
> > > > > > > affect
> > > > > > > any other
> > > > > > > projects and would just be a late merging feature, as long
> > > > > > > as the
> > > > > > > castellan
> > > > > > > team has considered the risk of adding code so late and is
> > > > > > > comfortable with
> > > > > > > that, this is OK.
> > > > > > > 
> > > > > > > Castellan follows the cycle-with-intermediary release
> > > > > > > model, so the
> > > > > > > final Rocky
> > > > > > > release just needs to be done by next Thursday. I do see
> > > > > > > the
> > > > > > > stable/rocky
> > > > > > > branch has already been created for this repo, so it would
> > > > > > > need to
> > > > > > > merge to
> > > > > > > master first (technically stein), then get cherry-picked to
> > > > > > > stable/rocky.
> > > > > > 
> > > > > > Okay, sounds good.  It's already merged to master so we're
> > > > > > good
> > > > > > there.
> > > > > > 
> > > > > > Ade, can you get the backport proposed?
> > > > > > 
> > 
> > I've approved it for a UC only bump
> > 

We are still waiting on https://review.openstack.org/594541 to merge,
but I already voted and noted that it was FFE approved.

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev