Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-26 Thread Dougal Matthews
+1!

On 23 January 2017 at 19:03, Emilien Macchi  wrote:

> Greeting folks,
>
> I would like to propose some changes in our core members:
>
> - Remove Jay Dobies, who has not been active in TripleO for a while
> (thanks Jay for your hard work!).
> - Add Flavio Percoco as core on the tripleo-common and tripleo-heat-templates
> docker bits.
> - Add Steve Baker on os-collect-config and also the docker bits in
> tripleo-common and tripleo-heat-templates.
>
> Both Flavio and Steve have been involved in deploying TripleO in containers,
> and their contributions are very valuable. I would like to encourage them to
> keep doing more reviews, both in and outside the container bits.
>
> As usual, core members are welcome to vote on the changes.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-26 Thread Dougal Matthews
+1!

On 25 January 2017 at 15:28, Steven Hardy  wrote:

> On Tue, Jan 24, 2017 at 08:52:51AM -0500, Emilien Macchi wrote:
> > I have been discussing this with the TripleO UI core reviewers, and it's
> > pretty clear that Honza's work has been valuable, so we can propose him as
> > part of the TripleO UI core team.
> > The quality of his code and reviews makes him a good candidate, and it
> > would also help the other two core reviewers accelerate the review process
> > in the UI component.
> >
> > As usual, this is open for discussion. TripleO UI core and TripleO core,
> > please vote.
>
> +1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-26 Thread Dougal Matthews
+1!

On 26 January 2017 at 18:36, Harry Rybacki  wrote:

> On Thu, Jan 26, 2017 at 12:25 PM, Martin André  wrote:
> > On Tue, Jan 24, 2017 at 6:03 PM, Juan Antonio Osorio
> >  wrote:
> >> Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
> >> on the current CI solution and in getting tripleo-quickstart jobs for
> >> it); So I would like to propose him as part of the TripleO CI core team.
> >>
> >> I think he'll make a great addition to the team and will help move CI
> >> issues forward quicker.
> >
> > +1
> >
> +1
>
> >> Best Regards,
> >>
> >>
> >>
> >> --
> >> Juan Antonio Osorio R.
> >> jaosorior
> >>
> >>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] What's Up, Doc? Farewell edition

2017-01-26 Thread Lana Brindley
Thanks Shilla, it was a pleasure :)

L

On 27/01/17 14:27, Shilla Saebi wrote:
> We will miss you Lana, you did a phenomenal job and welcome aboard Alexandra! 
> 

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] What's Up, Doc? Farewell edition

2017-01-26 Thread Shilla Saebi
We will miss you Lana, you did a phenomenal job and welcome aboard
Alexandra!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-26 Thread Renat Akhmerov

> On 26 Jan 2017, at 22:29, Dougal Matthews  wrote:
> 
> 
> 
> On 24 January 2017 at 16:16, Mikhail Fedosin wrote:
> Hey, Flavio :) Thanks for your questions!
> 
> As you said, currently only Nokia is adopting Glare for its own platform, but
> if we talk about OpenStack, I believe Mistral will start to use it soon. 
> 
> Has there been some discussion surrounding Mistral and Glare? I'd be 
> interested in reading more about those plans and ideas.


Dougal, I’ve cherished this idea for a long time and discussed it with a few
people, but only informally.
I believe we haven’t had any official discussions around it yet. I included the
corresponding topic in our PTG etherpad to finally get this going. Mike and I
will bring this topic up for discussion.
I believe it’s worth it. We can also discuss the basics before the PTG, but in
a separate thread.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up, Doc? Farewell edition

2017-01-26 Thread Lana Brindley
On 27/01/17 12:08, Chris Smart wrote:
> Congrats Alex, I'm confident you will do a fabulous job!
> 
> 
> Lana, from what I've seen you've been a great PTL and it will be great
> to still have you around. Thanks for your leadership and support! Great
> job.
> 
> 
> -c
> 

d'aww, thanks Chris :)

L

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tag in the API breaks in the old microversion

2017-01-26 Thread Artom Lifshitz
Since the consensus is to fix this with a new microversion, I've
submitted some patches:

* https://review.openstack.org/#/c/426030/
  A spec for the new microversion in case folks want one.

* https://review.openstack.org/#/c/424759/
  The new microversion itself. I've already had feedback from Alex and
Ghanshyam (thanks guys!), and I've tried to address it.

* https://review.openstack.org/#/c/425876/
  A patch to - as Alex and Sean suggested - stop passing plain string
version to the schema extension point.

On Tue, Jan 24, 2017 at 10:38 PM, Matt Riedemann  wrote:
> On 1/24/2017 8:16 PM, Alex Xu wrote:
>>
>>
>>
>> One other thing: we're going to need to also fix this in
>> python-novaclient, which we might want to do first, or work
>> concurrently, since that's going to give us the client side
>> perspective on how gross it will be to deal with this issue.
>>
>>
>
> This is Andrey's patch to at least document the limitation:
>
> https://review.openstack.org/#/c/424745/
>
> We'll have to fix the client to use the new microversion in Pike (or at
> least release the fix in Pike) since the client release freeze is Thursday.
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>



-- 
--
Artom Lifshitz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up, Doc? Farewell edition

2017-01-26 Thread Chris Smart


Congrats Alex, I'm confident you will do a fabulous job!


Lana, from what I've seen you've been a great PTL and it will be great
to still have you around. Thanks for your leadership and support! Great
job.


-c

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] focus for RC1 week

2017-01-26 Thread Emilien Macchi
On Thu, Jan 26, 2017 at 8:23 AM, Emilien Macchi  wrote:
> Folks,
>
> Here's a short term agenda for action items in TripleO team:
>
> ## Jan 26th (today)
> We are releasing python-tripleoclient and stable/ocata will be created
> for this project.

This step is now done: we released tripleoclient 6.0.0.

> If you're working on a bug that is a candidate for backport, please tag
> it "ocata-backport-potential".
> Priority has to be critical or high for it to be backported.
>
> ## Jan 27th (tomorrow)
> Once we have python-tripleoclient in place with a stable/ocata branch,
> we'll need to do advanced testing of TripleO CI and make sure
> everything is in place to deploy Ocata packaging from the right RDO
> builds.
> We'll work closely with RDO folks on this side, but both
> project-config & tripleo-ci should be ready™.

I've kicked off a patch that will run CI jobs against stable/ocata:
https://review.openstack.org/#/c/426017/

People are welcome to take a look and check that everything is all right.

> ## Next week until March 10th
> RC & final releases.
> Feature & CI freeze will start.
> During this time, folks should focus on upgrades from Newton to Ocata and
> on fixing bugs [1].
> Please do the FFE or CIFE [2] requests on openstack-dev [tripleo].
>
> Please let us know any concern or feedback, it's always welcome!
>
> [1] https://launchpad.net/tripleo/+milestone/ocata-3
>  https://launchpad.net/tripleo/+milestone/ocata-rc1
> [2] I think I just invented it: CI feature exception
>
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] PTL Candidacy for Pike

2017-01-26 Thread Masahito MUROI

Hi everyone,

This is my candidacy for Blazar PTL for the Pike release cycle. I'm pleased
to announce this self-nomination: a group of developers gathered at the
Barcelona summit to re-activate the Blazar project, and that is how our
current activities started.


First of all, I'd like to thank all the developers involved in previous
Blazar activities. Without their great work, we would have had to define our
goals from scratch and could not move forward as fast as we do now.


We all have concrete use cases and demands for using Blazar in production,
even though the current activities only started a few months ago. If elected,
I will push these activities forward and work to realize our requirements,
which come from different technical areas.


For achieving the goal I'd like to focus on the following in Pike cycle:

* Blazar's Features: Making the host reservation feature stable and starting
to support other resources


The current main activity is making the host reservation feature stable,
since all of us need it. I'm sure we'll achieve this in the next cycle, given
the recent active discussion and development.


In addition to host reservation, I'd like to start supporting reservation of
other resources for our use cases. I think this goal is more challenging and
difficult than the host one, but I believe the team can achieve it, since all
of the team members have great knowledge and skills.


* Community: Encouraging diversity in this team

The latest activities have been started by just a few members, which makes
the team look less diverse.


I'd like to encourage more people to join this team, both as users and as
developers. I believe this will give Blazar more useful features and move
this team forward.


* Blazar's position: Encouraging Blazar to join the Big Tent

The Blazar project is currently outside the Big Tent. The motivation of this
team is to make OpenStack more useful through Blazar for various demands and
problems. Becoming a Big Tent project is an easy way to share solutions with
others who have the same demands or problems.


best regards,
Masahito



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? Farewell edition

2017-01-26 Thread Lana Brindley
Hi everyone,

I must confess, I'm feeling a little sad. This is my very last What's Up, Doc. 
Next week, I'll be handling the Docs PTL mantle to Alexandra Settle. I've 
worked with Alex in varying capacities over many years, and I have no doubt 
that she will be a fabulous PTL. I'm really looking forward to working with 
her, and seeing what new directions she's able to take this team. I want to 
take this opportunity to thank each and every one of you for your continued 
support and encouragement over the last two years (and almost-four releases!). 
I have had an absolute ball doing this job, and it's all because of the amazing 
people I get to work with every day. Of course, I'm not going anywhere just 
yet. I will stay on as a core contributor, and continue to help out as much as 
I can.

In the meantime, we have a release to get out the door! We now only have 26 
days until Ocata is released, please keep an eye on the docs mailing list for 
updates, and consider getting your hands dirty with some Install Guide testing: 
https://wiki.openstack.org/wiki/Documentation/OcataDocTesting

== Progress towards Ocata ==

26 days to go!

Closed 211 bugs so far.

Release tasks are being tracked here: 
https://wiki.openstack.org/wiki/Documentation/OcataDeliverables
Install Guide testing is being tracked here: 
https://wiki.openstack.org/wiki/Documentation/OcataDocTesting

== The Road to PTG in Atlanta ==

Event info is available here: http://www.openstack.org/ptg 
Purchase tickets here: https://pikeptg.eventbrite.com/ 

Docs is a horizontal project, so our sessions will run across the Monday and 
Tuesday of the event. We will be combining the docs event with i18n, so 
translators and docs people will all be in the room together.

Conversation topics for Docs and i18n here: 
https://etherpad.openstack.org/p/docs-i18n-ptg-pike

Also, a quick note that the CFP and ticket sales for *Boston in May* are now 
open: https://www.openstack.org/summit/boston-2017/call-for-presentations/

== Speciality Team Reports ==

No speciality team reports this week, as we didn't have quorum for the docs 
meeting. 

== Doc team meeting ==

Our next meeting will be on Thursday 9 February at 2100 UTC in 
#openstack-meeting-alt

Meeting chair will be Alexandra Settle.

For more meeting details, including minutes and the agenda: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#27_January_2017

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] o-3 and FFEs

2017-01-26 Thread Matt Riedemann

This is just a short update on the o-3 tag and FFEs.

We have several changes approved and in the gate, but because of some 
resets those won't make the o-3 tag, which I've requested here:


https://review.openstack.org/#/c/425992/

However, those already approved changes will still get merged and into 
Ocata, they just won't be in the o-3 tag, but they'll be in the rc1 tag 
and in my opinion that's what really matters.


The main thing we're still working on getting in for Ocata is the filter 
scheduler using the placement service:


https://review.openstack.org/#/c/417961/

That's still trying to work through issues with multinode grenade.

As for FFEs, I'm not considering anything that's not listed as a priority:

https://specs.openstack.org/openstack/nova-specs/priorities/ocata-priorities.html

Which at this point leaves Sylvain's scheduler change for placement and 
then probably Jay's final change for supporting custom resource classes 
for Ironic:


https://review.openstack.org/404472

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ocata Feature Freeze

2017-01-26 Thread Armando M.
On 26 January 2017 at 12:53, Dariusz Śmigiel 
wrote:

> Dear Neutrinos,
> Feature Freeze day has arrived! Ocata-3 has been released, which means
> that no new features will be allowed into the current release. The only
> patches approved to be merged should be release critical or gate
> blockers.
> All outstanding features that still need to land in Ocata will need to
> receive "Feature Freeze Exception" status.
>
> From now on, we have one week till RC1.
> Please double check release notes and make sure everything is in order.
>
> Thanks,
> Dariusz
> your Release Liaison
>

Dasm, thanks for the details provided. Let me also add that:

You can find milestone deliverables at [1].

Between now and the official release date (week of Feb 20th, calendar [2]
for more details), we will be busy with the following:

   - cleaning up release notes [3]
   - handling release tasks [4]
   - squashing doc bugs [5]
   - dealing with gate failures [6]
   - applying for FFEs on the ocata postmortem [7]
   - for pending efforts that get a FFE granted, there's time until we cut
   an RC1 [8]
   - Pike-1 opens up as soon as RC1 is cut [9] (which I took the liberty of
   seeding based on reasonable expectations of the progress we can make on
   outstanding efforts)
   - if you find a RC critical bug, please file it and add bug tag
   'ocata-rc-potential' [10]
   - If you are a subproject maintainer, please check [11], switch to
   release mindset and get ready to prepare an Ocata release.

Be mindful of what you approve for merge (e.g. patches containing DB
migration need special attention), and double check whether it's aimed at
making RC1 solid/complete. If not, please refrain from putting it in the
gate queue, and most of all, *recheck* mindfully.

Many thanks for your help, and when in doubt, reach out!

Cheers,
Armando

[1] https://releases.openstack.org/ocata/index.html
[2] https://releases.openstack.org/ocata/schedule.html
[3] http://docs.openstack.org/releasenotes/neutron/unreleased.html
[4] http://docs.openstack.org/developer/neutron/policies/release-checklist.html
[5] https://bugs.launchpad.net/neutron/+bugs?field.tag=doc
[6] https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=NEW%3Alist=CONFIRMED=gate-failure
[7] https://review.openstack.org/#/c/425990
[8] https://launchpad.net/neutron/+milestone/ocata-rc1
[9] https://launchpad.net/neutron/+milestone/pike-1
[10] https://bugs.launchpad.net/neutron/+bugs/?field.tag=ocata-rc-potential
[11] https://review.openstack.org/#/c/389397/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] priorities for the coming week (01/27-02/02)

2017-01-26 Thread Brian Rosmaita
First, please read the email from Ian (our release czar) about the
feature freeze:
http://lists.openstack.org/pipermail/openstack-dev/2017-January/111067.html

We have three priorities this week.  The first is an all-hands-on-deck
super priority, namely, reviewing (and re-reviewing, as appropriate) the
code associated with Rolling Upgrades, which has received a FFE:

- https://review.openstack.org/#/c/382958/
- https://review.openstack.org/#/c/392993/
- https://review.openstack.org/#/c/397409/
- https://review.openstack.org/#/c/424774/

Please start your reviews now.  We don't want to be in a situation next
week where people are rush-reviewing things.

The other items, which are secondary and not as important as the above, are:

* nominate any appropriate glanceclient bugs that didn't make it into
this week's release by tagging them in Launchpad with
"ocata-backport-potential".  Please do this earlier rather than later,
but do it consistently with the above.

* ongoing work on the security bug - actually, this one is pretty
important. For anyone in coresec, the only excuse for not reviewing Rolling
Upgrades patches is that you are actively working on this bug.

Have a good week, everyone!

cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-26 Thread Doug Hellmann
Excerpts from Takashi Yamamoto's message of 2017-01-26 11:42:48 +0900:
> hi,
> 
> On Sat, Jan 14, 2017 at 2:17 AM, Doug Hellmann  wrote:
> > Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
> >> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> >> > hi,
> >> >
> >> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
> >> >> Hi
> >> >>
> >> >> As of today, the project neutron-vpnaas is no longer part of the neutron
> >> >> governance. This was a decision reached after the project saw a dramatic
> >> >> drop in active development over a prolonged period of time.
> >> >>
> >> >> What does this mean in practice?
> >> >>
> >> >> From a visibility point of view, release notes and documentation will no
> >> >> longer appear on openstack.org as of Ocata going forward.
> >> >> No more releases will be published by the neutron release team.
> >> >> The neutron team will stop proposing fixes for the upstream CI, if not
> >> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
> >> >>
> >> >> How does it affect you, the user or the deployer?
> >> >>
> >> >> You can continue to use vpnaas and its CLI via the python-neutronclient 
> >> >> and
> >> >> expect it to work with neutron up until the newton
> >> >> release/python-neutronclient 6.0.0. After this point, if you want a 
> >> >> release
> >> >> that works for Ocata or newer, you need to proactively request a release
> >> >> [5], and reach out to a member of the neutron release team [3] for 
> >> >> approval.
> >> >
> >> > I want to make an Ocata release (and, more importantly, the stable branch,
> >> > for the benefit of consuming subprojects).
> >> > For that purpose, the next step would be ocata-3, right?
> >>
> >> Hey Takashi,
> >> If you want to release new version of neutron-vpnaas, please look at [1].
> >> This is the place, which you need to update and based on provided
> >> details, tags and branches will be cut.
> >>
> >> [1] 
> >> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
> >
> > Unfortunately, since vpnaas is no longer part of an official project,
> > we won't be using the releases repository to manage and publish
> > information about the releases. It'll need to be done by hand.
> 
> who can/should do it by hand?

I can do it. Let me know the version number, and for each repository the
SHA of the commit on the master branch to be tagged.
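For anyone curious what "by hand" involves, a minimal sketch in a throwaway repository (the version and SHA below are placeholders, not real neutron-vpnaas values, and the actual release process uses signed tags pushed through Gerrit):

```shell
# Illustrative only: tag a release by hand in a scratch repository.
# Real releases use signed tags (git tag -s) and release-team credentials.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "commit to be released"
sha=$(git rev-parse HEAD)

# Create an annotated tag on the agreed SHA.
git -c user.name=demo -c user.email=demo@example.com \
    tag -a 10.0.0 -m "Release 10.0.0" "$sha"
git tag --list
```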

Doug

> 
> >
> > Doug
> >
> >>
> >> BR, Dariusz
> >>
> >
> 



[openstack-dev] [vitrage][ptl] PTL candidacy for Pike

2017-01-26 Thread Afek, Ifat (Nokia - IL)
Hi All,

I am announcing my candidacy for PTL of the OpenStack Vitrage project for the
Pike cycle.

I have been the Vitrage PTL from its first day. I was involved in the first
stages of the design and implementation (in Mitaka), was thrilled when it
became an official OpenStack project only seven months later (in Newton),
and spent more and more time on relationships with our growing community (in
Ocata).

I think that the Vitrage project has a group of extremely talented developers,
who achieved in less than a year and a half all of the major goals we set for
ourselves. Seeing how Vitrage grew from an idea to a mature, production-grade
and well-known project in such a short period of time was an amazing experience
for me.

With that said, we have quite a few challenges ahead. I’ll describe the areas
that I believe we should focus on in the Pike cycle.

- Extend our community. As time passes, we see more and more interest in
  Vitrage. The more contributors we have, the better Vitrage can get.

- Support more use cases
  - Deduce alarms that are not reported as expected (lost or delayed)
  - Add more deduced alarms and RCA templates (e.g. for network monitoring).
It took us a while to build the infrastructure, but now it’s there.
It’s time to think of new alarms, what effect they will have on the system,
and who can be notified and benefit from this information.

- Integrate with more OpenStack and external projects, for the sake of the
  above goal.

- Improve Vitrage usability. Vitrage provides a lot of valuable information
  that is presented in the Horizon UI. As informative as it is, the way
  it is displayed is not ideal and should be enhanced.

- Support a persistent graph database. We have been talking about it for a
  while, it’s time to implement. Our in-memory graph database works very well,
  but a persistent one has its own advantages.

- Enhance the Vitrage evaluator templates language, and support full template
  CRUD API.

Overall, we would like Vitrage to become a project that every cloud operator
would like to use, and I believe we are heading in the right direction. I think that
the Pike cycle will be a very interesting one.

Thanks,
Ifat.







[openstack-dev] [neutron] Ocata Feature Freeze

2017-01-26 Thread Dariusz Śmigiel
Dear Neutrinos,
Feature Freeze day has arrived! Ocata-3 has been released, which means
that no new features will be allowed into the current release. The only
patches approved for merging should be release-critical fixes or gate
blockers.
All outstanding features, that would need to be landed into Ocata,
will need to receive "Feature Freeze Exception" status.

From now on, we have one week till RC1.
Please double check release notes and make sure everything is in order.

Thanks,
Dariusz
your Release Liaison



Re: [openstack-dev] [nova] [placement] placement api request analysis

2017-01-26 Thread Chris Dent

On Wed, 25 Jan 2017, Chris Dent wrote:


#B3
The new GET to /placement/allocations is happening when the
resource tracker calls _update_usage_from_instance, which is always
being called because is_new_instance is always true in that method,
even when the instance is not "new". This is happening because the
tracked_instances dict is _always_ getting cleared before
_update_usage_from_instance is called. Which is weird, because
it appears that it is that method's job to update tracked_instances.
If I remove the clear(), the GET on /placement/allocations goes away,
but I'm not sure what else this will break. The addition of that line
was a long time ago, in this change (I think):
https://review.openstack.org/#/c/13182/


I made a bug about this:

https://bugs.launchpad.net/nova/+bug/1659647

and have the gate looking at what breaks if the clear goes away:

https://review.openstack.org/#/c/425885/

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Release-job-failures][watcher][puppet] Release of openstack/puppet-watcher failed

2017-01-26 Thread Emilien Macchi
On Thu, Jan 26, 2017 at 2:56 PM, Doug Hellmann  wrote:
> Excerpts from jenkins's message of 2017-01-26 19:26:35 +:
>> Build failed.
>>
>> - puppet-watcher-tarball 
>> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-tarball/75519e7/
>>  : SUCCESS in 2m 20s
>
> The tarball generated had version number 10.1.0 rather than the tagged
> 10.2.0.
>
>> - puppet-watcher-tarball-signing 
>> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-tarball-signing/2a4f153/
>>  : FAILURE in 9s
>
> The tarball with version 10.2.0 doesn't exist, so the job failed to
> download it.
>
> To resolve this, I suggest updating the version in the puppet code to
> 10.3.0 and then tagging that version for just openstack/puppet-watcher.
>
> Doug
>
>> - puppet-watcher-announce-release 
>> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-announce-release/8460e62/
>>  : SUCCESS in 4m 02s
>>
>

Just FYI, as corrective and preventive action:
https://review.openstack.org/#/c/425910/

-- 
Emilien Macchi



Re: [openstack-dev] [tacker] Tacker PTL Non-candidacy

2017-01-26 Thread Sahdev P Zala
Hi Sridhar, 

Thanks for your leadership in Tacker for the last two years!! Great years for 
the project. I am glad that you will be continuing to contribute to the 
project. I look forward to working with you on further collaboration between 
Tacker and TOSCA translator projects.

Regards, 
Sahdev Zala




From:   Sridhar Ramaswamy 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   01/18/2017 06:11 PM
Subject:[openstack-dev] [tacker] Tacker PTL Non-candidacy



As I announced in the last Tacker weekly meeting, I'm not planning to run 
for Pike PTL position. Having served in this role for the last three 
cycles (including for the periods before it was a big-tent project), I 
think it is time for someone else to step in and take this forward. I'll 
continue to contribute as a core-team member. I'll be available to help 
the new PTL in any ways needed.

Personally, it has been such a rewarding experience. I would like to 
thank all the contributors - cores and non-core members - who supported 
this project and me. We had an incredible amount of cross-project 
collaboration in tacker, with the likes of tosca-parser / heat-translator, 
neutron networking-sfc, senlin, and mistral - my sincere thanks to all the 
PTLs and the members of those projects. 

Now going forward, we have tons to do in Tacker - towards making it a 
leading, community-built TOSCA Orchestrator service. And that, not just 
for the current focus area of NFV but also expand into Enterprise and 
Container use-cases. Fun times!

thanks,
Sridhar
irc: sridhar_ram






Re: [openstack-dev] [Release-job-failures][watcher][puppet] Release of openstack/puppet-watcher failed

2017-01-26 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-01-26 19:26:35 +:
> Build failed.
> 
> - puppet-watcher-tarball 
> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-tarball/75519e7/
>  : SUCCESS in 2m 20s

The tarball generated had version number 10.1.0 rather than the tagged
10.2.0.

> - puppet-watcher-tarball-signing 
> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-tarball-signing/2a4f153/
>  : FAILURE in 9s

The tarball with version 10.2.0 doesn't exist, so the job failed to
download it.

To resolve this, I suggest updating the version in the puppet code to
10.3.0 and then tagging that version for just openstack/puppet-watcher.
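A sketch of that fix (the metadata.json contents and repository here are mocked up for illustration; puppet modules record their version in metadata.json, and the tag must match it):

```shell
# Illustrative sketch of the suggested fix: bump the version recorded in
# the puppet module's metadata.json so it matches the new tag, then
# commit and tag. All values below are mocked up for demonstration.
set -e
work=$(mktemp -d)
cd "$work"
git init -q .
printf '{\n  "name": "openstack-watcher",\n  "version": "10.1.0"\n}\n' \
    > metadata.json

# Bump the module version to the release we are about to tag.
sed -i 's/"version": "10.1.0"/"version": "10.3.0"/' metadata.json

git add metadata.json
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Release 10.3.0"
git -c user.name=demo -c user.email=demo@example.com \
    tag -a 10.3.0 -m "Release 10.3.0"
git tag --list
```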

Doug

> - puppet-watcher-announce-release 
> http://logs.openstack.org/3d/3dd9ce72aadf433ee5a0381c78e641691bcce8eb/release/puppet-watcher-announce-release/8460e62/
>  : SUCCESS in 4m 02s
> 



[openstack-dev] [release] Release countdown for week R-3 (Ocata RC1 Target), 30 Jan - 3 Feb

2017-01-26 Thread Doug Hellmann
Focus
-----

This week is the Release Candidate target deadline for all
milestone-based projects. Only bug fixes and previously agreed
feature freeze extensions should be merged into master branches.
The RC1 is only 1 week after feature freeze, which is different
from our usual 2 week freeze period, so please stay on top of reviews
and minimize FFEs accordingly.

The requirements list for Ocata is frozen. We will reopen it after
all of the cycle-with-milestones projects have stable branches
created.

Release Tasks
-------------

Review the changes to your projects over the Ocata cycle and ensure
that any necessary release notes are present.

Optionally, add "prelude" release notes to summarize the work that
has been done and highlight anything of special importance.

All projects following the cycle-with-milestones or cycle-with-intermediary
release models should prepare a release candidate by the deadline
on Thursday 2 Feb. Even cycle-with-intermediary projects should
consider this release a candidate, especially for those projects
who have not released at all yet this cycle.

Projects following the cycle-with-milestones model should propose
a patch to openstack/releases to create a 0rc1 tag and a new
stable/ocata branch. Unlike the milestone tags, with release
candidates it is best to wait until the deadline when the project
is stable. This avoids having several release candidates tagged
close together, which discourages users from testing early candidates.
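For reference, such a request is a small YAML file in the openstack/releases repository; a rough sketch of its shape (deliverable name, version, and SHA are placeholders — the authoritative schema is documented in the releases repository itself):

```shell
# Sketch of what an RC1 release request file might look like. The
# project name, version, and hash are placeholders; consult the
# openstack/releases documentation for the authoritative format.
set -e
work=$(mktemp -d)
cat > "$work/example-project.yaml" <<'EOF'
launchpad: example-project
releases:
  - version: 6.0.0.0rc1
    projects:
      - repo: openstack/example-project
        hash: 0000000000000000000000000000000000000000
branches:
  - name: stable/ocata
    location: 6.0.0.0rc1
EOF
grep 'version:' "$work/example-project.yaml"
```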

Projects following the cycle-with-intermediary release model should
also include the stable/ocata branch with their release. Library
deliverables that have been frozen will need a separate branch
request.

Projects following the release-independent model and tagging releases
outside of the automation should review the history of the deliverable
in openstack/releases and update it, if necessary.

General Notes
-------------

After this release cycle I will be reviewing the list of cycle-based
projects that do not prepare releases and providing that information
to the TC for a discussion about whether those projects should be
considered inactive, and therefore should be removed from the
official list. All deliverables saw at least one release for Newton,
and I hope we have the same results for Ocata.

The deadline for documenting community wide goal completion artifacts
is the end of the cycle. Please update the Ocata goals page with
any information needed to understand how the goal affected your
project, and whether there is any work left to be done.

Important Dates
---------------

Ocata RC1 target: 2 Feb

Ocata Final Release candidate deadline: 16 Feb

Ocata release schedule:
http://releases.openstack.org/ocata/schedule.html



Re: [openstack-dev] [networking-sfc]

2017-01-26 Thread Henry Fourie
Michael,
  Regarding horizon support for networking-sfc, the screens shown
in the demo were developed in an earlier patch that was not merged:
https://review.openstack.org/#/c/258430/

This work is still in progress.
 - Louis

-Original Message-
From: Bernard Cafarelli [mailto:bcafa...@redhat.com] 
Sent: Tuesday, January 24, 2017 5:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc]

On 20 January 2017 at 00:06, Michael Gale  wrote:
> Hello,
>
> Are there updated install docs for sfc? The only install steps for 
> a testbed I can find are here and they seem outdated:
> https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
There is also a SFC chapter in the networking guide:
http://docs.openstack.org/newton/networking-guide/config-sfc.html

Which parts do you find outdated? Some references to Ubuntu/OVS versions may 
need a cleanup, but the design and API parts should still be OK (OSC client, 
SFC graph API, symmetric ports and other goodies are still under review and not 
yet merged)

> Also from the conference videos there seems to be some Horizon menu / 
> screens that are available?
Not for networking-sfc directly, but there is a SFC tab in the tacker horizon 
plugin (or will be, someone from the tacker team can confirm
that)


--
Bernard Cafarelli




Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-26 Thread Harry Rybacki
On Thu, Jan 26, 2017 at 12:25 PM, Martin André  wrote:
> On Tue, Jan 24, 2017 at 6:03 PM, Juan Antonio Osorio
>  wrote:
>> Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
>> on the current CI solution and in getting tripleo-quickstart jobs for
>> it); So I would like to propose him as part of the TripleO CI core team.
>>
>> I think he'll make a great addition to the team and will help move CI
>> issues forward quicker.
>
> +1
>
+1

>> Best Regards,
>>
>>
>>
>> --
>> Juan Antonio Osorio R.
>> jaosorior
>>
>>
>>
>



[openstack-dev] [all] Lots of teams without PTL candidates!

2017-01-26 Thread Kendall Nelson
Hello All!

It appears that there are several projects without PTL nominations and we
are reaching the close of the nomination period. The period ends Jan 29,
2017 23:45 UTC.

The current leaderless projects are:
- Cloudkitty
- Community App Catalog
- Congress
- Ec2 API
- Fuel
- Karbor
- Magnum
- Monasca
- OpenStackClient
- OpenStackUX
- Packaging Prm
- Rally
- RefStack
- Requirements
- Searchlight
- Senlin
- Stable Branch Maintenance
- Vitrage
- Winstackers
- Zun

We look forward to seeing your nominations! Good luck in the election!

Thanks,

-Kendall Nelson (diablo_rojo)


Re: [openstack-dev] [OpenStack-I18n] [I18n] PTL Candidacy for Pike

2017-01-26 Thread Frank Kloeker

Ian,

we know you as a reliable, enthusiastic and friendly guy of the OpenStack 
community. I take the opportunity to thank you for all your work. And 
I'm very happy that you want to continue to work in the project. Go 
ahead, may you always have strength and energy for the new challenges.


kind regards

Frank

Am 2017-01-26 17:38, schrieb Ian Y. Choi:

Hello,

I am writing to announce my candidacy for I18n Pike PTL.

To retrospect my activities in Ocata cycle,
my overall feeling is that Ocata cycle is really short.
In fact, I acknowledge that most of the action items I wrote in [1]
are still on-going or have not been completed yet.
Nevertheless, the followings are some memorable activities which I have
involved with and also would like to continue during upcoming Pike 
cycle:


* Well summarized action items from Ocata Design Summit [2]
* IRC meeting time change accordingly with two bi-weekly schedules
* Publicizing IRC meeting announcement and logs [3]
* On-going effort on Zanata upgrade with Xenial with infra team
* Clearer communication on translation imports and criteria [4, 5]
* Landing page updates for translated documents
* Co-participation in Pike PTG with Documentation team [6]

I am seeing that there are many upstream translations,
and many I18n team members participating with great and kind help.
I really appreciate their contributions, and also the help from
especially the Infrastructure team, Documentation team, and Zanata
development team members. I believe that I am able to continue my
effort to finish
what I wrote in [1] and make for better I18n.

Please support and encourage me again.


With many thanks,

/Ian

[1]
https://git.openstack.org/cgit/openstack/election/plain/candidates/ocata/I18n/ianychoi.txt
[2] https://etherpad.openstack.org/p/barcelona-i18n-meetup
[3] https://wiki.openstack.org/wiki/I18N/MeetingLogs
[4] 
http://docs.openstack.org/developer/i18n/reviewing-translation-import.html

[5] http://docs.openstack.org/developer/i18n/infra.html
[6] https://www.openstack.org/ptg


___
OpenStack-I18n mailing list
openstack-i...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n





[openstack-dev] [Openstack] OpenStack Ocata B2 for Ubuntu 16.04 LTS

2017-01-26 Thread Corey Bryant
Hi All,

The Ubuntu OpenStack team is pleased to announce the general availability
of the OpenStack Ocata B2 milestone in Ubuntu 16.04 LTS via the Ubuntu
Cloud Archive.

Ubuntu 16.04 LTS
----------------

You can enable the Ubuntu Cloud Archive pocket for OpenStack Ocata on
Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:ocata

sudo apt update

The Ubuntu Cloud Archive for Ocata includes updates for:

aodh (commit 5363ff85), barbican, ceilometer (commit aa3f491bb), cinder,
congress, designate, designate-dashboard, glance, heat, horizon (commit
158a4c1a), keystone, libvirt (2.5.0), manila, mistral, networking-ovn,
neutron, neutron-fwaas, neutron-lbaas, neutron-vpnaas (commit 47c217e4),
nova, openstack-trove, panko (1.0.0), qemu (2.6.1), sahara, senlin, swift
(2.12.0), watcher (0.33.0), and zaqar.

For a full list of packages and versions, please refer to [0].

APIs now running under apache2 with mod_wsgi
--------------------------------------------

In this milestone we’ve updated the following APIs to run under apache2
with mod_wsgi: aodh-api, barbican-api, ceilometer-api, cinder-api, and
nova-placement-api.

Please keep this in mind as the packages will no longer install systemd
unit files for these services, and will instead install apache2 with
corresponding apache2 sites.

libvirt 2.5.0 changes
---------------------

In this release of libvirt, the libvirt-bin systemd service has been
renamed to libvirtd, and the unix_sock_group has changed from libvirtd to
libvirt.
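Deployment scripts that still reference the old names need a small fix-up; an illustrative rewrite on a mocked-up snippet (the file and its contents are invented for demonstration):

```shell
# Illustrative: rewrite a deployment snippet that still uses the old
# libvirt-bin service name and the old libvirtd socket group. The
# snippet below is mocked up for demonstration only.
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
systemctl restart libvirt-bin
usermod -a -G libvirtd deployer
EOF

# Service: libvirt-bin -> libvirtd; socket group: libvirtd -> libvirt.
sed -i -e 's/libvirt-bin/libvirtd/' \
       -e 's/-G libvirtd/-G libvirt/' "$f"
cat "$f"
```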

Branch Package Builds
---------------------

If you would like to try out the latest updates to branches, we are
delivering continuously integrated packages on each upstream commit via the
following PPA’s:

  sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty

  sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka

  sudo add-apt-repository ppa:openstack-ubuntu-testing/newton

  sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata

Reporting bugs
--------------

If you have any issues please report bugs using the 'ubuntu-bug' tool to
ensure that bugs get logged in the right place in Launchpad:

 sudo ubuntu-bug nova-conductor

Thanks to all who have contributed thus far to OpenStack Ocata, both
upstream and downstream.  And special thanks to the puppet modules team for
their continued early testing of Ocata.

Have fun!

Regards,

Corey

(on behalf of the Ubuntu OpenStack team)

[0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ocata_versions.html


Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-26 Thread Julien Danjou
On Thu, Jan 26 2017, gordon chung wrote:

> On 26/01/17 11:41 AM, Julien Danjou wrote:
>> So here's another question then: why wouldn't there be a "zabbix" alarm
>> type in Aodh that could be created by a user (or another program) and
>> that would be triggered by Aodh when Zabbix does something?
>> Which is something that is really like the event alarm mechanism which
>> already exists. Maybe all that's missing is a
>> Zabbix-to-OpenStack-notification converter to have that feature?
>
> and vitrage would be an alarm orchestrator?

Yup, something like that. It could be the one driving Zabbix and
creating alarms for Zabbix in Aodh when a new host is plugged for
example.

Just thinking out loud. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-26 Thread Thierry Carrez
Davanum Srinivas wrote:
> Clint,
> 
> Pike may be too soon :) as we need to make sure what we have in
> oslo.rootwrap/oslo.privsep work properly in py35. I saw some stuff i
> am still chasing.
> 
> So the one after next will have my vote.

Yes, I'd like us to make enough progress during Pike that people will be
comfortable with it being a Queens goal.

-- 
Thierry Carrez (ttx)



[openstack-dev] [Glance] Feature Freeze In Effect

2017-01-26 Thread Ian Cordasco
Hi Glancers!

Glance 14.0.0.0b3 (Ocata-3) has been released.  With that done Glance
now enters its Feature Freeze period until release.  There is *one*
exception to that freeze (as discussed in today's meeting): Rolling
Upgrades work. That includes

- https://review.openstack.org/#/c/382958/
- https://review.openstack.org/#/c/392993/
- https://review.openstack.org/#/c/397409/

If you are working on Glance, you should be reviewing those and
testing them locally.

No new feature work on Glance will be accepted until stable/ocata has
been created. I will attempt to keep track of all new feature work
coming into the review queue and I will -2 it with the appropriate
message.

If there are any patches in the review queue that aren't already
approved prior to next week's meeting, I will not wait for them to
work their way through Zuul's gate queue. It would be ideal if we do
not have to wait for anything on Thursday to create the release
request.

Note: I will be closely watching our project. If any features are
merged between now and RC-1, I will work to revert them, regardless of
whether it is accidental or not. We've had a few approvals lately that
have been suspect and I expect all of Glance's cores to be a bit more
careful.

--
Ian Cordasco



Re: [openstack-dev] [puppet] Nominating mkarpin for core for the Puppet OpenStack modules

2017-01-26 Thread Alex Schultz
On Thu, Jan 19, 2017 at 3:25 PM, Alex Schultz  wrote:
> Hey Puppet Cores,
>
> I would like to nominate Mykyta Karpin as a Core reviewer for the
> Puppet OpenStack modules.  He has been providing quality patches and
> reviews for some time now and I believe he would be a good addition to
> the team.  His stats for the last 90 days can be viewed here[0]
>
> Please response with your +1 or any objections. If there are no
> objections by Jan 26, I will add him to the core list.
>

As there were no objections, I have added him to the core list. Welcome Mykyta.

Thanks,
-Alex

> Thanks,
> -Alex
>
> [0] http://stackalytics.com/report/contribution/puppet%20openstack-group/90



Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-26 Thread Martin André
On Tue, Jan 24, 2017 at 6:03 PM, Juan Antonio Osorio
 wrote:
> Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
> on the current CI solution and in getting tripleo-quickstart jobs for
> it); So I would like to propose him as part of the TripleO CI core team.
>
> I think he'll make a great addition to the team and will help move CI
> issues forward quicker.

+1

> Best Regards,
>
>
>
> --
> Juan Antonio Osorio R.
> jaosorior
>
>
>



Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-26 Thread gordon chung


On 26/01/17 11:41 AM, Julien Danjou wrote:
> So here's another question then: why wouldn't there be a "zabbix" alarm
> type in Aodh that could be created by a user (or another program) and
> that would be triggered by Aodh when Zabbix does something?
> Which is something that is really like the event alarm mechanism which
> already exists. Maybe all that's missing is a
> Zabbix-to-OpenStack-notification converter to have that feature?

and vitrage would be an alarm orchestrator?

-- 
gord


[openstack-dev] [Solum] PTL Candidacy for Pike

2017-01-26 Thread Devdatta Kulkarni
Hi,

I would like to submit my candidacy to continue as PTL of Solum for
the Pike cycle.

Solum (https://wiki.openstack.org/wiki/Solum) is a big tent project that
supports building, testing, and deploying applications on OpenStack starting from
applications' source code. Applications are built as Docker containers and
deployed using Heat. Application containers are stored in Glance or Swift
(configurable).

Looking back at the Ocata cycle, some of the highlights for our team have
been:
- adding kolla-ansible role [1] and kolla container [2] for Solum
  (Many thanks to Wei Cao for spearheading this effort)
- completing Ocata goal of removing incubated Oslo libraries
- extending our core contributor team [3]

The key focus areas for the Pike cycle include:
- finding an alternative for nova-docker in our devstack setup
- completing the work that we started on adding support for building
applications into VM images
- making it easy for operators to configure and use different build and
deploy options that are supported within Solum

You might remember, we have been using nova-docker as the Virt driver
in our devstack setup. However, nova-docker is being retired [4].
So it is important that we find a replacement that can work in our devstack
setup.
We have been looking at Zun as this replacement [5]. In this cycle I hope
we can finish this work, thus removing our dependence on nova-docker.

One of our contributors has been working on adding support
for building applications into VM images [6].
You can find the details about this use-case and approach
in his thesis [7]. I hope that we are able to complete this work in this
cycle.
This will give Solum the ability to build and deploy applications as VM
images in addition to Docker containers.

Lastly, we should continue working on the Python 3.5 support [8],
which has been approved as the community-wide goal for this cycle.

If you are interested in Solum feel free to reach out to us here, or on
Solum IRC channel
(#solum on chat.freenode.net).

Regards,
Devdatta Kulkarni

[1] https://review.openstack.org/#/c/402225/
[2] https://review.openstack.org/#/c/355408/
[3]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/104699.html
[4]
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109387.html
[5] https://review.openstack.org/#/c/416224/
[6] https://review.openstack.org/#/c/336570/
[7]
https://gitlab.com/ablu/bachelorthesis/builds/9259804/artifacts/file/build/Bachelor%20Thesis%20Erik%20Schilling.pdf
[8] https://etherpad.openstack.org/p/solum-python35-goal


Re: [openstack-dev] [Nova] FFE Request "libvirt-emulator-threads-policy"

2017-01-26 Thread Jay Pipes

On 01/26/2017 11:51 AM, Sahid Orentino Ferdjaoui wrote:

I'm requesting a FFE for the libvirt driver blueprint/spec to isolate
emulator threads [1]. The code is up and ready since Mid of November
2016.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-emulator-threads-policy


I can sponsor this since I've been involved in the series' reviews.

-jay



[openstack-dev] [all][api] POST /api-wg/news

2017-01-26 Thread Chris Dent


Greetings OpenStack community,

Today's meeting [0] covered two main topics: Making sure we get input on the 
process for refactoring the api compatibility guidelines [4] and whether we 
should have some official time at the PTG. Rather than having official time we 
decided that the best thing to do was make sure we are available for those 
topics which are relevant (such as discussion of capabilities [5] and service 
catalog and version endpoint clarification [6]) and to hang out with the 
architecture working group. Of the API-WG cores, edleafe and I (cdent) will be 
at the PTG. Find one of us if there's something that ought to be discussed 
there and we'll try to set something up.

# Newly Published Guidelines

Nothing recently published.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guidelines on usage of state vs. status
  https://review.openstack.org/#/c/411528/

* Clarify the status values in versions
  https://review.openstack.org/#/c/411849/

* Add guideline for invalid query parameters
  https://review.openstack.org/417441

# Guidelines Currently Under Review [3]

* Add guidelines for boolean names
  https://review.openstack.org/#/c/411529/

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] 
http://eavesdrop.openstack.org/meetings/api_wg/2017/api_wg.2017-01-26-16.00.log.html
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://review.openstack.org/#/c/421846/ and 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110384.html
[5] https://review.openstack.org/#/c/386555/
[6] http://lists.openstack.org/pipermail/openstack-dev/2017-January/110043.html

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__


[openstack-dev] [Nova] FFE Request "libvirt-emulator-threads-policy"

2017-01-26 Thread Sahid Orentino Ferdjaoui
I'm requesting a FFE for the libvirt driver blueprint/spec to isolate
emulator threads [1]. The code has been up and ready since mid-November
2016.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-emulator-threads-policy

s.



Re: [openstack-dev] [keystone] Field 'domain_id' doesn't have a default value

2017-01-26 Thread Lance Bragstad
Hi Eduardo,

Master should populate the domain_id for a user before it gets to the sql
layer [0] [1]. Do you have `[identity] default_domain_id` specified in your
keystone.conf?

Can you give some specifics on the upgrade scenario? Number of nodes?
Specific request you're making to create users?


[0]
https://github.com/openstack/keystone/blob/169e66ab8800148c4052a46d2cb321af33e44f77/keystone/identity/controllers.py#L218
[1]
https://github.com/openstack/keystone/blob/169e66ab8800148c4052a46d2cb321af33e44f77/keystone/common/controller.py#L737-L741
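
For illustration, here is a minimal, hypothetical sketch of the defaulting step described above (the real logic lives in the keystone controller code linked in [0] and [1]; `normalize_domain_id` and `DEFAULT_DOMAIN_ID` are invented names for this sketch):

```python
# Hypothetical sketch: if the incoming user reference has no domain_id,
# fall back to the configured [identity] default_domain_id before the
# row ever reaches the SQL layer. If this step is skipped (or the
# default is unset), MySQL rejects the INSERT with error 1364.
DEFAULT_DOMAIN_ID = "default"  # stand-in for CONF.identity.default_domain_id

def normalize_domain_id(user_ref, default_domain_id=DEFAULT_DOMAIN_ID):
    """Return a copy of user_ref with domain_id populated."""
    ref = dict(user_ref)
    if not ref.get("domain_id"):
        ref["domain_id"] = default_domain_id
    return ref

print(normalize_domain_id({"name": "alice"}))
# → {'name': 'alice', 'domain_id': 'default'}
```

If the request in the failing INSERT never went through this kind of normalization, that would explain the missing `domain_id` in the parameters list of the traceback.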

On Thu, Jan 26, 2017 at 10:15 AM, Eduardo Gonzalez 
wrote:

> Hi.
> I’m testing upgrades from Newton to master branch using keystone’s
> zero-downtime upgrade method:
>
> keystone-manage db_sync --expand
> keystone-manage db_sync --migrate
> keystone-manage db_sync --contract
>
> After the upgrade completes with no errors in the logs, I cannot create
> users. Other keystone commands work fine.
>
> Error message: “Field ‘domain_id’ doesn’t have a default value”
>
> Full trace:
>
> 2017-01-26 15:32:10.978 17 ERROR oslo_db.sqlalchemy.exc_filters
> [req-576e5683-6631-49b9-b532-65f06a23bbeb 4b3ff53812734e72b5aea42103349571
> 04fe68cf58724855a2ee67781a14b446 - default default] DBAPIError exception
> wrapped from (pymysql.err.InternalError) (1364, u"Field 'domain_id' doesn't
> have a default value") [SQL: u'INSERT INTO user (id, enabled, extra,
> default_project_id, created_at, last_active_at) VALUES (%(id)s, %(enabled)s,
> %(extra)s, %(default_project_id)s, %(created_at)s, %(last_active_at)s)']
> [parameters: {'last_active_at': None, 'extra': '{}', 'created_at':
> datetime.datetime(2017, 1, 26, 15, 32, 10, 974903), 'enabled': 1,
> 'default_project_id': None, 'id': '8989be945c954f14a1c9ebaf45988fad'}]
> Traceback (most recent call last):
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
>     context)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
>     cursor.execute(statement, parameters)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/cursors.py", line 167, in execute
>     result = self._query(query)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/cursors.py", line 323, in _query
>     conn.query(q)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 836, in query
>     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 1020, in _read_query_result
>     result.read()
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 1303, in read
>     first_packet = self.connection._read_packet()
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 982, in _read_packet
>     packet.check_error()
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 394, in check_error
>     err.raise_mysql_exception(self._data)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
>     _check_mysql_exception(errinfo)
>   File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/err.py", line 115, in _check_mysql_exception
>     raise InternalError(errno, errorvalue)
> InternalError: (1364, u"Field 'domain_id' doesn't have a default value")

Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-26 Thread Julien Danjou
On Thu, Jan 26 2017, Afek, Ifat (Nokia - IL) wrote:

> I’ll try to answer your question from a user perspective. 

Thanks for your explanation, it helped me a lot to understand how you
view things. :)

> Suppose a bridge has a bond of two physical ports, and Zabbix detects a signal
> loss in one of them. This failure has no immediate effect on the host,
> instances or applications, and will not be reflected anywhere in OpenStack.
>
> Vitrage will receive an alarm from Zabbix, identify the instances that will be
> affected if the entire bond fails, and create deduced alarms that they are at
> risk (if the other port fails they will become unreachable). Similarly, it 
> will
> create alarms on the relevant applications.

So when you say "create deduced alarms"… what does that mean? I understand
the deduction, but I am not sure what it "creates" – because then you
say:

> A user that checks Aodh will see that all alarms are in ‘ok’ state, which 
> might
> be misleading.

Which alarms? Could you be more precise? Where do these alarms come from?
Are they created by the users or by Vitrage automatically?
If it's the CPU usage of an instance, there's no reason for it to turn
red.

If I recall correctly what you explained to me a while back, there are
alarms created by Vitrage based on some rules, so I imagine these are
the ones you're talking about?

> The user might determine that everything is ok with the instances that
> Aodh is monitoring. If the user then checks Vitrage, he will see the
> deduced alarms and understand that the instances and the applications
> are at risk.

From what I understood the user can't really check Vitrage (IIRC it does
not really have a full API for users yet), right?

> Does it make sense that the user will check Aodh *and* Vitrage? A standard 
> user
> would like to see all of the alarms in one place, no matter which monitor was
> responsible for triggering them.

Yes: it does make sense for the user to check both because of the way
Aodh+Vitrage are architected right now. Does it make sense in terms of
user experience? I think we both agree that no it does not. Having a
central place of alerting would be awesome.

But does it make sense to force-feed Vitrage's alarms and data model into
Aodh? I am not sure right now. If I circle back again to UX, when a user
queries Aodh, he only sees alarms he created and manages. With
generic alarms, the way it's pushed right now, there's going to be a
bunch of generic things the user has barely any clue about and cannot
really act on – because they can't really do anything on Vitrage.

And even if Vitrage had an API to manipulate the rules and all (I can
easily imagine it's on the roadmap), that means the user would manipulate
deduction rules via the Vitrage API and then see things magically happen
in his Aodh account. I find that… weird. It sounds very prone to
failure and to Aodh and Vitrage getting out of sync.

Let's imagine another scenario/solution (which I am *not* advocating,
it's just an exercise for thought): Vitrage would store its alarms
(defined and created bases on its rules) in a database. It would then
offer an access to it to Aodh (e.g. via an HTTP API). Then Aodh could
query it.
For example, when a user asks Aodh to list the alarms, Aodh would
return the alarms that are stored in its own database (created by the
user) and would also query Vitrage to return the list of alarms created
by Vitrage rules (and their deduced state).

What's the point of such a design? Well it's less prone to
out-of-sync-ness and does not force any data model in Aodh that it has
no use for. It also solves the problem of "having a central listing of
alarms" for the user – the user does not have to be aware of Vitrage. Is
it a good technical design? Probably not. It seems weird to make Aodh a
bridge to Vitrage.
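
To make that thought experiment concrete, here is a toy sketch of the merged listing (all names are invented; neither Aodh nor Vitrage exposes such an interface today):

```python
# Sketch of the hypothetical "Aodh as a bridge" listing: merge alarms
# stored in Aodh's own database with alarms fetched live from Vitrage.
def list_all_alarms(local_alarms, fetch_vitrage_alarms):
    """local_alarms: user-created alarms from Aodh's own DB.
    fetch_vitrage_alarms: callable standing in for an HTTP query to Vitrage.
    """
    merged = [dict(alarm, source="aodh") for alarm in local_alarms]
    merged += [dict(alarm, source="vitrage") for alarm in fetch_vitrage_alarms()]
    return merged

alarms = list_all_alarms(
    [{"name": "cpu_high", "state": "ok"}],
    lambda: [{"name": "instance_at_risk", "state": "alarm"}],
)
print([a["source"] for a in alarms])  # → ['aodh', 'vitrage']
```

Note that even in this toy version the coupling is visible: the Vitrage side is queried live, so a Vitrage outage makes the user's alarm listing partial.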

And I think that's the whole thing I am not liking from the current
proposal and the one I just invented. The way Aodh and Vitrage are
bridged, the way Vitrage is built on top and outside of Aodh right now
feels wobbly to me.

So here's another question then: why couldn't there be a "zabbix" alarm
type in Aodh that could be created by a user (or another program) and
that would be triggered by Aodh when Zabbix does something?
That would be very much like the event alarm mechanism which
already exists. Maybe all that's missing is a
Zabbix-to-OpenStack-notification converter to have that feature?
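
Such a converter could indeed be small; a hedged sketch (the trigger fields and notification shape below are invented for illustration, not Zabbix's or Aodh's actual formats):

```python
def zabbix_trigger_to_notification(trigger):
    """Map a (hypothetical) Zabbix trigger payload onto an
    OpenStack-style notification that an event alarm could match on."""
    return {
        "event_type": "zabbix.trigger.%s" % trigger["status"].lower(),
        "payload": {
            "host": trigger["host"],
            "description": trigger["description"],
            "severity": trigger["severity"],
        },
    }

notification = zabbix_trigger_to_notification({
    "host": "compute-1",
    "description": "link down on bond0/eth1",
    "severity": "warning",
    "status": "PROBLEM",
})
print(notification["event_type"])  # → zabbix.trigger.problem
```

An event alarm could then be set up to fire on that `event_type`, which is exactly the existing mechanism mentioned above.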

I'll stop here for now to let you reply, or my mail is going to be way
too long lol.

> And a side note – you said that Aodh and Zabbix are exactly the same. I agree.
> You can implement in Aodh everything that is implemented in Zabbix. But why do
> that instead of just using the alarms that are already created by another
> monitor?

Oh no point, I was just making a point to be sure we were on the same
line in term of understanding, and it seems we are. :)

> Well… is this awesome enough? ;-)

Yes thanks, I think this is a good example that will help us 

[openstack-dev] [I18n] PTL Candidacy for Pike

2017-01-26 Thread Ian Y. Choi

Hello,

I am writing to announce my candidacy for I18n Pike PTL.

Looking back at my activities in the Ocata cycle,
my overall feeling is that it was really short.
In fact, I acknowledge that most of the action items I wrote in [1]
are still on-going or have not been completed yet.
Nevertheless, the following are some memorable activities which I have
been involved with and would like to continue during the upcoming Pike cycle:

* Well summarized action items from Ocata Design Summit [2]
* IRC meeting time change accordingly with two bi-weekly schedules
* Publicizing IRC meeting announcement and logs [3]
* On-going effort on Zanata upgrade with Xenial with infra team
* Clearer communication on translation imports and criteria [4, 5]
* Landing page updates for translated documents
* Co-participation in Pike PTG with Documentation team [6]

I am seeing many upstream translations,
and many I18n team members participating with great and kind help.
I really appreciate their contributions, and also the help from the
Infrastructure team, Documentation team, and Zanata development
team members in particular. I believe that I am able to continue my
effort to finish what I wrote in [1] and make I18n better.

Please support and encourage me again.


With many thanks,

/Ian

[1] 
https://git.openstack.org/cgit/openstack/election/plain/candidates/ocata/I18n/ianychoi.txt

[2] https://etherpad.openstack.org/p/barcelona-i18n-meetup
[3] https://wiki.openstack.org/wiki/I18N/MeetingLogs
[4] 
http://docs.openstack.org/developer/i18n/reviewing-translation-import.html

[5] http://docs.openstack.org/developer/i18n/infra.html
[6] https://www.openstack.org/ptg




Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Jay Pipes

On 01/26/2017 09:14 AM, Ed Leafe wrote:

On Jan 26, 2017, at 7:50 AM, Sylvain Bauza  wrote:


That's where I think we have another problem, which is bigger than the
corner case you mentioned above : when upgrading from Newton to Ocata,
we said that all Newton computes have to be upgraded to the latest point
release. Great. But we forgot to identify that it would also require
*modifying* their nova.conf so they would be able to call the placement API.

That looks to me more than just a rolling upgrade mechanism. In theory,
a rolling upgrade process accepts that N-1 versioned computes can talk
to N versioned other services. That doesn't imply a necessary
configuration change (except the upgrade_levels flag) on the computes to
achieve that, right?

http://docs.openstack.org/developer/nova/upgrade.html


Reading that page: "At this point, you must also ensure you update the 
configuration, to stop using any deprecated features or options, and perform any 
required work to transition to alternative features.”

So yes, "updating your configuration” is an expected action. I’m not sure why 
this is so alarming.


Me neither.

-jay



[openstack-dev] [keystone] Field 'domain_id' doesn't have a default value

2017-01-26 Thread Eduardo Gonzalez
Hi.
I’m testing upgrades from Newton to master branch using keystone’s
zero-downtime upgrade method:

keystone-manage db_sync --expand
keystone-manage db_sync --migrate
keystone-manage db_sync --contract

After the upgrade completes with no errors in the logs, I cannot create
users. Other keystone commands work fine.

Error message: “Field ‘domain_id’ doesn’t have a default value”

Full trace:

2017-01-26 15:32:10.978 17 ERROR oslo_db.sqlalchemy.exc_filters
[req-576e5683-6631-49b9-b532-65f06a23bbeb
4b3ff53812734e72b5aea42103349571 04fe68cf58724855a2ee67781a14b446 -
default default] DBAPIError exception wrapped from
(pymysql.err.InternalError) (1364, u"Field 'domain_id' doesn't have a
default value") [SQL: u'INSERT INTO user (id, enabled, extra,
default_project_id, created_at, last_active_at) VALUES (%(id)s,
%(enabled)s, %(extra)s, %(default_project_id)s, %(created_at)s,
%(last_active_at)s)'] [parameters: {'last_active_at': None, 'extra':
'{}', 'created_at': datetime.datetime(2017, 1, 26, 15, 32, 10,
974903), 'enabled': 1, 'default_project_id': None, 'id':
'8989be945c954f14a1c9ebaf45988fad'}]
Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/cursors.py", line 167, in execute
    result = self._query(query)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/cursors.py", line 323, in _query
    conn.query(q)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 836, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 1020, in _read_query_result
    result.read()
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 1303, in read
    first_packet = self.connection._read_packet()
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 982, in _read_packet
    packet.check_error()
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/connections.py", line 394, in check_error
    err.raise_mysql_exception(self._data)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
    _check_mysql_exception(errinfo)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/pymysql/err.py", line 115, in _check_mysql_exception
    raise InternalError(errno, errorvalue)
InternalError: (1364, u"Field 'domain_id' doesn't have a default value")

Database migration logs:

2017-01-26 15:27:11.425 18 INFO migrate.versioning.api [-] 4 -> 5...
2017-01-26 15:27:11.540 18 INFO migrate.versioning.api [-] done
2017-01-26 15:27:11.540 18 INFO migrate.versioning.api [-] 5 -> 6...
2017-01-26 15:27:11.643 18 INFO migrate.versioning.api [-] done
2017-01-26 15:27:11.645 18 INFO migrate.versioning.api [-] 6 -> 7...
2017-01-26 15:27:11.758 18 INFO migrate.versioning.api [-] done
2017-01-26 15:27:11.759 18 INFO migrate.versioning.api [-] 7 -> 8...
2017-01-26 15:27:11.872 18 INFO migrate.versioning.api [-] done
2017-01-26 15:27:11.879 18 INFO migrate.versioning.api [-] 8 -> 9...
2017-01-26 15:27:12.082 18 INFO 

Re: [openstack-dev] [tripleo] [tripleo-quickstart] pending reviews for composable upgrade for Ocata

2017-01-26 Thread John Trowbridge


On 01/26/2017 10:00 AM, Emilien Macchi wrote:
> On Thu, Jan 26, 2017 at 9:51 AM, John Trowbridge  wrote:
>>
>>
>> On 01/26/2017 04:03 AM, mathieu bultel wrote:
>>> Hi,
>>>
>>> I'm sending this email to the list to request reviews of the
>>> composable upgrade work I have been doing in TripleO Quickstart. They have
>>> been pending for a while (since Dec 4 for one of these two reviews), and I
>>> have addressed all the comments promptly, rebased, and so on [1].
>>> These reviews are required, and very important, for 3 reasons:
>>> 1/ They address the following BP: [2]
>>> 2/ They would give the other Squads and DFGs a tool to start playing
>>> with composable upgrades in order to support their own components.
>>> 3/ They will be a first shot at the TripleO-CI / TripleO-Quickstart
>>> transition for supporting the tripleo-ci upgrade jobs that we
>>> implemented a few weeks ago.
>>>
>>> I updated the documentation (README) regarding the upgrade workflow, and the
>>> commit message explains the deployment workflow. I know it's not easy to
>>> review this stuff, and the tripleo-quickstart cores probably don't see
>>> much importance in this subject. I don't think I can do much more now to
>>> make the review easier for the cores.
>>>
>>> This was one of my concerns about adding all the very specific extras
>>> roles (upgrade / baremetal / scale) in one common repo (losing flexibility
>>> and responsiveness), but it's more than that...
>>>
>>> I'm planning to write a "How To" for helping to other DFGs/Squads to
>>> work on upgrade, but since this work is still under review, I'm stuck.
>>>
>>> Thanks.
>>>
>>> [1]
>>> tripleo-quickstart repo:
>>> https://review.openstack.org/#/c/410831/
>>> tripleo-quickstart-extras repo:
>>> https://review.openstack.org/#/c/416480/
>>>
>>> [2]
>>>
>>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-composable-upgrade-job
>>>
>>
>> We discussed this a bit this morning on #tripleo, and the consensus
>> there was that we should be focusing upgrade CI efforts for the end of
>> Ocata cycle on the existing tripleo-ci multinode upgrades job. This is
>> due to priority constraints on both sides.
>>
>> On the quickstart side, we really need to focus on having good
>> replacements for all of the basic jobs which are solid (OVB, multinode,
>> scenarios), so we can switch over in early Pike.
>>
>> On the upgrades side, we really need to focus on having coverage for as
>> many services to upgrade as possible.
> 
> I'm currently working on this front, by implementing the
> scenarioXXX-upgrade jobs (with multinode, but not oooq yet):
> https://review.openstack.org/#/c/425727/
> 
> Any feedback on the review is welcome, I hope it's aligned with our plans.
> 

I think this approach is great and should make transitioning it to run
via quickstart simple once we have the scenario jobs in quickstart.

>> As such, I think we should use the existing job for upgrades, and port
>> it to quickstart after we have switched over the basic jobs in early Pike.
>>
>> One note about making it easier to get patches reviewed. As a group, I
>> think we have been reviewing quickstart/extras patches at a very good
>> pace. However, adding a very large feature with no CI covering it, makes
>> me personally totally uninterested to review. Not only does it require
>> me to follow some manual instructions just to see it actually works, but
>> there is nothing preventing it from being completely broken within days
>> of merging the feature.
>>
>> Another thing we should probably document for Tripleo CI somewhere is
>> that we should be trying to create multinode based CI for anything that
>> does not require nova/ironic interactions. Upgrades are in this category.
>>
> 
> 
> 



Re: [openstack-dev] [aodh][vitrage] Aodh generic alarms

2017-01-26 Thread Afek, Ifat (Nokia - IL)
On 25/01/2017, 17:12, "Julien Danjou"  wrote:

> On Wed, Jan 25 2017, Afek, Ifat (Nokia - IL) wrote:
>  
> To circle back to the original point, the main question that I asked and
> started this thread is: why, why Aodh should store Vitrage alarms? What
> are the advantages, for both Aodh and Vitrage?
> 
> So far the only answer I read is "well we though Aodh would be a central
> storage place for alarm". So far it seems it has more drawbacks than
> benefits: worst performances for Vitrage, confusion for users and more
> complexity in Aodh.
> 
> As I already said, I'm trying to be really objective on this. I just
> really want someone to explain to me how awesome this will be and why we
> should totally go toward this direction. :-)

I’ll try to answer your question from a user perspective. 

Suppose a bridge has a bond of two physical ports, and Zabbix detects a signal 
loss in one of them. This failure has no immediate effect on the host, 
instances or applications, and will not be reflected anywhere in OpenStack. 

Vitrage will receive an alarm from Zabbix, identify the instances that will be 
affected if the entire bond fails, and create deduced alarms that they are at 
risk (if the other port fails they will become unreachable). Similarly, it will 
create alarms on the relevant applications.
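
The deduction in this scenario can be pictured as a small walk over the entity graph (purely illustrative; Vitrage's real templates and entity graph are far richer than this toy model):

```python
# Toy entity graph: each entity lists what it depends on.
DEPENDS_ON = {
    "instance-1": ["host-1"],
    "instance-2": ["host-1"],
    "app-1": ["instance-1"],
}

def deduce_at_risk(failed_entity, depends_on):
    """Return every entity that transitively depends on failed_entity."""
    at_risk = set()
    frontier = [failed_entity]
    while frontier:
        current = frontier.pop()
        for entity, deps in depends_on.items():
            if current in deps and entity not in at_risk:
                at_risk.add(entity)
                frontier.append(entity)
    return at_risk

print(sorted(deduce_at_risk("host-1", DEPENDS_ON)))
# → ['app-1', 'instance-1', 'instance-2']
```

Each entity in the returned set would get a deduced "at risk" alarm, even though no OpenStack-visible failure has happened yet.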

A user that checks Aodh will see that all alarms are in ‘ok’ state, which might 
be misleading. The user might determine that everything is ok with the 
instances that Aodh is monitoring. If the user then checks Vitrage, he will see 
the deduced alarms and understand that the instances and the applications are 
at risk. 

Does it make sense that the user will check Aodh *and* Vitrage? A standard user 
would like to see all of the alarms in one place, no matter which monitor was 
responsible for triggering them.

And a side note – you said that Aodh and Zabbix are exactly the same. I agree. 
You can implement in Aodh everything that is implemented in Zabbix. But why do 
that instead of just using the alarms that are already created by another 
monitor?

Well… is this awesome enough? ;-)
Ifat.





Re: [openstack-dev] [Requirements] Freeze

2017-01-26 Thread Matthew Thode
On 01/24/2017 02:22 PM, Matthew Thode wrote:
> We are going to be freezing Thursday at ~20:00 UTC.
> 
> So if you need any changes, we'll be needing them soon, with
> reasoning.  Thanks.

This is just about 4 hours away now, so second and last reminder.

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Matt Riedemann

On 1/26/2017 7:41 AM, Sylvain Bauza wrote:


Circling back to the problem as time flies. As the patch Matt proposed
for option #4 is not fully working yet, I'm implementing option #3 by
making the HostManager.get_filtered_hosts() method resilient to
receiving no hosts from the placement API, if and only if the user
asked for forced destinations.

-Sylvain


And circling back on *that*, we've agreed to introduce a new service 
version for the compute to indicate it's Ocata or not. Then we'll:


* check in the scheduler if the minimum compute service version is ocata,
* if minimum is ocata, then use placement, else fallback to the old 
resource tracker data in the compute_nodes table - then we remove that 
fallback in Pike.


We'll also have a check for the placement config during init_host on the 
ocata compute such that if you are upgrading to ocata code for the 
compute but don't have placement configured, it's a hard fail and the 
nova-compute service is going to die.
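
The gating described above can be sketched roughly as follows (the version number and config key are invented for illustration; the real checks live in the nova scheduler and compute service code):

```python
OCATA_SERVICE_VERSION = 16  # placeholder, not the real service version number

def should_use_placement(min_compute_service_version):
    """Scheduler side: only use placement once every compute is Ocata;
    otherwise fall back to the compute_nodes table data."""
    return min_compute_service_version >= OCATA_SERVICE_VERSION

def check_placement_configured(placement_conf):
    """Compute init_host side: hard-fail when placement is not configured."""
    if not placement_conf.get("auth_url"):  # invented key for the sketch
        raise SystemExit("placement is not configured; refusing to start")

print(should_use_placement(15), should_use_placement(16))  # → False True
```

The point of the hard fail at init_host is that an Ocata compute without placement configured would silently stop being schedulable once the fallback is removed in Pike, so dying early is the safer behaviour.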


I'm pretty sure we've come full circle on this now.

--

Thanks,

Matt Riedemann



Re: [openstack-dev] [kolla] Docs status?

2017-01-26 Thread Paul Bourke

All,

After some further discussion in the last meeting I have submitted a 
patch here: https://review.openstack.org/#/c/425749/


If you look at the number of conflicting patches, I think it highlights 
the problem I'm trying to solve here: ansible-related patches are 
still being submitted against the kolla repository.


Please have a look if you have time.

Thanks,
-Paul

On 24/01/17 12:06, Paul Bourke wrote:

Hi Kolla,

Does anyone know the current status of the docs refactor? I believe there
was someone looking into it (apologies, I can't remember their name).

If not, I'd like to propose a vote for some immediate changes that can
improve things.

Regards,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-26 Thread Dougal Matthews
On 24 January 2017 at 16:16, Mikhail Fedosin  wrote:

> Hey, Flavio :) Thanks for your questions!
>
> As you said, currently only Nokia is adopting Glare for its own platform,
> but within OpenStack I believe Mistral will start to use it
> soon.
>

Has there been some discussion surrounding Mistral and Glare? I'd be
interested in reading more about those plans and ideas.



> In my opinion Glare's adoption is low due to the fact that the project is
> not included under Big Tent. I think it will be changed soon, because now
> I'm finishing Glare v1 API proposal, and when it's done we will apply under
> BT.
>
> About Glance v2 API - as I said they are +- the same with several cosmetic
> differences (in Glance status is called 'queued', in Glare we renamed it to
> 'drafted', and so on). For this reason I'm going to implement a middleware,
> that will provide a full Image API v2 for Glare (even with unnecessary
> '/v2' prefix) and glance clients will be able to communicate with it as
> with Glance. It's definitely doable and we can discuss it more detailed
> during the PTG.
>
> Best,
> Mike
>
> On Mon, Jan 23, 2017 at 11:51 AM, Flavio Percoco 
> wrote:
>
>> On 19/01/17 12:48 +0300, Mikhail Fedosin wrote:
>>
>>> Hi Matt!
>>>
>>> This should be discussed, for sure, but there is a lot of potential. In
>>> general, it depends on how far we are willing to go. In the minimum
>>> approximation we can seamlessly replace Glance with Glare and operators
>>> simply get additional features for versioning, validation (and
>>> conversion,
>>> if necessary) of their uploaded images on the fly, as well as support for
>>> storing files in different stores.
>>>
>>> If we dig a little deeper, then Glare allows you to store multiple files
>>> in
>>> a single artifact, so we can create a new type (ec2_image) and define
>>> three
>>> blobs inside: ami, ari, aki, and upload all three as a single object.
>>> This
>>> will get rid of a large amount of legacy code and simplify the
>>> architecture
>>> of Nova. Plus Glare will control the integrity of such artifact.
>>>
>>
>> Hey Mike,
>>
>> Thanks for bringing this up. While I think there's potential in Glare
>> given it's
>> being created during a more mature age of OpenStack and based on matured
>> principles and standards, I believe you may be promoting Glare using the
>> wrong
>> arguments.
>>
>> As you mentioned in your first email on this thread, Glare has a set of
>> functionalities that are already useful to a set of existing projects.
>> This is
>> great and I'd probably start from there.
>>
>> * How much have these projects adopted Glare?
>> * Is Glare being deployed already?
>> * What projects are the main consumers of Glare?
>>
>> Unfortunately, replacing Glance is not as simple as dropping Glare in
>> because
>> it's not only being used by Nova. Glance is also a public API (at least
>> v2) and
>> there are integrations that have been built by either cloud providers or
>> cloud
>> consumers on top of the existing Glance API.
>>
>> If Glare ships a Glance compatible API (as a way to make a drop-in
>> replacement
>> possible), it'll have to support it and live with it for a long time. In
>> addition to this, Glare will have to keep up with the changes that may
>> happen in
>> Glance's API during that time.
>>
>> The next step could be full support for OVF and other formats that require
>>> a large number of files. Here we can use artifact folders and put all the
>>> files there.
>>> "OpenStack Compute does not currently have support for OVF packages, so
>>> you
>>> will need to extract the image file(s) from an OVF package if you wish to
>>> use it with OpenStack."
>>> http://docs.openstack.org/image-guide/introduction.html
>>>
>>> Finally, I notice that there are a few nasty bugs in Glance (you know
>>> what
>>> I mean), which make it extremely inconvenient for a number of
>>> deployments.
>>>
>>
>> Not everyone is familiar with the issues of Glance's API. I believe I
>> know what
>> you're referring to but I'd recommend to expand here for the sake of
>> discussion.
>>
>> That being said, I'd also like to point out that the flaws of Glance's
>> API could
>> be fixed so I'd rather focus the discussion on the benefits Glare would
>> bring
>> rather than how Glance's API may or may not be terrible. That's the kind
>> of
>> competition I'd prefer to see in this discussion.
>>
>> Cheers,
>> Flavio
>>
>>
>> On Wed, Jan 18, 2017 at 8:26 PM, Matt Riedemann <
>>> mrie...@linux.vnet.ibm.com>
>>> wrote:
>>>
>>> On 1/18/2017 10:54 AM, Mikhail Fedosin wrote:

 Hello!
>
> In this letter I want to tell you the current status of Glare project
> and discuss its future development within the entire OpenStack
> community.
>
> In the beginning I have to say a few words about myself - my name is
> Mike and I am the PTL of Glare. Currently I work as a consultant at
> Nokia, where we're developing the 

Re: [openstack-dev] [glance] FFE Request

2017-01-26 Thread Brian Rosmaita
Update: The FFE for the Rolling Upgrades work was discussed at today's
Glance meeting, and an FFE was granted.

On 1/26/17 8:58 AM, Brian Rosmaita wrote:
> On 1/26/17 8:52 AM, Steve Lewis wrote:
>> I'm requesting a FFE to enable us to complete the work described as the
>> "Rolling Upgrades" priority [0].
>>
> Thanks, Steve.  FFE requests are on the agenda for today's
> Glance meeting at 14:00 UTC.
> 
> cheers,
> brian
> 
> 
>>
>> [0]
>> http://specs.openstack.org/openstack/glance-specs/priorities/ocata-priorities.html
>>




Re: [openstack-dev] [glance] Propose Dharini Chandrasekar for Glance core

2017-01-26 Thread Brian Rosmaita
Having heard only affirmative responses, I've added Dharini Chandrasekar
to the Glance core group, with all the rights and privileges pertaining
thereto.

Welcome to the Glance core team, Dharini!

On 1/24/17 8:36 AM, Brian Rosmaita wrote:
> I'd like to propose Dharini Chandrasekar (dharinic on IRC) for Glance
> core.  She has been an active reviewer and contributor to the Glance
> project during the Newton and Ocata cycles, has contributed to other
> OpenStack projects, and has represented Glance in some interactions with
> other project teams.  Additionally, she recently jumped in and saw
> through to completion a high priority feature for Newton when the
> original developer was unable to continue working on it.  Plus, she's
> willing to argue with me (and the other cores) about points of software
> engineering.  She will be a great addition to the Glance core reviewers
> team.
> 
> If you have any concerns, please let me know.  I plan to add Dharini to
> the core list after this week's Glance meeting.
> 
> thanks,
> brian
> 




Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-26 Thread Davanum Srinivas
Clint,

Pike may be too soon :) as we need to make sure what we have in
oslo.rootwrap/oslo.privsep works properly in py35. I saw some issues I
am still chasing.

So the one after next will have my vote.

-- Dims

On Thu, Jan 26, 2017 at 9:55 AM, Clint Byrum  wrote:
> Excerpts from Thierry Carrez's message of 2017-01-26 10:08:52 +0100:
>> Michael Still wrote:
>> > I think #3 is the right call for now. The person we had working on
>> > privsep has left the company, and I don't have anyone I could get to
>> > work on this right now. Oh, and we're out of time.
>>
>> Yes, as much as I'm an advocate of privsep adoption, I don't think the
>> last minutes before feature freeze are the best moment to introduce a
>> single isolated privsep-backed command in Nova. So I'd recommend #3.
>>
>> In an ideal world, Nova would start migrating existing commands early in
>> Pike so that in the near future, adding new privsep-backed commands
>> doesn't feel so alien.
>>
>
> Would it be too radical to propose the full migration of everything to
> privsep as a Pike community goal?
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [relmgt] PTL candidacy for Pike

2017-01-26 Thread Emilien Macchi
On Thu, Jan 26, 2017 at 7:24 AM, Thierry Carrez  wrote:
> Hi!
>
> I would like to submit my candidacy to return as PTL of the Release
> Management team for the Pike cycle.
>
> You may remember me as the release manager from Bexar to Grizzly, and
> PTL of the Release Management team from Havana to Liberty. I'd like to
> thank Doug Hellmann for his service as PTL from Mitaka to Ocata. Under
> his leadership, the Release Management team transformed from an artisan
> shop into a highly efficient and scalable factory. His focus on writing
> down everything and introducing automation everywhere will make the work
> of succeeding him easier than ever. But we always wanted to introduce
> regular PTL rotation in the team, so Doug won't run again, and for Pike
> I volunteer to take back that baton.
>
> I would personally prefer to let someone else take it (and would love to
> see other election candidates for this role !), but during this cycle we
> probably failed to grow new members who would want to take over team
> leadership. People with interest in cross-project functions like Release
> Management and time to dedicate to it are a rare resource those days.

Thanks for volunteering, Thierry! (and Doug for your outstanding work
in the last cycles).

Could you (and Doug) maybe document (if not done already, sorry if
that's the case) your Release Management tasks, and the "things to
know" about this topic, on https://releases.openstack.org?
Also, we could organize a training session during a PTG (or
virtually?) to attract folks volunteering to help, who could
eventually become confident enough to run for PTL one day.
Just some ideas here, feel free to comment.

> If elected, my plan is to:
>
> - continue in the direction set by Doug toward more self-service
> automation around Release Management, and focus the team on providing a
> framework, advice and last-minute sanity checks before tagging releases.
>
> - anticipate changes that we may need to do to accommodate the
> introduction of new programming languages.
>
> - add a few new members in the team, to grow the set of people able to
> participate in a PTL rotation scheme in the future.
>
> Thanks for reading until the last line!
>
> --
> Thierry Carrez (ttx)
>



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] [tripleo-quickstart] pending reviews for composable upgrade for Ocata

2017-01-26 Thread Emilien Macchi
On Thu, Jan 26, 2017 at 9:51 AM, John Trowbridge  wrote:
>
>
> On 01/26/2017 04:03 AM, mathieu bultel wrote:
>> Hi,
>>
>> I'm sending this email to the list to request reviews about the
>> composable upgrade work I have been doing in TripleO quickstart. It's
>> pending for a while (Dec 4 for one of those 2 reviews), and I have
>> addressed all the comments on time, rebased & so on [1].
>> Those reviews are required, and very important for 3 reasons:
>> 1/ It addresses the following BP: [2]
>> 2/ It would give a tool for the other Squad and DFGs to start to play
>> with composable upgrade in order to support their own components.
>> 3/ It will be a first shot for the Tripleo-CI / Tripleo-Quickstart
>> transition for supporting the tripleo-ci upgrade jobs that we have
>> implemented few weeks ago now.
>>
>> I updated the documentation (README) regarding the upgrade workflow, and the
>> commit message explains the deployment workflow. I know it's not easy to
>> review this stuff, and probably tripleo-quickstart cores don't give much
>> importance to this subject. I think I can't do much more now to
>> make the review easier for the cores.
>>
>> It was one of my concerns about adding all the very specific extras
>> roles (upgrade / baremetal / scale) in one common repo, losing flexibility
>> and reactivity, but it's more than that...
>>
>> I'm planning to write a "How To" for helping to other DFGs/Squads to
>> work on upgrade, but since this work is still under review, I'm stuck.
>>
>> Thanks.
>>
>> [1]
>> tripleo-quickstart repo:
>> https://review.openstack.org/#/c/410831/
>> tripleo-quickstart-extras repo:
>> https://review.openstack.org/#/c/416480/
>>
>> [2]
>>
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-composable-upgrade-job
>>
>
> We discussed this a bit this morning on #tripleo, and the consensus
> there was that we should be focusing upgrade CI efforts for the end of
> Ocata cycle on the existing tripleo-ci multinode upgrades job. This is
> due to priority constraints on both sides.
>
> On the quickstart side, we really need to focus on having good
> replacements for all of the basic jobs which are solid (OVB, multinode,
> scenarios), so we can switch over in early Pike.
>
> On the upgrades side, we really need to focus on having coverage for as
> many services to upgrade as possible.

I'm currently working on this front, by implementing the
scenarioXXX-upgrade jobs (with multinode, but not oooq yet):
https://review.openstack.org/#/c/425727/

Any feedback on the review is welcome, I hope it's aligned with our plans.

> As such, I think we should use the existing job for upgrades, and port
> it to quickstart after we have switched over the basic jobs in early Pike.
>
> One note about making it easier to get patches reviewed. As a group, I
> think we have been reviewing quickstart/extras patches at a very good
> pace. However, adding a very large feature with no CI covering it makes
> me personally uninterested in reviewing it. Not only does it require
> me to follow some manual instructions just to see that it actually works, but
> there is nothing preventing it from being completely broken within days
> of merging the feature.
>
> Another thing we should probably document for Tripleo CI somewhere is
> that we should be trying to create multinode based CI for anything that
> does not require nova/ironic interactions. Upgrades are in this category.
>



-- 
Emilien Macchi



Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-26 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2017-01-26 10:08:52 +0100:
> Michael Still wrote:
> > I think #3 is the right call for now. The person we had working on
> > privsep has left the company, and I don't have anyone I could get to
> > work on this right now. Oh, and we're out of time.
> 
> Yes, as much as I'm an advocate of privsep adoption, I don't think the
> last minutes before feature freeze are the best moment to introduce a
> single isolated privsep-backed command in Nova. So I'd recommend #3.
> 
> In an ideal world, Nova would start migrating existing commands early in
> Pike so that in the near future, adding new privsep-backed commands
> doesn't feel so alien.
> 

Would it be too radical to propose the full migration of everything to
privsep as a Pike community goal?
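
To make the contrast concrete, here is a toy sketch of the two models; it mimics privsep's decorated-entrypoint pattern without importing oslo.privsep itself, so everything below is illustrative rather than the real API:

```python
# Toy illustration of the two models being debated: rootwrap shells out
# to a whitelisted binary via a setuid wrapper, while privsep calls a
# decorated Python function running in a privileged helper process.
def entrypoint(func):
    """Stand-in for a privsep context's entrypoint decorator."""
    func.privileged = True
    return func


@entrypoint
def set_interface_up(ifname):
    # Real code would use pyroute2/ioctls here with elevated capabilities;
    # returning the equivalent command string is just for illustration.
    return 'ip link set %s up' % ifname


def rootwrap_cmd(ifname):
    """Rootwrap-style equivalent: build an argv for the wrapper binary."""
    return ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf',
            'ip', 'link', 'set', ifname, 'up']
```

The migration being discussed is essentially moving helpers from the second shape to the first, one call site at a time.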



Re: [openstack-dev] [tripleo] [tripleo-quickstart] pending reviews for composable upgrade for Ocata

2017-01-26 Thread John Trowbridge


On 01/26/2017 04:03 AM, mathieu bultel wrote:
> Hi,
> 
> I'm sending this email to the list to request reviews about the
> composable upgrade work I have been doing in TripleO quickstart. It's
> pending for a while (Dec 4 for one of those 2 reviews), and I have
> addressed all the comments on time, rebased & so on [1].
> Those reviews are required, and very important for 3 reasons:
> 1/ It addresses the following BP: [2]
> 2/ It would give a tool for the other Squad and DFGs to start to play
> with composable upgrade in order to support their own components.
> 3/ It will be a first shot for the Tripleo-CI / Tripleo-Quickstart
> transition for supporting the tripleo-ci upgrade jobs that we have
> implemented few weeks ago now.
> 
> I updated the documentation (README) regarding the upgrade workflow, and the
> commit message explains the deployment workflow. I know it's not easy to
> review this stuff, and probably tripleo-quickstart cores don't give much
> importance to this subject. I think I can't do much more now to
> make the review easier for the cores.
> 
> It was one of my concerns about adding all the very specific extras
> roles (upgrade / baremetal / scale) in one common repo, losing flexibility
> and reactivity, but it's more than that...
> 
> I'm planning to write a "How To" for helping to other DFGs/Squads to
> work on upgrade, but since this work is still under review, I'm stuck.
> 
> Thanks.
> 
> [1]
> tripleo-quickstart repo:
> https://review.openstack.org/#/c/410831/
> tripleo-quickstart-extras repo:
> https://review.openstack.org/#/c/416480/
> 
> [2]
> 
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-composable-upgrade-job
> 

We discussed this a bit this morning on #tripleo, and the consensus
there was that we should be focusing upgrade CI efforts for the end of
Ocata cycle on the existing tripleo-ci multinode upgrades job. This is
due to priority constraints on both sides.

On the quickstart side, we really need to focus on having good
replacements for all of the basic jobs which are solid (OVB, multinode,
scenarios), so we can switch over in early Pike.

On the upgrades side, we really need to focus on having coverage for as
many services to upgrade as possible.

As such, I think we should use the existing job for upgrades, and port
it to quickstart after we have switched over the basic jobs in early Pike.

One note about making it easier to get patches reviewed. As a group, I
think we have been reviewing quickstart/extras patches at a very good
pace. However, adding a very large feature with no CI covering it makes
me personally uninterested in reviewing it. Not only does it require
me to follow some manual instructions just to see that it actually works, but
there is nothing preventing it from being completely broken within days
of merging the feature.

Another thing we should probably document for Tripleo CI somewhere is
that we should be trying to create multinode based CI for anything that
does not require nova/ironic interactions. Upgrades are in this category.



[openstack-dev] [freezer] Core team updates

2017-01-26 Thread Mathieu, Pierre-Arthur
Hello,

I would like to propose some modifications regarding the Freezer core team.

First, the removal of two inactive members:
  - Fabrizio Vanni
  - Jonas Pfannschmidt
Thank you very much for your contributions; you are welcome back in the core 
team if you start contributing again.


Secondly, I would like to propose that we promote Ruslan Aliev (raliev) to core:
He has been a highly valuable developer for the past few months, and recently 
released a big feature: the Rsync engine.
His work can be found here: [1]
And his stackalytics profile here: [2]


If you agree with all these changes, please approve with a +1 answer; otherwise 
explain your opinion.
If there are no objections, I plan on applying them tomorrow evening.

Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22yapeng+Yang%22
[2] http://stackalytics.com/?release=all=loc_id=raliev

 


Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Sylvain Bauza


Le 26/01/2017 05:42, Matt Riedemann a écrit :
> This is my public hand off to Sylvain for the work done tonight.
> 
> Starting with the multinode grenade failure in the nova patch to
> integrate placement with the filter scheduler:
> 
> https://review.openstack.org/#/c/417961/
> 
> The test_schedule_to_all_nodes tempest test was failing in there because
> that test explicitly forces hosts using AZs to build two instances.
> Because we didn't have nova.conf on the Newton subnode in the multinode
> grenade job configured to talk to placement, there was no resource
> provider for that Newton subnode when we started running smoke tests
> after the upgrade to Ocata, so that test failed since the request to the
> subnode had a NoValidHost (because no resource provider was checking in
> from the Newton node).
> 
> Grenade is not topology aware so it doesn't know anything about the
> subnode. When the subnode is stacked, it does so via a post-stack hook
> script that devstack-gate writes into the grenade run, so after stacking
> the primary Newton node, it then uses Ansible to ssh into the subnode
> and stack Newton there too:
> 
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629
> 
> 
> logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296
> 
> 
> And placement was optional in Newton so, you know, problems.
> 
> Some options came to mind:
> 
> 1. Change the test to not be a smoke test which would exclude it from
> running during grenade. QA would barf on this.
> 
> 2. Hack some kind of pre-upgrade callback from d-g into grenade just for
> configuring placement on the compute subnode. This would probably
> require adding a script to devstack just so d-g has something to call so
> we could keep branch logic out of d-g, like what we did for the
> discover_hosts stuff for cells v2. This is more complicated than what I
> wanted to deal with tonight with limited time on my hands.
> 
> 3. Change the nova filter scheduler patch to fallback to get all compute
> nodes if there are no resource providers. We've already talked about
> this a few times already in other threads and I consider it a safety net
> we'd like to avoid if all else fails. If we did this, we could
> potentially restrict it to just the forced-host case...
> 
> 4. Setup the Newton subnode in the grenade run to configure placement,
> which I think we can do from d-g using the features yaml file. That's
> what I opted to go with and the patch is here:
> 
> https://review.openstack.org/#/c/425524/
> 
> I've made the nova patch dependent on that *and* the other grenade patch
> to install and configure placement on the primary node when upgrading
> from Newton to Ocata.
> 
> -- 
> 
> That's where we're at right now. If #4 fails, I think we are stuck with
> adding a workaround for #3 into Ocata and then remove that in Pike when
> we know/expect computes to be running placement (they would be in our
> grenade runs from ocata->pike at least).
> 

Circling back to the problem as time flies. As the patch Matt proposed
for option #4 is not fully working yet, I'm implementing option #3 by
making the HostManager.get_filtered_hosts() method resilient to the case
where the placement API returns no hosts, if and only if the user asked
for forced destinations.
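
In rough terms, the fallback amounts to something like this (names are illustrative only, not the actual HostManager code):

```python
# Sketch of the option #3 fallback: only when the user forced a
# destination and placement returned no providers do we fall back to all
# known hosts. Illustrative only, not nova's get_filtered_hosts().
def get_filtered_hosts(all_hosts, provider_uuids, forced_destinations):
    if not provider_uuids and forced_destinations:
        # Newton computes never registered with placement, so an empty
        # provider list must not kill a forced-host request.
        candidates = all_hosts
    else:
        candidates = [h for h in all_hosts if h['uuid'] in provider_uuids]
    if forced_destinations:
        candidates = [h for h in candidates
                      if h['name'] in forced_destinations]
    return candidates
```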

-Sylvain



Re: [openstack-dev] [freezer] Core team updates

2017-01-26 Thread Mathieu, Pierre-Arthur
Update: The review.openstack.org link is the following:

- https://review.openstack.org/#/q/owner:%22Ruslan+Aliev%22


Pierre

From: Mathieu, Pierre-Arthur
Sent: Thursday, January 26, 2017 2:25:38 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [freezer] Core team updates

Hello,

I would like to propose some modifications regarding the Freezer core team.

First, the removal of two inactive members:
  - Fabrizio Vanni
  - Jonas Pfannschmidt
Thank you very much for your contributions; you are welcome back in the core 
team if you start contributing again.


Secondly, I would like to propose that we promote Ruslan Aliev (raliev) to core:
He has been a highly valuable developer for the past few months, and recently 
released a big feature: the Rsync engine.
His work can be found here: [1]
And his stackalytics profile here: [2]


If you agree with all these changes, please approve with a +1 answer; otherwise 
explain your opinion.
If there are no objections, I plan on applying them tomorrow evening.

Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22yapeng+Yang%22
[2] http://stackalytics.com/?release=all=loc_id=raliev




Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Sylvain Bauza


Le 26/01/2017 15:14, Ed Leafe a écrit :
> On Jan 26, 2017, at 7:50 AM, Sylvain Bauza  wrote:
>>
>> That's where I think we have another problem, which is bigger than the
>> corner case you mentioned above : when upgrading from Newton to Ocata,
>> we said that all Newton computes have to be upgraded to the latest point
>> release. Great. But we forgot to identify that it would also require to
>> *modify* their nova.conf so they would be able to call the placement API.
>>
>> That looks to me more than just a rolling upgrade mechanism. In theory,
>> a rolling upgrade process accepts that N-1 versioned computes can talk
>> to N versioned other services. That doesn't imply a necessary
>> configuration change (except the upgrade_levels flag) on the computes to
>> achieve that, right?
>>
>> http://docs.openstack.org/developer/nova/upgrade.html
> 
> Reading that page: "At this point, you must also ensure you update the 
> configuration, to stop using any deprecated features or options, and perform 
> any required work to transition to alternative features.”
> 
> So yes, "updating your configuration” is an expected action. I’m not sure why 
> this is so alarming.
> 

You give that phrase out of context. To give more details, that specific
sentence is related to what you should do *after* having your
maintenance window (ie. upgrading your controller while your API is
down) and the introduction paragraph mentions that all the bullet items
relate to all the nova services but the hypervisors.

And I'm not alarmed. I'm just trying to identify the correct upgrade
path that we should ask our operators to do. If that means adding an
extra step beyond the regular upgrade process, then I think everyone
should be aware of it.
Take myself, I'm probably exhausted and very narrow-eyed so I missed
that implication. I apologize for it and I want to clarify that.
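
For concreteness, the extra configuration step amounts to something like this on each compute node (a hypothetical nova.conf fragment; every value below is a placeholder, not a recommendation):

```ini
# Hypothetical [placement] section for a compute's nova.conf; all values
# are placeholders for illustration.
[placement]
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
username = placement
password = PLACEMENT_PASS
os_region_name = RegionOne
```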

-Sylvain

> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> 
> 



Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread John Garbutt
On 26 January 2017 at 14:14, Ed Leafe  wrote:
> On Jan 26, 2017, at 7:50 AM, Sylvain Bauza  wrote:
>>
>> That's where I think we have another problem, which is bigger than the
>> corner case you mentioned above : when upgrading from Newton to Ocata,
>> we said that all Newton computes have to be upgraded to the latest point
>> release. Great. But we forgot to identify that it would also require to
>> *modify* their nova.conf so they would be able to call the placement API.
>>
>> That looks to me more than just a rolling upgrade mechanism. In theory,
>> a rolling upgrade process accepts that N-1 versioned computes can talk
>> to N versioned other services. That doesn't imply a necessary
>> configuration change (except the upgrade_levels flag) on the computes to
>> achieve that, right?
>>
>> http://docs.openstack.org/developer/nova/upgrade.html
>
> Reading that page: "At this point, you must also ensure you update the 
> configuration, to stop using any deprecated features or options, and perform 
> any required work to transition to alternative features.”
>
> So yes, "updating your configuration” is an expected action. I’m not sure why 
> this is so alarming.

We did make this promise:
https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html#requirements

It's bending that configuration requirement a little bit.
That requirement was originally added at the direct request of operators.

Now there is a need to tidy up your configuration after completing the
upgrade to N+1 before upgrading to N+2, but I believe that was assumed
to happen at the end of the N+1 upgrade, using the N+1 release notes.
The idea being that warning messages in the logs, etc., would help that all
get fixed before attempting the next upgrade. But I agree that's not
what the docs are currently saying.

Thanks,
John



Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Ed Leafe
On Jan 26, 2017, at 7:50 AM, Sylvain Bauza  wrote:
> 
> That's where I think we have another problem, which is bigger than the
> corner case you mentioned above : when upgrading from Newton to Ocata,
> we said that all Newton computes have to be upgraded to the latest point
> release. Great. But we forgot to identify that it would also require to
> *modify* their nova.conf so they would be able to call the placement API.
> 
> That looks to me more than just a rolling upgrade mechanism. In theory,
> a rolling upgrade process accepts that N-1 versioned computes can talk
> to N versioned other services. That doesn't imply a necessary
> configuration change (except the upgrade_levels flag) on the computes to
> achieve that, right?
> 
> http://docs.openstack.org/developer/nova/upgrade.html

Reading that page: "At this point, you must also ensure you update the
configuration, to stop using any deprecated features or options, and perform
any required work to transition to alternative features."

So yes, "updating your configuration" is an expected action. I'm not sure why
this is so alarming.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread John Garbutt
On 26 January 2017 at 13:50, Sylvain Bauza  wrote:
> Le 26/01/2017 05:42, Matt Riedemann a écrit :
>> This is my public hand off to Sylvain for the work done tonight.
>>
>
> Thanks Matt for your help yesterday, it was awesome to count you in even
> though you're personally away.
>
>
>> Starting with the multinode grenade failure in the nova patch to
>> integrate placement with the filter scheduler:
>>
>> https://review.openstack.org/#/c/417961/
>>
>> The test_schedule_to_all_nodes tempest test was failing in there because
>> that test explicitly forces hosts using AZs to build two instances.
>> Because we didn't have nova.conf on the Newton subnode in the multinode
>> grenade job configured to talk to placement, there was no resource
>> provider for that Newton subnode when we started running smoke tests
>> after the upgrade to Ocata, so that test failed since the request to the
>> subnode had a NoValidHost (because no resource provider was checking in
>> from the Newton node).
>>
>
> That's where I think the current implementation is weird: if you force
> the scheduler to return you a destination (without even calling the
> filters) by just verifying that the corresponding service is up, then why
> do you need to get the full list of computes before that?
>
> To the placement extent, if you just *force* the scheduler to return you
> a destination, then why should we verify if the resources are happy?
> FWIW, we now have fully different semantics that replace the
> "force_hosts" thing that I hate: it's called
> RequestSpec.requested_destination and it actually verifies the filters
> only for that destination. No straight bypass of the filters like
> force_hosts does.

That's just a symptom though, as I understand it?

It seems the real problem is that placement isn't configured
on the old node, which by accident is what most deployers are likely
to hit if they didn't set up placement when upgrading last cycle.

>> Grenade is not topology aware so it doesn't know anything about the
>> subnode. When the subnode is stacked, it does so via a post-stack hook
>> script that devstack-gate writes into the grenade run, so after stacking
>> the primary Newton node, it then uses Ansible to ssh into the subnode
>> and stack Newton there too:
>>
>> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629
>>
>>
>> logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296
>>
>>
>> And placement was optional in Newton so, you know, problems.
>>
>
> That's where I think we have another problem, which is bigger than the
> corner case you mentioned above: when upgrading from Newton to Ocata,
> we said that all Newton computes have to be upgraded to the latest point
> release. Great. But we forgot to identify that it would also require
> *modifying* their nova.conf so they would be able to call the placement API.
>
> That looks to me more than just a rolling upgrade mechanism. In theory,
> a rolling upgrade process accepts that N-1 versioned computes can talk
> to N versioned other services. That doesn't imply a necessary
> configuration change (except the upgrade_levels flag) on the computes to
> achieve that, right?
>
> http://docs.openstack.org/developer/nova/upgrade.html

We normally say the config that worked last cycle should be fine.

We probably should have said placement was required last cycle, then
this wouldn't have been an issue.

>> Some options came to mind:
>>
>> 1. Change the test to not be a smoke test which would exclude it from
>> running during grenade. QA would barf on this.
>>
>> 2. Hack some kind of pre-upgrade callback from d-g into grenade just for
>> configuring placement on the compute subnode. This would probably
>> require adding a script to devstack just so d-g has something to call so
>> we could keep branch logic out of d-g, like what we did for the
>> discover_hosts stuff for cells v2. This is more complicated than what I
>> wanted to deal with tonight with limited time on my hands.
>>
>> 3. Change the nova filter scheduler patch to fallback to get all compute
>> nodes if there are no resource providers. We've already talked about
>> this a few times already in other threads and I consider it a safety net
>> we'd like to avoid if all else fails. If we did this, we could
>> potentially restrict it to just the forced-host case...
>>
>> 4. Setup the Newton subnode in the grenade run to configure placement,
>> which I think we can do from d-g using the features yaml file. That's
>> what I opted to go with and the patch is here:
>>
>> https://review.openstack.org/#/c/425524/
>>
>> I've made the nova patch dependent on that *and* the other grenade patch
>> to install and configure placement on the primary node when upgrading
>> from Newton to Ocata.
>>
>> --
>>
>> That's where we're at right now. If #4 fails, I think we are stuck with
>> adding a workaround for 

Re: [openstack-dev] [glance] FFE Request

2017-01-26 Thread Brian Rosmaita
On 1/26/17 8:52 AM, Steve Lewis wrote:
> I'm requesting an FFE to enable us to complete the work described as the
> "Rolling Upgrades" priority [0].
> 
Thanks, Steve.  FFE requests are on the agenda for today's
Glance meeting at 14:00 UTC.

cheers,
brian


> 
> [0]
> http://specs.openstack.org/openstack/glance-specs/priorities/ocata-priorities.html
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] FFE Request

2017-01-26 Thread Steve Lewis
I'm requesting an FFE to enable us to complete the work described as the
"Rolling Upgrades" priority [0].


[0]
http://specs.openstack.org/openstack/glance-specs/priorities/ocata-priorities.html

-- 
SteveL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

2017-01-26 Thread Sylvain Bauza


Le 26/01/2017 05:42, Matt Riedemann a écrit :
> This is my public hand off to Sylvain for the work done tonight.
> 

Thanks Matt for your help yesterday, it was awesome to count you in even
though you're personally away.


> Starting with the multinode grenade failure in the nova patch to
> integrate placement with the filter scheduler:
> 
> https://review.openstack.org/#/c/417961/
> 
> The test_schedule_to_all_nodes tempest test was failing in there because
> that test explicitly forces hosts using AZs to build two instances.
> Because we didn't have nova.conf on the Newton subnode in the multinode
> grenade job configured to talk to placement, there was no resource
> provider for that Newton subnode when we started running smoke tests
> after the upgrade to Ocata, so that test failed since the request to the
> subnode had a NoValidHost (because no resource provider was checking in
> from the Newton node).
> 

That's where I think the current implementation is weird: if you force
the scheduler to return you a destination (without even calling the
filters) by just verifying that the corresponding service is up, then why
do you need to get the full list of computes before that?

To the placement extent, if you just *force* the scheduler to return you
a destination, then why should we verify if the resources are happy?
FWIW, we now have fully different semantics that replace the
"force_hosts" thing that I hate: it's called
RequestSpec.requested_destination and it actually verifies the filters
only for that destination. No straight bypass of the filters like
force_hosts does.

> Grenade is not topology aware so it doesn't know anything about the
> subnode. When the subnode is stacked, it does so via a post-stack hook
> script that devstack-gate writes into the grenade run, so after stacking
> the primary Newton node, it then uses Ansible to ssh into the subnode
> and stack Newton there too:
> 
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629
> 
> 
> logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296
> 
> 
> And placement was optional in Newton so, you know, problems.
> 

That's where I think we have another problem, which is bigger than the
corner case you mentioned above: when upgrading from Newton to Ocata,
we said that all Newton computes have to be upgraded to the latest point
release. Great. But we forgot to identify that it would also require
*modifying* their nova.conf so they would be able to call the placement API.

That looks to me more than just a rolling upgrade mechanism. In theory,
a rolling upgrade process accepts that N-1 versioned computes can talk
to N versioned other services. That doesn't imply a necessary
configuration change (except the upgrade_levels flag) on the computes to
achieve that, right?

http://docs.openstack.org/developer/nova/upgrade.html
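
To make the point concrete, the kind of nova.conf change being discussed
would look roughly like the following. This is an illustrative sketch only:
the section and option names follow the Ocata placement docs, but every value
here (URLs, account names, passwords) is a deployment-specific assumption,
not something taken from this thread.

```ini
# Hypothetical excerpt for a Newton compute's nova.conf, so it can
# talk to the placement API. Exact values depend on your deployment.
[placement]
auth_type = password
auth_url = http://controller/identity
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
os_region_name = RegionOne

[upgrade_levels]
# The only config change a rolling upgrade normally expects:
compute = auto
```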


> Some options came to mind:
> 
> 1. Change the test to not be a smoke test which would exclude it from
> running during grenade. QA would barf on this.
> 
> 2. Hack some kind of pre-upgrade callback from d-g into grenade just for
> configuring placement on the compute subnode. This would probably
> require adding a script to devstack just so d-g has something to call so
> we could keep branch logic out of d-g, like what we did for the
> discover_hosts stuff for cells v2. This is more complicated than what I
> wanted to deal with tonight with limited time on my hands.
> 
> 3. Change the nova filter scheduler patch to fallback to get all compute
> nodes if there are no resource providers. We've already talked about
> this a few times already in other threads and I consider it a safety net
> we'd like to avoid if all else fails. If we did this, we could
> potentially restrict it to just the forced-host case...
> 
> 4. Setup the Newton subnode in the grenade run to configure placement,
> which I think we can do from d-g using the features yaml file. That's
> what I opted to go with and the patch is here:
> 
> https://review.openstack.org/#/c/425524/
> 
> I've made the nova patch dependent on that *and* the other grenade patch
> to install and configure placement on the primary node when upgrading
> from Newton to Ocata.
> 
> -- 
> 
> That's where we're at right now. If #4 fails, I think we are stuck with
> adding a workaround for #3 into Ocata and then remove that in Pike when
> we know/expect computes to be running placement (they would be in our
> grenade runs from ocata->pike at least).
> 


Given the above two problems that I stated, I think I'm in favor of a #3
approach now that would do the following :

 - modify the scheduler so that it's acceptable to have the placement
API returning nothing if you force hosts

 - modify the scheduler so that, in the event of an empty list returned by
the placement API, it falls back to getting the list of all computes


That still leaves the problem where a few computes are not all 
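
For concreteness, the option-3 safety net described above could be sketched
like this. The helper names are hypothetical, not the actual nova scheduler
code; it only illustrates the proposed control flow.

```python
# Hypothetical sketch of the "option 3" fallback, NOT nova's real code:
# if placement reports no resource providers (e.g. Newton computes not
# yet configured to talk to the placement API), fall back to considering
# every known compute node, as the scheduler did before.

def hosts_to_filter(spec, placement, host_manager):
    """Return candidate compute nodes for the filter scheduler."""
    if spec.force_hosts or spec.force_nodes:
        # Forced destinations bypass the filters anyway, so missing
        # resource-provider data should not cause a NoValidHost here.
        return host_manager.get_all_host_states()

    providers = placement.get_resource_providers(spec)
    if not providers:
        # Safety net: old-style behaviour while computes catch up.
        return host_manager.get_all_host_states()
    return [host_manager.host_state_for(p) for p in providers]
```

The idea is simply that an empty answer from placement degrades to the
Newton behaviour instead of failing the boot.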

[openstack-dev] [devstack][telemetry][gate] Some gate jobs are broken

2017-01-26 Thread Julien Danjou
Hi,

I just want to bring everyone's attention to a recent change in devstack
that broke some of our gate jobs.

Mehdi was kind enough to investigate and send 2 potential fixes:

  https://review.openstack.org/425620
  https://review.openstack.org/425615

It'd be nice if the concerned parties could review those fixes and
acknowledge ASAP as it's a blocker for the Telemetry team. :)

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




[openstack-dev] [tripleo] focus for RC1 week

2017-01-26 Thread Emilien Macchi
Folks,

Here's a short term agenda for action items in TripleO team:

## Jan 26th (today)
We are releasing python-tripleoclient and stable/ocata will be created
for this project.
If you're working on a bug that is candidate for backport, please tag
it "ocata-backport-potential".
Priority has to be critical or high to be backported.

## Jan 27th (tomorrow)
Once we have python-tripleoclient in place with a stable/ocata branch,
we'll need to do advanced testing of TripleO CI and make sure
everything is in place to deploy Ocata packaging from the right RDO
builds.
We'll work closely with RDO folks on this side, but both
project-config & tripleo-ci should be ready™.

## Next week until March 10th
RC & final releases.
Feature & CI freeze will start.
During this time, folks should focus on upgrades from Newton to Ocata,
fixing bugs [1].
Please do the FFE or CIFE [2] requests on openstack-dev [tripleo].

Please let us know any concern or feedback, it's always welcome!

[1] https://launchpad.net/tripleo/+milestone/ocata-3
 https://launchpad.net/tripleo/+milestone/ocata-rc1
[2] I think I just invented it: CI feature exception

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] PTL Candidacy for Pike

2017-01-26 Thread Rob Cresswell
Hi everyone,

I’m announcing my candidacy for PTL of Horizon for the Pike release cycle.

Over the next 6 months I’d like to:

- Narrow the blueprint/feature scope to ensure we focus on high priority areas, 
like non-performant panels or buggy user experiences. In previous cycles, our 
attitude has been to opportunistically accept features as they were developed; 
however, given the rapid decline in review count I don’t believe this is 
maintainable going forward. I’d like to follow a similar structure to Neutron, 
with a smaller feature count that is well understood by core reviewers, and
the remaining time spent on addressing bugs. We'll do this by accepting only a
handful of blueprints at a time, and designating a core reviewer to monitor 
each blueprint's progress.

- Continue working with Keystone via cross-project meetings to fix key bugs in 
Identity management. Over the past couple of cycles we’ve closed some long 
standing Keystone interaction bugs in Horizon, and I’d like to continue this 
effort. There are several patches still to work on in Horizon and Django 
OpenStack Auth, and we should make sure we capitalise on the excellent help 
from the Keystone community to improve the Horizon Identity panels.

- Keep up community interaction. Richard has maintained a list of priority 
patches through Ocata, as well as sending out weekly meeting reminders and 
progress updates. I’ll continue this work, and hold bug days to highlight key 
issues to invest our time in.

Thanks,

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-26 Thread Sofer Athlan-Guyot
Hi,

Steven Hardy  writes:

> On Wed, Jan 25, 2017 at 02:59:42PM +0200, Marios Andreou wrote:
>> Hi, as part of the composable upgrades workflow shaping up for Newton to
>> Ocata, we need to install the new hiera hook that was first added with
>> [1] and disable the old hook and data as part of the upgrade
>> initialization [2]. Most of the existing hieradata was ported to use the
>> new hook in [3]. The deletion of the old hiera data is necessary for the
>> Ocata upgrade, but it also means it will break any plugins still using
>> the 'old' os-apply-config hiera hook.
>> 
>> In order to be able to upgrade to Ocata any templates that define hiera
>> data need to be using the new hiera hook and then the overcloud nodes
>> need to have the new hook installed (installing is done in [2] as a
>> matter of necessity, and that is what prompted this email in the first
>> place). I've had a go at updating all the plugin templates that are
>> still using the old hiera data with a review at [4], which I have -1'd for now.
>> 
>> I'll try and reach out to some individuals more directly as well but
>> wanted to get the review at [4] and this email out as a first step,
>
> Thanks for raising this marios, and yeah it's unfortunate, as we've had to
> do a switch from the old to the new hiera hook this release without a
> transition period where both work.
>
> I think we probably need to do the following:
>
> 1. Convert anything in t-h-t refering to the old hook to the new (seems you
> have this in progress, we need to ensure it all lands before ocata)
>
> 2. Write a good release note for t-h-t explaining the change, referencing
> docs which show how to convert to use the new hook
>
> 3. Figure out a way to make the 99-refresh-completed script signal failure
> if anyone tries to deploy with the old hook (vs potentially silently
> failing then hanging the deploy, which I think is what will happen atm).

I've created a bug to make sure this isn't forgotten before the release:

  https://bugs.launchpad.net/tripleo/+bug/1659540

>
> I think ensuring a good error path should mitigate this change, since it's
> fairly simple for folks to switch to the new hook provided we can document
> it and point to those docs in the error I think.
>
> Be good to get input from Dan on this too, as he might have ideas on how we
> could maintain both hooks for one release.

This would be ideal, and would remove the need for the previous bug.

Thanks,

>
> Thanks!
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [relmgt] PTL candidacy for Pike

2017-01-26 Thread Thierry Carrez
Hi!

I would like to submit my candidacy to return as PTL of the Release
Management team for the Pike cycle.

You may remember me as the release manager from Bexar to Grizzly, and
PTL of the Release Management team from Havana to Liberty. I'd like to
thank Doug Hellmann for his service as PTL from Mitaka to Ocata. Under
his leadership, the Release Management team transformed from an artisan
shop into a highly efficient and scalable factory. His focus on writing
down everything and introducing automation everywhere will make the work
of succeeding him easier than ever. But we always wanted to introduce
regular PTL rotation in the team, so Doug won't run again, and for Pike
I volunteer to take back that baton.

I would personally prefer to let someone else take it (and would love to
see other election candidates for this role!), but during this cycle we
probably failed to grow new members who would want to take over team
leadership. People with interest in cross-project functions like Release
Management and time to dedicate to it are a rare resource those days.

If elected, my plan is to:

- continue in the direction set by Doug toward more self-service
automation around Release Management, and focus the team on providing a
framework, advice and last-minute sanity checks before tagging releases.

- anticipate changes that we may need to do to accommodate the
introduction of new programming languages.

- add a few new members in the team, to grow the set of people able to
participate in a PTL rotation scheme in the future.

Thanks for reading until the last line!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-26 Thread Lingxian Kong
On Thu, Jan 26, 2017 at 9:37 PM, Rob Cresswell  wrote:

> I'll put up Security Groups and Floating IPs once they start moving
> (maintaining huge patch chains is a waste of time)


​Huge +1!​



Cheers,
Lingxian Kong (Larry)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Performance] Gathering quota usage data in Horizon

2017-01-26 Thread Lingxian Kong
On Fri, Jan 27, 2017 at 12:28 AM, Rob Cresswell <
robert.cressw...@outlook.com> wrote:

> There's quite a lot to this. So, first off, quotas in horizon are not in
> great shape, and it should be one of our priorities next cycle to improve
> on this. As you've pointed out, it seems any check on quotas right now runs
> multiple serial API calls for everything that has quotas; I haven't checked
> this myself, but others have mentioned the same behaviour.
>
> I don't think anyone is actively working on improving quota behaviour, but
> in the past cycle these two efforts spring to mind:
> - https://blueprints.launchpad.net/horizon/+spec/make-quotas-great-again
> - https://review.openstack.org/#/c/334017/
>
> If there are people with time to work on this effort I'd be happy to
> review. Instances management, quotas, overview pages, Identity work are
> what I'd currently consider the top priorities for improvement.
>

​Thanks Rob for the information. We (Catalyst Cloud) are far more happy to
help with this effort​. I will take a look at the blueprint and check the
latest status of that work with the author.

Cheers,
Lingxian Kong (Larry)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-01-26 Thread Davanum Srinivas
Team,

A bit of history, we had a client generated from swagger definition for a
while in Magnum, we plucked it out into python-k8sclient which then got
used by fuel-ccp, kuryr, etc. Recently the kubernetes team started an effort
called client-python. Please see the 1.0.0b1 announcement.

* It's on pypi[1] and readthedocs[2]
* I've ported the e2e tests in python-k8sclient that run against an actual
k8s setup and got that working
* I've looked at various tests in kuryr, fuel-ccp, magnum etc. to see what
could be ported as well. Most of it is merged already. I have a couple of
things in progress

So, when client-python hits 1.0.0, Can we please mothball our
python-k8sclient and switch over to the k8s community supported option?
Can you please evaluate what's missing so we can make sure those things get
into 1.0.0 final?

Thanks,
Dims

[1] https://pypi.python.org/pypi/kubernetes
[2] http://kubernetes.readthedocs.io/en/latest/kubernetes.html
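
For anyone evaluating the switch, basic usage of the new client looks
roughly like this. This is a sketch against the beta API as I understand it;
the `pod_names` helper is made up for illustration, and it takes the API
object as a parameter so the logic can be exercised without a live cluster.

```python
# Sketch of the kubernetes-incubator client-python (pip install kubernetes),
# roughly replacing equivalent python-k8sclient calls. The helper takes the
# API object as an argument, so a fake can stand in when no cluster exists.

def pod_names(core_v1, namespace="default"):
    """List pod names in a namespace via a CoreV1Api-like object."""
    resp = core_v1.list_namespaced_pod(namespace)
    return [item.metadata.name for item in resp.items]

def main():
    # Real usage assumes a reachable cluster and a local kubeconfig.
    from kubernetes import client, config
    config.load_kube_config()
    print(pod_names(client.CoreV1Api()))

if __name__ == "__main__":
    main()
```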

-- Forwarded message --
From: 'Mehdy Bohlool' via Kubernetes developer/contributor discussion <
kubernetes-...@googlegroups.com>
Date: Wed, Jan 25, 2017 at 8:34 PM
Subject: client-python Beta Release
To: Kubernetes developer/contributor discussion <
kubernetes-...@googlegroups.com>, kubernetes-us...@googlegroups.com


Python client is now in beta. Please find more information here:
https://github.com/kubernetes-incubator/client-python/releases/tag/v1.0.0b1

You can reach the maintainers of this project at SIG API Machinery. If
you have any problem with the client or any suggestions, please file an
issue.


Mehdy Bohlool |  Software Engineer |  me...@google.com |  mbohlool@github





-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Performance] Gathering quota usage data in Horizon

2017-01-26 Thread Rob Cresswell
There's quite a lot to this. So, first off, quotas in horizon are not in great 
shape, and it should be one of our priorities next cycle to improve on this. As 
you've pointed out, it seems any check on quotas right now runs multiple serial 
API calls for everything that has quotas; I haven't checked this myself, but 
others have mentioned the same behaviour.

I don't think anyone is actively working on improving quota behaviour, but in 
the past cycle these two efforts spring to mind:
- https://blueprints.launchpad.net/horizon/+spec/make-quotas-great-again
- https://review.openstack.org/#/c/334017/

If there are people with time to work on this effort I'd be happy to review. 
Instances management, quotas, overview pages, Identity work are what I'd 
currently consider the top priorities for improvement.

Rob

On 26 January 2017 at 03:24, Lingxian Kong 
> wrote:
Hi, guys,

Sorry for recalling this thread after 1 year, but we are currently suffering
from poor performance issues on our public cloud.

As usage by our customers keeps growing, we are at a stage where we should
seriously pay more attention to the Horizon performance problem, so Google
took me to this email after a lot of searching.

Currently, when loading a page that may contain some buttons for
creating/allocating resources (e.g. 'Access & Security'), Horizon will check
the quota usage first to see if a specific button should be disabled or not,
and the checks happen *in sequence*, which makes things even worse.

What's more, the quota usage query in Horizon is included in one function[1];
it will invoke the Nova, Cinder, and Neutron (perhaps more in the future) APIs
to get usage for a bunch of resources, rather than just the resource the page
is rendering, which is another flaw IMHO. I know that this function call is
already cached, but most of our customers' pain comes from the first click.
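
One possible mitigation for the sequential calls, sketched below, is to issue
the independent usage queries concurrently. This is not Horizon code; the
fetcher callables are hypothetical stand-ins for the real
novaclient/cinderclient/neutronclient calls.

```python
# Sketch only: run the independent per-service quota-usage calls
# concurrently instead of one after another.
from concurrent.futures import ThreadPoolExecutor

def gather_usages(fetchers):
    """fetchers: mapping of service name -> zero-argument callable."""
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        futures = {name: pool.submit(fn) for name, fn in fetchers.items()}
        # Wall time is now roughly the slowest single call, not the sum.
        return {name: fut.result() for name, fut in futures.items()}
```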

So, I have a few questions:

1. Does Horizon support some config option that could disable the quota check?
As a public cloud, it doesn't make much sense that usage should be limited, and
we have a monitoring tool that will increase quotas automatically when a
customer's usage is about to hit the quota limit. So, getting rid of that check
would save our customers an appreciable amount of waiting time.

2. Another option is to support getting quota usage for a specific resource
rather than all resources, e.g. when loading the floating IP tab, Horizon only
gets the floating IP quota usage from Neutron, which is only 2 API calls.

3. I found this FFE[2] which is great (also replied), but splitting tabs is not
the end; more effort should be put into performance improvement.

4. Some other trivial improvement like this: https://review.openstack.org/425494

[1]: 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/usage/quotas.py#L396
[2]: 
http://openstack.markmail.org/thread/ra3brm6voo4ouxtx#query:+page:1+mid:oata2tifthnhy5b7+state:results


Cheers,
Lingxian Kong (Larry)

On Wed, Dec 23, 2015 at 9:50 PM, Timur Sufiev 
> wrote:
Duncan,

Thank you for the suggestion, will do.

On Wed, 23 Dec 2015 at 10:55, Duncan Thomas 
> wrote:
On a cloud with a large number of tenants, this is going to involve a large 
number of API calls. I'd suggest you put a spec into cinder to add an API call 
for getting the totals straight out of the DB - it should be easy enough to add.

On 18 December 2015 at 20:35, Timur Sufiev 
> wrote:
Matt,

actually Ivan (Ivan, thanks a lot!) showed me the exact cinderclient call that
I needed. Now I know how to retrieve Cinder quota usage info per-tenant; it
seems that to retrieve the same info cloud-wide I should sum up all the
available tenant usages.
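
The cloud-wide summing step itself is straightforward once each tenant's
usage is in hand. A sketch follows; the data shape is illustrative, loosely
modeled on what the usage=True quota call reports, not the exact cinderclient
response objects.

```python
# Illustrative only: given per-tenant usage dicts shaped roughly like
# {"gigabytes": {"in_use": 10, "limit": 100}, ...}, sum the in-use
# values across tenants for a cloud-wide total.

def cloud_wide_in_use(per_tenant_usages):
    totals = {}
    for usage in per_tenant_usages:
        for resource, detail in usage.items():
            totals[resource] = totals.get(resource, 0) + detail["in_use"]
    return totals
```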

With Cinder quota usages being sorted out, my next goals are Nova and Neutron.
As for Neutron, there are plenty of quota-related calls I'm going to play with
next week; perhaps there is something suitable for my use case. But as for
Nova, I haven't found something similar to the cinderclient 'usage' call, so
help from someone familiar with Nova is very appreciated :).

[0] 
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L36

On Fri, Dec 18, 2015 at 5:17 PM Matt Riedemann 
> wrote:


On 12/17/2015 2:40 PM, Ivan Kolodyazhny wrote:
> Hi Timur,
>
> Did you try this Cinder API [1]?  Here [2] is cinderclient output.
>
>
>
> [1]
> https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/quotas.py#L33
> [2] http://paste.openstack.org/show/482225/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Dec 17, 2015 at 8:41 PM, Timur Sufiev 
> 
> >> wrote:
>
>   

[openstack-dev] [glance] PTL candidacy for Pike

2017-01-26 Thread Brian Rosmaita
Hello everyone,

I'm asking for the opportunity to continue to serve as the PTL of
Glance into the Pike cycle.

The Current State of Glance
---

This has been an interesting cycle.  We accomplished the community
goals and eventually merged Community Images (after a thorough and
extensive discussion of almost every aspect of the feature),
completing half of our Ocata priorities.  We also fixed some bugs and
implemented some lite specs.  As of this date, however, our Rolling
Upgrades priority is going to require a FFE, and there's not a chance
that we're going to merge anything related to Image Import.

That's kind of a mixed record to be running for re-election on, so I
figure I should explain what I learned and how I see things shaping up
for Pike if I continue as PTL.

The Glance Community


First, I'd like to discuss the Glance Community.  There are several
contributors who are active, energetic, knowledgeable, and great to
work with.  There are some contributors who aren't quite so active,
but who attend meetings and help out occasionally reviewing the weekly
priority items.  Unfortunately, there are some current Glance cores
who don't currently fall into either of the above categories.

Before the Pike PTG, I'm going to ask people on the core list to
consult with their managers and determine whether they have sufficient
bandwidth to be effective core contributors to the project.  I was
hoping that people would naturally do this themselves, but that didn't
happen during Ocata ... primarily, I think, because it's an honor to
be a core contributor to an OpenStack project, and it's not easy to
face up to hanging up your stirrups, or turning in your badge and gun,
or whatever the appropriate metaphor is.  But for the success of the
project, I need an honest self-assessment from the current cores of
the amount of time they can commit to Glance.

(I didn't want to force the issue because, as is common with the core
contributors across all our projects, we're talking about high-quality
developers here, and I was hoping they'd be able to find more time to
work on Glance.  That was a mistake.  I'm mentioning it not to
embarrass any of the people I'm talking about, but rather as a point
of information to anyone reading this who will be a first-time PTL in
Pike.)

Anyway, my goal is to have the core team reconstituted before the Pike
PTG so that we'll have a better idea of what kind of bandwidth the
team will have in Pike.

I think this will cascade into a better overall Glance community
experience because there will be more participation in the weekly
meetings and on IRC, making the community feel more vibrant.  Further,
because those active people are currently reviewing all the time, we
haven't had as much bandwidth for bug triaging and bug smashing.  We
need to change that in Pike.  Plus, a larger group of active cores
will provide Glance with a bigger pool of PTL candidates for the
Queens cycle.

Glance Priorities for Pike
--------------------------

This is how I see things shaping up for Pike.

1. Image Import

Now I understand why my predecessors as PTL had such a hard time with
this.  The situation is that this is an extremely important feature
for the OpenStack community in general, but it's not really a priority
for anyone in particular.  Thus, it's been difficult to keep one or
two people, no matter how well-intentioned, working steadily on it
(and I don't exempt myself from this statement).

Since image import is an important feature for Glance, the entire team
needs to be working on it.  And we can make it a priority for everyone
by committing to *not* merge anything else (other than security bugs)
until image import is merged.  You can see how this is connected to my
discussion above about the core team.  I want all active cores to be
working on some part of this, and that way we are all accountable to
each other much more explicitly than we have been in previous cycles.

I'm not even going to enumerate anything else.  We'll work on the
"community goals", of course, but not until after image import has
been merged.  And we may have some more to do on the rolling
upgrades work.

We'll discuss other possible items at the PTG, but at this point I
don't want to look past image import.

Conclusion
----------

Well, that's my election platform.  I'd like to continue as Glance
PTL, and I've outlined above what my plans are for the Pike cycle.

Thank you for your consideration,
Brian Rosmaita

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] PTG planning

2017-01-26 Thread Alexandra Settle
Hi everyone,

For anyone who is not on the docs mailing list but is interested in 
attending/knowing what we’re up to for the PTG, the joint docs and i18n 
planning is here: https://etherpad.openstack.org/p/docs-i18n-ptg-pike

I would like to extend an invitation to anyone who would like to attend and get 
involved but hasn't previously. The docs sessions are Monday and Tuesday, 
but I have been granted the time to float around till Friday. I am happy to sit 
down and discuss docs-things with you all.

If anyone would like assistance from a doc team member with their developer 
documentation or projects, please reach out and contact us. We’d be happy to 
help in any way we can ☺

I promise we’re nice.

Thanks,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-26 Thread Maxim Nestratov

26-Jan-17 12:08, Thierry Carrez wrote:


Michael Still wrote:

I think #3 is the right call for now. The person we had working on
privsep has left the company, and I don't have anyone I could get to
work on this right now. Oh, and we're out of time.

Yes, as much as I'm an advocate of privsep adoption, I don't think the
last minutes before feature freeze are the best moment to introduce a
single isolated privsep-backed command in Nova. So I'd recommend #3.

In an ideal world, Nova would start migrating existing commands early in
Pike so that in the near future, adding new privsep-backed commands
doesn't feel so alien.



Yeah, option #3 works for us perfectly. Thanks for suggesting it, Matt.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] PTL non-candidacy

2017-01-26 Thread Richard Jones
Hi folks,

I won't be standing for PTL for the Pike release.

Ocata has been quite the ride, and I will continue to be a contributor
to Horizon after this release.

Thanks for giving me a go in the big seat, and I look forward to
supporting whoever steps up as PTL for Pike!


 Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-26 Thread Thierry Carrez
Michael Still wrote:
> I think #3 is the right call for now. The person we had working on
> privsep has left the company, and I don't have anyone I could get to
> work on this right now. Oh, and we're out of time.

Yes, as much as I'm an advocate of privsep adoption, I don't think the
last minutes before feature freeze are the best moment to introduce a
single isolated privsep-backed command in Nova. So I'd recommend #3.

In an ideal world, Nova would start migrating existing commands early in
Pike so that in the near future, adding new privsep-backed commands
doesn't feel so alien.
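For readers new to this thread, the privsep model registers privileged
helpers as ordinary Python functions on a context object and dispatches
calls to them through a single point (in the real library, across a process
boundary into a privileged daemon). The toy sketch below is stdlib-only and
is NOT oslo.privsep's actual API; it only illustrates the
registration-and-dispatch shape:

```python
# Toy illustration of the entrypoint pattern that privsep-style
# libraries use. This is NOT oslo.privsep's API: the "privileged
# process" boundary is elided, and PrivContext here is a hypothetical
# stand-in, not the real class.

class PrivContext:
    def __init__(self, name):
        self.name = name
        self._entrypoints = {}

    def entrypoint(self, func):
        """Decorator: register func as a privileged helper and return
        a proxy that routes calls through dispatch()."""
        self._entrypoints[func.__name__] = func
        return lambda *a, **kw: self.dispatch(func.__name__, *a, **kw)

    def dispatch(self, name, *args, **kwargs):
        # In real privsep this call crosses a process boundary to a
        # daemon running with elevated capabilities.
        return self._entrypoints[name](*args, **kwargs)


ctx = PrivContext("sys_admin")

@ctx.entrypoint
def read_protected(path):
    # Would run with elevated privileges in the privileged daemon.
    return "contents of %s" % path


print(read_protected("/etc/fstab"))
# contents of /etc/fstab
```

The contrast with rootwrap is that the helper is a Python function call
rather than a shelled-out command filtered by a config file, which is why
migrating commands one by one feels "alien" until a project adopts the
pattern broadly.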

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] [tripleo-quickstart] pending reviews for composable upgrade for Ocata

2017-01-26 Thread mathieu bultel
Hi,

I'm sending this email to the list to request reviews of the composable
upgrade work I have done in TripleO quickstart. It has been pending for a
while (since Dec 4 for one of these 2 reviews), and I have addressed all
the comments promptly, rebased, and so on [1].
These reviews are required, and very important, for 3 reasons:
1/ They address the following BP: [2]
2/ They would give the other Squads and DFGs a tool to start playing with
composable upgrades in order to support their own components.
3/ They will be a first step in the TripleO-CI / TripleO-Quickstart
transition, supporting the tripleo-ci upgrade jobs that we implemented a
few weeks ago.

I updated the documentation (README) regarding the upgrade workflow, and the
commit message explains the deployment workflow. I know it's not easy to
review this stuff, and the tripleo-quickstart cores probably don't consider
this subject a priority. I don't think I can do much more now to make the
review easier for the cores.

This was one of my concerns about adding all the very specific extras
roles (upgrade / baremetal / scale) to one common repo: losing flexibility
and responsiveness. But it's more than that...

I'm planning to write a "How To" to help other DFGs/Squads work on
upgrades, but since this work is still under review, I'm stuck.

Thanks.

[1]
tripleo-quickstart repo:
https://review.openstack.org/#/c/410831/
tripleo-quickstart-extras repo:
https://review.openstack.org/#/c/416480/

[2]

https://blueprints.launchpad.net/tripleo/+spec/tripleo-composable-upgrade-job



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-26 Thread Adrian Turjak
Yeah, I'll +1 the patch. The move is an easy one once the bug is resolved.

I'm quite amused by the bug, and that someone else filed the same bug a couple 
of hours right before me (and I missed it). Will review his fix.

Looking forward to your other two patches.

On 26/01/2017 9:37 PM, Rob Cresswell wrote:
Wow, lots of replies. Thanks for the FFE. I'd request that we proceed with the panel separation now, and resolve the final location when any bug fixes have merged. I don't really want to hold up a whole chain of patches over a one line path change.
 API Access and Key Pairs are done, and I'll put up Security Groups and Floating IPs once they start moving (maintaining huge patch chains is a waste of time)


Rob


On 26 January 2017 at 02:09, Adrian Turjak wrote:

I've posted some comments on the API Access patch.


The blueprint was saying that 'API Access' would be both at the Project level, but the way panel groups worked meant that setting the 'default' panel group didn't work when that dashboard already had panel groups, since the default panel group was annoyingly
 hidden away because of somewhat odd template logic.

I submitted a bug report here:
https://bugs.launchpad.net/horizon/+bug/1659456

And proposed a fix for that here:
https://review.openstack.org/#/c/425486

With that change the default group panels are not hidden, and displayed at the same level as the other panel groups. This then allows us to move API Access to the top level where the blueprint says. This makes much more sense since API Access isn't a compute
 only thing.



On 26/01/17 12:02, Fox, Kevin M wrote:


Big Thanks! from me too. The old UI here was very unintuitive, so I had to field a lot of questions related to it. This is great. :)

Kevin


From: Lingxian Kong [anlin.k...@gmail.com]
Sent: Wednesday, January 25, 2017 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon] FFE Request




Hi, Rob,


First, thanks for your work!


What's your plan for the other two tabs (security group, floating IP)? I can see the split is very helpful both from a performance perspective and from an end user's perspective.


BTW, a huge +1 for this FFE!

Cheers,
Lingxian Kong (Larry)

On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak wrote:


+1


We very much need this as the performance of that panel is awful. This solves that problem while being a fairly minor code change which also provides much better UX.



On 26/01/2017 8:07 AM, Rob Cresswell wrote:


o/ 


I'd like to request an FFE on https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security.
 This blueprint splits up the access and security tabs into 4 distinct panels. The first two patches are https://review.openstack.org/#/c/408247
 and https://review.openstack.org/#/c/425345/ 


It's low risk; no API layer changes, mostly just moving code around.


Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev












__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Planning for the Pike PTG

2017-01-26 Thread Thierry Carrez
Ken'ichi Ohmichi wrote:
> I am preparing for PTG sessions.
> How much capacity is in each room ? 30 people or more?

Rooms are all different. Some can fit 60+, some can only fit 15. The
events team is trying to optimize room allocation based on predicted
attendance (what meeting(s) people indicate in their registration they
will attend). Registrations are still coming in (only a few tickets
left!), so allocation has not been finalized yet. I hope to get more
information soon, as several deadlines end this week [1].

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110676.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] FFE Request

2017-01-26 Thread Rob Cresswell
Wow, lots of replies. Thanks for the FFE. I'd request that we proceed with the 
panel separation now, and resolve the final location when any bug fixes have 
merged. I don't really want to hold up a whole chain of patches over a one line 
path change. API Access and Key Pairs are done, and I'll put up Security Groups 
and Floating IPs once they start moving (maintaining huge patch chains is a 
waste of time)

Rob

On 26 January 2017 at 02:09, Adrian Turjak wrote:
I've posted some comments on the API Access patch.


The blueprint was saying that 'API Access' would be both at the Project level, 
but the way panel groups worked meant that setting the 'default' panel group 
didn't work when that dashboard already had panel groups, since the default 
panel group was annoyingly hidden away because of somewhat odd template logic.

I submitted a bug report here:
https://bugs.launchpad.net/horizon/+bug/1659456

And proposed a fix for that here:
https://review.openstack.org/#/c/425486

With that change the default group panels are not hidden, and displayed at the 
same level as the other panel groups. This then allows us to move API Access to 
the top level where the blueprint says. This makes much more sense since API 
Access isn't a compute only thing.


On 26/01/17 12:02, Fox, Kevin M wrote:
Big Thanks! from me too. The old UI here was very unintuitive, so I had to 
field a lot of questions related to it. This is great. :)

Kevin

From: Lingxian Kong [anlin.k...@gmail.com]
Sent: Wednesday, January 25, 2017 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon] FFE Request

Hi, Rob,

First, thanks for your work!

What's your plan for the other two tabs (security group, floating IP)? I can 
see the split is very helpful both from a performance perspective and from an 
end user's perspective.

BTW, a huge +1 for this FFE!




Cheers,
Lingxian Kong (Larry)

On Thu, Jan 26, 2017 at 9:01 AM, Adrian Turjak wrote:
+1

We very much need this as the performance of that panel is awful. This solves 
that problem while being a fairly minor code change which also provides much 
better UX.


On 26/01/2017 8:07 AM, Rob Cresswell <robert.cressw...@outlook.com> wrote:
o/

I'd like to request an FFE on 
https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-security.
This blueprint splits up the access and security tabs into 4 distinct panels. 
The first two patches are https://review.openstack.org/#/c/408247 and 
https://review.openstack.org/#/c/425345/

It's low risk; no API layer changes, mostly just moving code around.

Rob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] PTL Candidacy

2017-01-26 Thread Renat Akhmerov
Hi,

I'm Renat Akhmerov. I'm again running for PTL of Mistral in Pike.
Mistral is a workflow service developed within the OpenStack community from
the ground up.

In the last two development cycles (Newton and Ocata) we made huge progress
on improving project maturity. The workflow engine works two orders of
magnitude faster than a year ago and finally started working in multi-node
mode. We also significantly improved the Mistral docs and the Mistral
Dashboard, which is finally a usable tool, made a number of
backward-compatible improvements to the Mistral workflow language,
implemented an alternative RPC layer that eliminates the problems of the
previous one, and fixed hundreds of bugs and made smaller changes. But our
biggest achievement is that Mistral is now being used by even more users and
has found its place in fields such as NFV, deployment, and automation.

For the next cycle I'd like to propose the following roadmap which is built
on our users' needs:

* Performance & benchmarking
  * Less overhead per task
  * Big workflow graphs
  * Optimize ‘join’ tasks
* HA
  * Primarily we need to add a test harness to make sure that HA is achieved
* Failover. Take care of running workflows on:
  * Mistral component restart
  * Infrastructure failures (DB, MQ, network etc.)
* Usability
  * New CLI/API (more consistent and human friendly interface)
  * Debugging workflows
  * Workflow failure analysis (error messages, navigate through nested
workflows etc.)
* Refactor Actions subsystem
  * Formalised Python API to develop actions
  * Actions testability
  * Actions versioning (i.e. actions working with different versions of
OpenStack services)
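One roadmap item above, optimizing 'join' tasks, concerns tasks that must
wait for several upstream branches of the workflow graph to finish. The
sketch below is not Mistral's engine; it is just a toy topological walk
over a hand-written DAG that illustrates the join semantics being optimized:

```python
# Toy illustration of 'join' semantics in a task graph: a join task
# runs only after all of its upstream tasks have completed. This is
# not Mistral's engine, just a minimal scheduler over a dict-based DAG.

from collections import deque

def run_workflow(tasks, deps):
    """tasks: {name: callable}; deps: {name: [upstream names]}.

    Returns the task names in a valid execution order.
    """
    pending = {name: set(deps.get(name, [])) for name in tasks}
    ready = deque(n for n, d in pending.items() if not d)
    order = []
    while ready:
        name = ready.popleft()
        tasks[name]()          # execute the task body
        order.append(name)
        # Unblock downstream tasks whose last dependency just finished.
        for other, waiting_on in pending.items():
            if name in waiting_on:
                waiting_on.discard(name)
                if not waiting_on:
                    ready.append(other)
    return order

order = run_workflow(
    {"a": lambda: None, "b": lambda: None, "join": lambda: None},
    {"join": ["a", "b"]},  # 'join' waits for both branches
)
assert order.index("join") > order.index("a")
assert order.index("join") > order.index("b")
```

The performance concern on real engines is that each completing task must
check whether a join is now unblocked, which is where persistence and
locking costs show up under big workflow graphs.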

I'm hoping to gain your support regarding this roadmap.

We're always happy to get new contributors on the project and always ready
to help people interested in Mistral development get up to speed. The best
way to get in touch with us is IRC channel #openstack-mistral.

My patch to openstack/election repo: https://review.openstack.org/#/c/425573/ 


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev