[openstack-dev] [kolla] Kolla PTG Day #2

2017-02-17 Thread Steven Dake (stdake)
Hey folks,

To take some load off Michal I’ve set up remote participation for day #2 of the 
PTG.

Michal had suggested he might set up Zoom instead of WebEx; however, I haven’t 
seen that happen, and several people have asked on IRC how remote participation 
will work at the PTG.  If Michal sets up remote participation in some other 
way, ignore this email (he will follow up with a zoom.us calendar invite).

Note that I feel Google Hangouts is difficult to manage and is a non-starter 
because Google is not available to all of our worldwide participants.

See below for details:


From: Steven Dake 
Reply-To: "Steven Dake (stdake)" 
Date: Saturday, February 18, 2017 at 12:07 AM
To: "Steven Dake (stdake)" 
Subject: (Forward to others) WebEx meeting invitation: Kolla PTG Day #2

You can forward this invitation to others.


Hello,

Steven Dake invites you to join this WebEx meeting.





Kolla PTG Day #2

Saturday, February 18, 2017

9:00 am  |  Eastern Standard Time (New York, GMT-05:00)  |  8 hrs





Join WebEx meeting 



Meeting number:

200 758 630

Meeting password:

OpenStackKolla (67367822 from phones)


Join from a video conferencing system or application

Dial 200758...@cisco.webex.com

From the Cisco internal network, dial *267* and the 9-digit meeting number. If 
you are the host, enter your PIN when prompted.





Join by phone

+1-866-432-9903 Call-in toll-free number (US/Canada)

+1-408-525-6800 Call-in toll number (US/Canada)

Access code: 200 758 630

Global call-in numbers  |  Toll-free calling restrictions





Add this meeting to your calendar. (Cannot add from mobile devices.)





Can't join the meeting? Contact support.





IMPORTANT NOTICE: Please note that this WebEx service allows audio and other 
information sent during the session to be recorded, which may be discoverable 
in a legal matter. By joining this session, you automatically consent to such 
recordings. If you do not consent to being recorded, discuss your concerns with 
the host or do not join the session.






WebEx_Meeting.ics
Description: WebEx_Meeting.ics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] PTG Day #1 Webex remote participation

2017-02-17 Thread Steven Dake (stdake)
Hey folks,

To take some load off Michal I’ve set up remote participation for day #1 of the 
PTG.

Michal had suggested he might set up Zoom instead of WebEx; however, I haven’t 
seen that happen, and several people have asked on IRC how remote participation 
will work at the PTG.  If Michal sets up remote participation in some other 
way, ignore this email (he will follow up with a zoom.us calendar invite).

Note that I feel Google Hangouts is difficult to manage and is a non-starter 
because Google is not available to all of our worldwide participants.

See below for details:



From: Steven Dake 
Reply-To: "Steven Dake (stdake)" 
Date: Saturday, February 18, 2017 at 12:02 AM
To: "Steven Dake (stdake)" 
Subject: (Forward to others) WebEx meeting invitation: Kolla PTG Day #1

You can forward this invitation to others.


Hello,

Steven Dake invites you to join this WebEx meeting.





Kolla PTG Day #1

Monday, February 20, 2017

9:00 am  |  Eastern Standard Time (New York, GMT-05:00)  |  6 hrs





Join WebEx meeting 



Meeting number:

204 403 514

Meeting password:

OpenStackKolla (67367822 from phones)


Join from a video conferencing system or application

Dial 204403...@cisco.webex.com

From the Cisco internal network, dial *267* and the 9-digit meeting number. If 
you are the host, enter your PIN when prompted.





Join by phone

+1-866-432-9903 Call-in toll-free number (US/Canada)

+1-408-525-6800 Call-in toll number (US/Canada)

Access code: 204 403 514

Global call-in numbers  |  Toll-free calling restrictions





Add this meeting to your calendar. (Cannot add from mobile devices.)





Can't join the meeting? Contact support.





IMPORTANT NOTICE: Please note that this WebEx service allows audio and other 
information sent during the session to be recorded, which may be discoverable 
in a legal matter. By joining this session, you automatically consent to such 
recordings. If you do not consent to being recorded, discuss your concerns with 
the host or do not join the session.






WebEx_Meeting.ics
Description: WebEx_Meeting.ics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The end of OpenStack packages in Debian?

2017-02-17 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2017-02-17 01:54:55 +0100:
> On 02/16/2017 05:55 PM, Clint Byrum wrote:
> > Excerpts from Thomas Goirand's message of 2017-02-15 13:43:46 +0100:
> >> All this to say that, unless someone wants to hire me for it (which
> >> would be the best outcome, but I fear this wont happen), or if someone
> >> steps in (this seems unlikely at this point), both the packaging-deb and
> >> the fate of OpenStack packages in Debian are currently compromised.
> >>
> >> I will continue to maintain OpenStack Newton during the lifetime of
> >> Debian Stretch though, but I don't plan on doing anything more. This
> >> means that maybe, Newton will be the last release of OpenStack in
> >> Debian. If things continue this way, I probably will ask for the removal
> >> of all OpenStack packages from Debian Sid after Stretch gets released
> >> (unless I know that someone will do the work).
> >>
> > 
> > Thomas, thanks for all your hard work. I hope you can return to it soon
> > and that this serves as a notice that we need more investment to keep
> > OpenStack viable in Debian and Ubuntu.
> 
> Yeah, thanks!
> 
> > 
> > Can I propose that we start to move some of the libraries and things
> > like OpenStack Client into DPMT/DPAT?  They don't require constant
> > attention, and it will be helpful to have a larger team assisting in
> > the packaging where it doesn't require anything OpenStack specific to
> > keep moving forward.
> 
> 3rd party libs, maybe. Things like OpenStack client and such may not be
> good candidates, as they interact too much with oslo stuff. Also,
> pushing these packages to a different team won't give you more
> contributors. Last, DPMT/DPAT insists on using git-dpm, which is horrible,
> while I've successfully pushed all of the packaging into the OpenStack
> CI, which checks the build on every commit. I consider it way safer
> and nicer than just the "stupid" Git on Alioth. If you want packages to
> go back to Alioth, at least someone has to set up a Jenkins the way I
> used to, so that packages are built on git push. I don't have such an
> infrastructure at my disposal anymore; Mirantis is even destroying the
> server I used to work on (at this point in time, the FTP data may even
> already be lost for the older releases...).
> 

Indeed, DPMT uses all the worst choices for maintaining most of the
python module packages in Debian. However, something will need to be
done to spread the load of maintaining the essential libraries, and the
usual answer to that for Python libraries is DPMT.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova_powervm 4.0.0.0rc2 (ocata)

2017-02-17 Thread no-reply

Hello everyone,

A new release candidate for nova_powervm for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/nova-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/ocata

Release notes for nova_powervm can be found at:

http://docs.openstack.org/releasenotes/nova_powervm/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Terry Wilson
+1

On Feb 17, 2017 1:22 PM, "Kevin Benton"  wrote:

> Hi all,
>
> I'm organizing a Neutron social event for Thursday evening in Atlanta
> somewhere near the venue for dinner/drinks. If you're interested, please
> reply to this email with a "+1" so I can get a general count for a
> reservation.
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Choose a project mascot

2017-02-17 Thread Hongbin Lu
Hi all,

Thanks for the inputs. Aggregating the feedback from different sources, the 
choices are as below:
* Barrel
* Storks
* Falcon (I am not sure about this one, since another team already chose Hawk)
* Dolphins
* Tiger

We will make a decision at the next team meeting.

Best regards,
Hongbin

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: February-16-17 10:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Choose a project mascot

I was thinking about falcon(light, powerful and fast), or dolphins or tiger.

On Wed, Feb 15, 2017 at 12:29 AM, Hongbin Lu 
> wrote:
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as follows:
* We choose a mascot from the natural world, which can be an animal (e.g. fish, 
bird), a natural feature (e.g. waterfall), or another natural element (e.g. flame).
* Once we choose a mascot, I communicate the choice with OpenStack foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to represent our team. All, any ideas for the mascot 
choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-17 Thread Steven Dake
On Thu, Feb 16, 2017 at 11:24 AM, Joshua Harlow 
wrote:

> Alex Schultz wrote:
>
>> On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
>>
>>> On Feb 16, 2017, at 10:07 AM, Doug Hellmann
>>> wrote:
>>>
>>> When we signed off on the Big Tent changes we said competition
 between projects was desirable, and that deployers and contributors
 would make choices based on the work being done in those competing
 projects. Basically, the market would decide on the "optimal"
 solution. It's a hard message to hear, but that seems to be what
 is happening.

>>> This.
>>>
>>> We got much better at adding new things to OpenStack. We need to get
>>> better at letting go of old things.
>>>
>>> -- Ed Leafe
>>>
>>>
>>>
>>>
>> I agree that the market will dictate what continues to survive, but if
>> you're not careful you may be speeding up the decline, as the end user
>> (deployer/operator/cloud consumer) will switch completely to something
>> else because it becomes too difficult to continue to consume via what
>> used to be there and no longer is.  I thought the whole point was to
>> not have vendor lock-in.  Honestly, I think the focus is too much on
>> the development and not enough on the consumption of the development
>> output.  What is the point of all these features if no one can
>> actually consume them?
>>
>>
> +1 to that.
>
> I've been in the boat of development and consumption of it for my *whole*
> journey in openstack land and I can say the product as a whole seems
> 'underbaked' with regards to the way people consume the development output.
> It seems we have focused on how to do the dev. stuff nicely and a nice
> process there, but sort of forgotten about all that being quite useless if
> no one can consume them (without going through much pain or paying a
> vendor).
>
> This has IMHO been a factor in why certain companies (and the
> people they support) are exiting openstack and just going elsewhere.
>
> I personally don't believe fixing this is "let the market forces figure
> it out for us" (what a slow & horrible way to let this play out; I'd almost
> rather go pull my fingernails out). I do believe it will require making
> opinionated decisions, which we have never been very good at.
>
>
I understand Samuel's situation and understand that free market
capitalism, as Doug mentioned, appears to be how OpenStack has operated until
today.  For most of my life I was an ardent free market capitalist.  I have
heard many pundits on the news, blog posts, financial spam, etc. say free
market capitalism is the best system humankind has found for managing the
flow of resources to people (in this thread's case, the flow of contributors
to Chef).  Unfortunately, this form of capitalism has resulted in all sorts
of disparity in terms of education, income, freedom, and many other aspects
of our society (which, translated into technical components, might be what we
see in the diversion of resources to other tools such as Ansible).  I would
pick on soda manufacturers in the US now for their usage of HFCS rather
than pure cane sugar in soda; however, you can hear me rant about that at
the PTG.

OpenStack is an experiment in governance.  Part of that experiment was the
Big Tent, which, unlike a circus, was meant to encompass everyone's
political and technical viewpoints to arrive at harmonious working
relationships among the community.  This choice was excellent; however, it
has reinforced a capitalist approach to developing and delivering
OpenStack.

There is, however, always room for improvement in any system.  I'm not
suggesting we can live in some magical Star Trek universe where nobody
suffers and resources are endless via a replicator.

I am suggesting we can make improvements to our governance to solve some of
these problems by applying the approaches of "Conscious Capitalism", the
credo of which is outlined here [1].  How we would go about applying these
approaches to OpenStack's governance process, I am unclear on.  The few
companies that have adopted this "movement" are clearly
improving the human experience for everyone involved, not just a limited
subset of blessed individuals.  I have only studied this subject for less
than 20 hours, but it seems like a big improvement on a free-market
capitalist system and something the entire OpenStack ecosystem should
examine.

[1]  https://www.consciouscapitalism.org/about/credo

Warm Regards,
-steve


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread reedip banerjee
+1 :)

On Sat, Feb 18, 2017 at 4:42 AM, MCCASLAND, TREVOR  wrote:

> +1
>
>
>
> *From:* Kevin Benton [mailto:ke...@benton.pub]
> *Sent:* Friday, February 17, 2017 1:19 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron] - Neutron team social in Atlanta on
> Thursday
>
>
>
> Hi all,
>
>
>
> I'm organizing a Neutron social event for Thursday evening in Atlanta
> somewhere near the venue for dinner/drinks. If you're interested, please
> reply to this email with a "+1" so I can get a general count for a
> reservation.
>
>
>
> Cheers,
>
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] congress 5.0.0.0rc2 (ocata)

2017-02-17 Thread no-reply

Hello everyone,

A new release candidate for congress for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/congress/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/congress/log/?h=stable/ocata

Release notes for congress can be found at:

http://docs.openstack.org/releasenotes/congress/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [architecture][nova][neutron][cinder][ceilometer][ironic] PTG Arch-WG

2017-02-17 Thread Clint Byrum
I've been told that the original subject of this broke some mail
clients. Hopefully this gets past their brokenness and you can all see
the content now. :)

Excerpts from Clint Byrum's message of 2017-02-17 10:16:05 -0800:
> Hello, I'm looking forward to seeing many of you next week in Atlanta.
> We're going to be working on Arch-WG topics all day Tuesday, and if
> you'd like to join us for that in general, please add your topic here:
> 
> https://etherpad.openstack.org/p/ptg-architecture-workgroup
> 
> I specifically want to call out an important discussion session for one
> of our active work streams, nova-compute-api:
> 
> https://review.openstack.org/411527
> https://review.openstack.org/43
> 
> At this point, we've gotten a ton of information from various
> contributors, and I want to thank everyone who commented on 411527 with
> helpful data. I'll be compiling the data we have into some bullet points
> which I intend to share on the projector in an etherpad[1], and then invite
> the room to ensure the accuracy and completeness of what we have there.
> I grabbed two 30-minute slots in Macon for Tuesday to do this, and I'd
> like to invite anyone who has thoughts on how nova-compute interacts to
> join us and participate. If you will not be able to attend, please read
> the documents and comments in the reviews above and fill in any information
> you think is missing on the etherpad[1] so we can address it there.
> 
> [1] https://etherpad.openstack.org/p/arch-wg-nova-compute-api-ptg-pike
> 
> Once we have this data, I'll likely spend a small amount of time grabbing 
> people from
> each relevant project team on Wednesday/Thursday to get a deeper 
> understanding of some
> of the pieces that we talk about on Tuesday.
> 
> From that, as a group we'll produce a detailed analysis of all the ways
> nova-compute is interacted with today, and ongoing efforts to change
> them. If you are interested in this please do raise your hand and come
> to our meetings[2] as my time to work on this is limited, and the idea
> for the Arch-WG isn't "Arch-WG solves OpenStack" but "Arch-WG provides
> a structure by which teams can raise understanding of architecture."
> 
> [2] https://wiki.openstack.org/wiki/Meetings/Arch-WG
> 
> Once we've produced that analysis, which we intend to land as a document
> in our arch-wg repository, we'll produce a set of specs in the appropriate
> places (likely openstack-specs) for how to get it to where we want to
> go.
> 
> Also, speaking of the meeting -- Since we'll all be meeting on Tuesday
> at the PTG, the meeting for next week is cancelled.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] Reminder Boston Summit voting ends next week

2017-02-17 Thread Michael Johnson

Voting for presentations closes next Tuesday/Wednesday (TUESDAY, FEBRUARY 21
AT 11:59PM PST / WEDNESDAY, FEBRUARY 22 AT 6:59AM UTC).

If you have not already voted for sessions of interest, please do here:
https://www.openstack.org/summit/boston-2017/vote-for-speakers

Somehow this announcement e-mail got grouped under the [OpenStack Marketing]
tag for me, so I didn't see it in time to mention it in the meeting announcements.

Michael



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-17 Thread Clint Byrum
Excerpts from Michael Still's message of 2017-02-18 09:41:03 +1100:
> We have had this discussion several times in the past for other reasons.
> The reality is that some people will never deploy the metadata API, so I
> feel like we need a better solution than what we have now.
> 
> However, I would consider it probably unsafe for the hypervisor to read the
> current config drive to get values, and persisting things like the instance
> root password in the Nova DB sounds like a bad idea too.
> 

Agreed. What if we simply have a second config drive that is for "things
that change" and only rebuild that one on reboot?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all] Ocata release candidates frozen

2017-02-17 Thread Doug Hellmann
Excerpts from Eric K's message of 2017-02-17 15:18:46 -0800:
> Hi all,
> 
> I'd like to request an exception to release Congress RC2. I'm really sorry
> that we got bogged down by a tricky, critical bug that we didn't manage to
> root cause and patch until the very last minute. I replied to Doug earlier
> about it, but neglected to reply to the list.
> 
> Here's the release request in question:
> https://review.openstack.org/#/c/435551/
> 
> Thanks so much for considering the request.

Given that the bug results in data loss, I think it clearly qualifies
for an exception. I've approved the new release candidate.

Doug

> 
> Eric
> 
> On 2/17/17, 8:08 AM, "Doug Hellmann"  wrote:
> 
> >Later today we will be entering the freeze period between the release
> >candidates and the final release next Wednesday. We have a couple
> >of releases in progress now for senlin and python-magnumclient, but
> >after those are completed we will not be releasing anything until
> >after the PTG.
> >
> >I hope to see you in Atlanta!
> >
> >Doug
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova Bug Team Coordinator for Pike

2017-02-17 Thread Augustina Ragwitz
As I announced in our last bug team meeting, I will be stepping down
from the Bug Coordinator role. Since taking on the Bug Team Coordinator
role, I was laid off from the position where I was able to focus on
upstream OpenStack 100% of the time. I've had less time for upstream
Nova work in my new position, although I've tried to keep the Bug Team
meetings going and to be a resource for interested folks.

I've appointed Maciej Szankin (macsz on irc) as the new Nova Bug Team
Coordinator for the Pike release. Maciej has been a regular meeting
attendee and bugs team participant. Also he'll be attending the PTG next
week!

Thanks to everyone who attended the bug meetings and has been helping
out!

-- 
Augustina Ragwitz
Señora Software Engineer
---
Waiting for your change to get through the gate? Clean up some Nova
bugs!
http://45.55.105.55:8082/bugs-dashboard.html
---
email: aragwitz+n...@pobox.com
irc: auggy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all] Ocata release candidates frozen

2017-02-17 Thread Davanum Srinivas
Eric,

No worries. thanks for the heads up. +1 to the exception

-- Dims

On Fri, Feb 17, 2017 at 6:18 PM, Eric K  wrote:
> Hi all,
>
> I'd like to request an exception to release Congress RC2. I'm really sorry
> that we got bogged down by a tricky, critical bug that we didn't manage to
> root cause and patch until the very last minute. I replied to Doug earlier
> about it, but neglected to reply to the list.
>
> Here's the release request in question:
> https://review.openstack.org/#/c/435551/
>
> Thanks so much for considering the request.
>
> Eric
>
> On 2/17/17, 8:08 AM, "Doug Hellmann"  wrote:
>
>>Later today we will be entering the freeze period between the release
>>candidates and the final release next Wednesday. We have a couple
>>of releases in progress now for senlin and python-magnumclient, but
>>after those are completed we will not be releasing anything until
>>after the PTG.
>>
>>I hope to see you in Atlanta!
>>
>>Doug
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread John Villalovos
+1 to both Vasyl and Mario.

Hopefully Deva will be able to come back again to Ironic in the future.

On Fri, Feb 17, 2017 at 10:42 AM, Julia Kreger 
wrote:

> Thank you Dmitry!
>
> I’m +1 to all of these actions. Vasyl and Mario will be great additions.
> As for Devananda, it saddens me but I agree and I hope to work with him
> again in the future.
>
> -Julia
>
> > On Feb 17, 2017, at 4:40 AM, Dmitry Tantsur  wrote:
> >
> > Hi all!
> >
> > I'd like to propose a few changes based on the recent contributor
> activity.
> >
> > I have two candidates that look very good and pass the formal barrier of
> 3 reviews a day on average [1].
> >
> > First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats
> [2] are high, he's doing a lot of extremely useful work around networking
> and CI.
> >
> > Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he
> has been doing some quality reviews for critical patches in the Ocata cycle.
> >
> > Active cores and interested contributors, please respond with your +-1
> to these suggestions.
> >
> > Unfortunately, there is one removal as well. Devananda, our team leader
> for several cycles since the very beginning of the project, has not been
> active on the project for some time [4]. I propose to (hopefully temporary)
> remove him from the core team. Of course, when (look, I'm not even saying
> "if"!) he comes back to active reviewing, I suggest we fast-forward him
> back. Thanks for everything Deva, good luck with your current challenges!
> >
> > Thanks,
> > Dmitry
> >
> > [1] http://stackalytics.com/report/contribution/ironic-group/90
> > [2] http://stackalytics.com/?user_id=vsaienko=marks
> > [3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
> > [4] http://stackalytics.com/?user_id=devananda=marks
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][all] Ocata release candidates frozen

2017-02-17 Thread Eric K
Hi all,

I'd like to request an exception to release Congress RC2. I'm really sorry
that we got bogged down by a tricky, critical bug that we didn't manage to
root cause and patch until the very last minute. I replied to Doug earlier
about it, but neglected to reply to the list.

Here's the release request in question:
https://review.openstack.org/#/c/435551/

Thanks so much for considering the request.

Eric

On 2/17/17, 8:08 AM, "Doug Hellmann"  wrote:

>Later today we will be entering the freeze period between the release
>candidates and the final release next Wednesday. We have a couple
>of releases in progress now for senlin and python-magnumclient, but
>after those are completed we will not be releasing anything until
>after the PTG.
>
>I hope to see you in Atlanta!
>
>Doug
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread MCCASLAND, TREVOR
+1

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, February 17, 2017 1:19 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - Team photo

2017-02-17 Thread Kevin Benton
Hello!

Is everyone free Thursday at 11:20AM (right before lunch break) for 10
minutes for a group photo?

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-17 Thread Michael Still
We have had this discussion several times in the past for other reasons.
The reality is that some people will never deploy the metadata API, so I
feel like we need a better solution than what we have now.

However, I would consider it probably unsafe for the hypervisor to read the
current config drive to get values, and persisting things like the instance
root password in the Nova DB sounds like a bad idea too.

Michael




On Feb 18, 2017 6:29 AM, "Artom Lifshitz"  wrote:

Early on in the inception of device role tagging, it was decided that
it's acceptable that the device metadata on the config drive lags
behind the metadata API, as long as it eventually catches up, for
example when the instance is rebooted and we get a chance to
regenerate the config drive.

So far this hasn't really been a problem because devices could only be
tagged at instance boot time, and the tags never changed. So the
config drive was pretty much always up to date.

In Pike the tagged device attachment series of patches [1] will
hopefully merge, and we'll be in a situation where device tags can
change during instance uptime, which makes it that much more important
to regenerate the config drive whenever we get a chance.

However, when the config drive is first generated, some of the
information stored in there is only available at instance boot time
and is not persisted anywhere, as far as I can tell. Specifically, the
injected_files and admin_pass parameters [2] are passed from the API
and are not stored anywhere.

This creates a problem when we want to regenerate the config drive,
because the information that we're supposed to put in it is no longer
available to us.

We could start persisting this information in instance_extra, for
example, and pulling it up when the config drive is regenerated. We
could even conceivably hack something to read the metadata files from
the "old" config drive before refreshing them with new information.
However, is that really worth it? I feel like saying "the config drive
is static, deal with it - if you want up-to-date metadata, use the
API" is an equally, if not more, valid option.

Thoughts? I know y'all are flying out to the PTG, so I'm unlikely to
get responses, but I've at least put my thoughts into writing, and
will be able to refer to them later on :)

[1] https://review.openstack.org/#/q/status:open+topic:bp/virt-device-tagged-attach-detach
[2] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2667-L2672

--
Artom Lifshitz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Michael Johnson
+1

 

Thanks for setting this up,

 

Michael

 

From: Kevin Benton [mailto:ke...@benton.pub] 
Sent: Friday, February 17, 2017 11:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

 

Hi all,

 

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

 

Cheers,

Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Dariusz Śmigiel
+2

sent from phone
Darek

On Feb 17, 2017 15:33, "Ihar Hrachyshka"  wrote:

> +1.
>
> On Fri, Feb 17, 2017 at 11:18 AM, Kevin Benton  wrote:
> > Hi all,
> >
> > I'm organizing a Neutron social event for Thursday evening in Atlanta
> > somewhere near the venue for dinner/drinks. If you're interested, please
> > reply to this email with a "+1" so I can get a general count for a
> > reservation.
> >
> > Cheers,
> > Kevin Benton
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Interop-wg] Interop/RefStack PTG Agenda

2017-02-17 Thread Egle Sigler
Hello Everyone,

If you are in Atlanta next week and care about interop, please join us.
Schedule: https://etherpad.openstack.org/p/RefStackInteropWGAtlantaPTG
Thank you,
Egle

On 2/17/17, 3:20 PM, "Mark Voelker"  wrote:

>Hi Folks,
>
>We're looking forward to seeing many of you in Atlanta next week!  Please
>take a look at the agenda below:
>
>https://etherpad.openstack.org/p/RefStackInteropWGAtlantaPTG
>
>If you have any last-minute suggestions or changes, please let us know
>ASAP.  Otherwise, safe travels!
>
>At Your Service,
>
>Mark T. Voelker
>___
>Interop-wg mailing list
>interop...@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/interop-wg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] my work on Debian and non-x86 architectures

2017-02-17 Thread Marcin Juszkiewicz
On 17.02.2017 at 13:19, Marcin Juszkiewicz wrote:
> On 17.02.2017 at 12:47, Marcin Juszkiewicz wrote:
>> As you know I added support for non-x86 architectures: aarch64 and
>> ppc64le. Also resurrected Debian support.
> 
> Forgot two things:
> 
> Blueprint:
> https://blueprints.launchpad.net/kolla/+spec/multiarch-and-arm64-containers
> 
> Logs: http://people.linaro.org/~marcin.juszkiewicz/kolla/ (updated every
> few minutes)

Current stats:

x86-64 (finished):
centos-binary:  165
centos-source:  207
debian-binary:  141
debian-source:  196
ubuntu-binary:  160
ubuntu-source:  208

ppc64le (finished):
debian-binary:  134
debian-source:  184
ubuntu-binary:  147
ubuntu-source:  191

aarch64 (still building):
debian-binary:  124
debian-source:  114





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Ihar Hrachyshka
+1.

On Fri, Feb 17, 2017 at 11:18 AM, Kevin Benton  wrote:
> Hi all,
>
> I'm organizing a Neutron social event for Thursday evening in Atlanta
> somewhere near the venue for dinner/drinks. If you're interested, please
> reply to this email with a "+1" so I can get a general count for a
> reservation.
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Thomas Herve
On Fri, Feb 17, 2017 at 5:48 PM, Ian Cordasco  wrote:
> -Original Message-
> From: Thomas Herve 
[snip]
>> Respecting the guidelines is totally fair, but review stats won't tell
>> you much, at least in my case: I barely do any stable reviews because
>> I don't have approve rights. In the case of Heat, 90% of the backports
>> are without conflicts, so stable reviews are just about verifying the
>> guidelines and that the patch matches what's in master.
>>
>> But, I've been working on Heat for 4 years, I made about 1400 reviews
>> on it, and I've been PTL. And the same for the other people that Zane
>> mentioned. I feel we should be trusted on stable branches.
>
> That seems like a very poor excuse - "I can't approve so I don't
> review". I'm a stable maintenance core because I was reviewing stable
> branch changes first. I had a good track record, and both the existing
> Glance stable maint core reviewers and the global team agreed I had
> displayed sound judgment for those.

It's not an excuse; I'm explaining why I don't do many stable reviews.
My time is valuable, and I don't do all the reviews that I could on
master already. I'd rather spend review time where I can move the
needle, instead of on patches where ultimately it won't matter. If I
see a stable patch which doesn't make sense, I'll comment, but that's
very rare. On others, if it looks fine I don't do anything, because
most contributors (on Heat at least) already made the effort to think about
whether their change was backport-worthy. And I don't chase stats.

> Without being able to assess the quality of your reviews, how should
> anyone else trust you with the stability of those branches?

You can assess the quality of my reviews on master. I don't see how
stable is so different. We can't break APIs, we can't change the DB
randomly, we can't break compatibility. The pain points (dependencies,
config options, features) are most of the time easy to spot.

(Also, Matt mentioned review stats. I could have 200 stable +1, that
would maybe look nice on paper, but not prove anything, if there is
anything to prove).

>> > There are reviewstats tools for seeing the stable review numbers for Heat, 
>> > I
>> > haven't run that though to check against those proposed above, but it's
>> > probably something I'd do first before just adding a bunch of people.
>>
>> I appreciate your guidance and input, but shouldn't we decide our
>> stable maintainers, the same way we decide cores? The current list
>> contains at least one person that doesn't contribute anymore, so it's
>> not like it's super curated.
>
> This is how every other service team works (Nova, Keystone, Glance,
> etc.). Just because the global stable maint team hasn't removed an
> inactive person doesn't invalidate their assessment of potential core
> reviewers.

Saying that that's how others work is not a fantastic argument. We'd need to
know whether it actually works for them.

At any rate, it's a matter of trust, a subject that comes up from time to
time, and it's fairly divisive. In this case though, I find it ironic
that I can approve whatever garbage I want on master, and it can make its
way into a release, but if I want a bugfix backported into another
branch, someone else has to supervise me.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Robert Kukura

+1


On 2/17/17 2:18 PM, Kevin Benton wrote:

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta 
somewhere near the venue for dinner/drinks. If you're interested, 
please reply to this email with a "+1" so I can get a general count 
for a reservation.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Das, Anindita
+1

--Anindita Das (irc: dasanind)

From: "Vasudevan, Swaminathan (PNB Roseville)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, February 17, 2017 at 2:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on 
Thursday

Count me in.

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, February 17, 2017 11:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Vasudevan, Swaminathan (PNB Roseville)
Count me in.

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, February 17, 2017 11:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] PTG cross team session

2017-02-17 Thread Kevin Benton
We should use this session to discuss the issues at a high level to agree
on a direction, and then we can break into smaller groups for discussion
later in the week.

On Fri, Feb 17, 2017 at 3:57 AM, Dmitry Tantsur  wrote:

> Thanks! I wonder if 1 hour is actually enough though, given the complexity
> of the problem (actually three problems already proposed for discussion in
> the etherpad). I'd personally double it (at least).
>
> On 02/17/2017 10:16 AM, Kevin Benton wrote:
>
>> Hi,
>>
>> I added a slot on the calendar to get a room from 2:30-3:30PM on Tuesday
>> in the
>> Macon room.[1] Let me know if anyone has any conflicts with this.
>>
>> 1. https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms
>>
>> On Thu, Feb 16, 2017 at 8:25 AM, Vasyl Saienko wrote:
>>
>> Hello Ironic/Neutron teams,
>>
>>
>> The Ironic team would like to schedule a cross-project session with the Neutron team on
>> Mon -
>> Tues, except for Tue 9:30 - 10:00.
>> The topics we would like to discuss are added
>> to: https://etherpad.openstack.org/p/neutron-ptg-pike
>>  L151
>>
>>
>> Sincerely,
>> Vasyl Saienko
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2017-02-17 Thread Alex Schultz
Top posting this thread because we're entering the Pike cycle.  So as
we enter Pike, we are officially dropping support for Puppet 3.  We
managed to not introduce any Puppet 4-only requirements for the Puppet
OpenStack modules during the Ocata cycle.  The Ocata modules [0] are
officially the last release in which Puppet 3 is supported. Please be aware
that we will be removing the Puppet 3 CI for all the modules from Pike
onward and are officially dropping Puppet 3 support, as it was EOL on
December 31, 2016.

Thanks,
-Alex

[0] 
https://docs.openstack.org/developer/puppet-openstack-guide/releases.html#releases-summary

On Fri, Nov 11, 2016 at 2:11 PM, Alex Schultz  wrote:
> On Thu, Nov 3, 2016 at 11:31 PM, Sam Morrison  wrote:
>>
>> On 4 Nov. 2016, at 1:33 pm, Emilien Macchi  wrote:
>>
>> On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison  wrote:
>>
>> Wow I didn’t realise puppet3 was being deprecated, is anyone actually using
>> puppet4?
>>
>> I would hope that the openstack puppet modules would support puppet3 for a
>> while still, at least until the next ubuntu LTS is out; otherwise we would get to
>> the stage where the openstack release supports Xenial but the corresponding
>> puppet module would not? (Xenial has puppet3)
>>
>>
>> I'm afraid we made a lot of communications around it but you might
>> have missed it, no problem.
>> I have 3 questions for you:
>> - for what reasons would you not upgrade puppet?
>>
>>
>> Because I’m a time poor operator with more important stuff to upgrade :-)
>> Upgrading puppet *could* be a big task and something we haven’t had time to
>> look into. Don’t follow along with puppetlabs so didn’t realise puppet3 was
>> being deprecated. Now that this has come to my attention we’ll look into it
>> for sure.
>>
>> - would it be possible for you to use puppetlabs packaging if you need
>> puppet4 on Xenial? (that's what upstream CI is using, and it works
>> quite well).
>>
>>
>> OK thats promising, good to know that the CI is using puppet4. It’s all my
>> other dodgy puppet code I’m worried about.
>>
>> - what version of the modules do you deploy? (and therefore what
>> version of OpenStack)
>>
>>
>> We’re using a mixture of newton/mitaka/liberty/kilo, sometimes the puppet
>> module version is newer than the openstack version too depending on where
>> we’re at in the upgrade process of the particular openstack project.
>>
>> I understand progress must go on, I am interested though in how many
>> operators use puppet4. We may be in the minority and then I’ll be quiet :-)
>>
>> Maybe it should be deprecated in one release and then dropped in the next?
>>
>
> So this has been talked about for a while and we have attempted to
> gauge the 3/4 over the last year or so.  Unfortunately with the
> upstream modules also dropping 3 support, we're kind of stuck
> following their lead. We recently got nailed when the puppetlabs-ntp
> module finally became puppet 3 incompatible and we had to finally pin
> to an older version.  That being said we can try and hold off any
> possible incompatibilities in our modules until either late in this
> cycle or maybe until the start of the next cycle.  We will have
> several milestone releases for Ocata that will still be puppet 3
> compatible (one being next week) so that might be an option as well.
> I understand the extra work this may cause, which is why we're trying
> to give as much advance notice as possible.  In the current forecast
> I don't see any work that will make our modules puppet 3 incompatible,
> but we're also at the mercy of the community at large.  We will
> definitely drop puppet 3 at the start of Pike if we manage to make it
> through Ocata without any required changes.  I think it'll be more
> evident early next year after the puppet 3 EOL finally hits.
>
> Thanks,
> -Alex
>
>>
>> Cheers,
>> Sam
>>
>>
>>
>>
>>
>>
>> My guess is that this would also be the case for RedHat and other distros
>> too.
>>
>>
>> Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
>> and CentOS7.
>>
>> Thoughts?
>>
>>
>>
>> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
>>
>> Hey everyone,
>>
>> Puppet 3 is reaching it's end of life at the end of this year[0].
>> Because of this we are planning on dropping official puppet 3 support
>> as part of the Ocata cycle.  While we currently are not planning on
>> doing any large scale conversion of code over to puppet 4 only syntax,
>> we may allow some minor things in that could break backwards
>> compatibility.  Based on feedback we've received, it seems that most
>> people who may still be using puppet 3 are using older (< Newton)
>> versions of the modules.  These modules will continue to be puppet 3.x
>> compatible but we're using Ocata as the version where Puppet 4 should
>> be the target version.
>>
>> If anyone has any concerns or issues around this, please let us know.
>>
>> Thanks,
>> -Alex
>>
>> [0] 

[openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-17 Thread Artom Lifshitz
Early on in the inception of device role tagging, it was decided that
it's acceptable that the device metadata on the config drive lags
behind the metadata API, as long as it eventually catches up, for
example when the instance is rebooted and we get a chance to
regenerate the config drive.

So far this hasn't really been a problem because devices could only be
tagged at instance boot time, and the tags never changed. So the
config drive was pretty much always up to date.

In Pike the tagged device attachment series of patches [1] will
hopefully merge, and we'll be in a situation where device tags can
change during instance uptime, which makes it that much more important
to regenerate the config drive whenever we get a chance.

However, when the config drive is first generated, some of the
information stored in there is only available at instance boot time
and is not persisted anywhere, as far as I can tell. Specifically, the
injected_files and admin_pass parameters [2] are passed from the API
and are not stored anywhere.

This creates a problem when we want to regenerate the config drive,
because the information that we're supposed to put in it is no longer
available to us.

We could start persisting this information in instance_extra, for
example, and pulling it up when the config drive is regenerated. We
could even conceivably hack something to read the metadata files from
the "old" config drive before refreshing them with new information.
However, is that really worth it? I feel like saying "the config drive
is static, deal with it - if you want up-to-date metadata, use the
API" is an equally, if not more, valid option.

Thoughts? I know y'all are flying out to the PTG, so I'm unlikely to
get responses, but I've at least put my thoughts into writing, and
will be able to refer to them later on :)

[1] 
https://review.openstack.org/#/q/status:open+topic:bp/virt-device-tagged-attach-detach
[2] 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2667-L2672

--
Artom Lifshitz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] - nova/neutron cross project on Tuesday

2017-02-17 Thread Kevin Benton
Hi all,

I've booked us an hour slot on Tuesday from 3:30PM to 4:30PM in the Macon
room for some Nova/Neutron cross project discussions. We can use this to
discuss high-level goals and then people can make plans to meet up in
smaller groups later in the week to discuss specifics about each goal.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Kevin Benton
Hi all,

I'm organizing a Neutron social event for Thursday evening in Atlanta
somewhere near the venue for dinner/drinks. If you're interested, please
reply to this email with a "+1" so I can get a general count for a
reservation.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [all] Pike PTG QA Input / Feedback Session

2017-02-17 Thread Andrea Frittoli
Time flies and the PTG is going to start in a few days.

If you've been using OpenStack Health or Stackviz to debug a gate issue, or
if you're building devstack, grenade, and tempest plugins for your projects,
or using any other QA tool, please take a moment to share your experience
in [0], so we can discuss it at the PTG and use it to improve
ourselves :)



On Tue, Feb 14, 2017 at 12:38 AM Andrea Frittoli 
wrote:

> Hi folks,
>
> at the PTG in Atlanta we will schedule a session [0] to collect and
> discuss feedback and
> input from the community on existing QA projects.
> We will use the resulting material in a later session to set priorities of
> the QA team for Pike.
> Note that for Tempest plugins specifically there will be another dedicated
> session [1].
>
> Priorities of the team are not written in stone, but I would like to be
> able to start off in the right direction
> from the beginning of the cycle, and input in the etherpad before the PTG
> would be very beneficial for the QA team.
> Please accompany your input with your name / IRC nick.
>
> If you plan to attend the session and/or would like your input to be
> discussed please make a note on the etherpad.
>
> Thank you!
>
> Andrea
>
> IRC: andreaf
>
> [0] https://etherpad.openstack.org/p/qa-ptg-pike-community-input
> [1] https://etherpad.openstack.org/p/qa-ptg-pike-tempest-plugins
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Julia Kreger
Thank you Dmitry!

I’m +1 to all of these actions. Vasyl and Mario will be great additions.  As 
for Devananda, it saddens me but I agree and I hope to work with him again in 
the future.

-Julia

> On Feb 17, 2017, at 4:40 AM, Dmitry Tantsur  wrote:
> 
> Hi all!
> 
> I'd like to propose a few changes based on the recent contributor activity.
> 
> I have two candidates that look very good and pass the formal barrier of 3 
> reviews a day on average [1].
> 
> First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2] 
> are high, he's doing a lot of extremely useful work around networking and CI.
> 
> Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he has 
> been doing some quality reviews for critical patches in the Ocata cycle.
> 
> Active cores and interested contributors, please respond with your +-1 to 
> these suggestions.
> 
> Unfortunately, there is one removal as well. Devananda, our team leader for 
> several cycles since the very beginning of the project, has not been active 
> on the project for some time [4]. I propose to (hopefully temporary) remove 
> him from the core team. Of course, when (look, I'm not even saying "if"!) he 
> comes back to active reviewing, I suggest we fast-forward him back. Thanks 
> for everything Deva, good luck with your current challenges!
> 
> Thanks,
> Dmitry
> 
> [1] http://stackalytics.com/report/contribution/ironic-group/90
> [2] http://stackalytics.com/?user_id=vsaienko=marks
> [3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
> [4] http://stackalytics.com/?user_id=devananda=marks
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Boston Summit "project updates" track open for self-nominations

2017-02-17 Thread Heidi Joy Tretheway
The Product Work Group and the Foundation have created a new track for the 
Boston Summit dedicated exclusively to project updates. We wanted to give you 
some additional details so that you can speak with your PTLs at the Project 
Teams Gathering and plan to update the community on your efforts. 

Invitations have gone out to the 25 most-adopted OpenStack projects’ Pike PTLs. 
The PTLs can choose to speak solo or can ask core contributors to co-present 
with them (and there are free speaker codes for co-presenters, in case you 
don’t already have a free code from the PTG). 

We have a limited number of spaces available for emerging projects. These are 
short, 20-minute speaking slots that will be recorded and the video will be 
integrated into the OpenStack project navigator at 
openstack.org/software/project-navigator. It's a great way to showcase 
your project and recruit new developers. 

We’re taking self-nominations through March 2. We’ll confirm your speaking 
space during the week of March 6. Just fill out this form (much simpler than 
the CFP) to nominate yourself and/or colleagues to present a project update: 
https://goo.gl/forms/nDqIJ6oks0PTcA8l2 

Please reach out to me if I can answer any questions!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 7)

2017-02-17 Thread Paul Belanger
On Fri, Feb 17, 2017 at 03:39:44PM +0100, Attila Darazs wrote:
> As always, if these topics interest you and you want to contribute to the
> discussion, feel free to join the next meeting:
> 
> Time: Thursdays, 15:30-16:30 UTC
> Place: https://bluejeans.com/4113567798/
> 
> Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
> 
Was this meeting recorded in some manner? I see you are using bluejeans, but
don't see any recordings of the discussion.

Additionally, I am a little sad IRC is not being used for these meetings. Some
of the things tripleo is doing are of interest to me, but I find it difficult to
join a video session for 1 hour just to listen.  With IRC, it is easier for me to
multitask on other things, then come back and review what has been discussed.

> * We discussed the state of the Quickstart-based update/upgrade jobs
> upstream. matbu is working on them and the changes for the jobs are under
> review. Sagi will help with adding project definitions upstream when the
> changes are merged.
> 
> * John started to draft out the details of the CI related PTG sessions[1].
> 
> * A couple of us brought up reviews that they wanted merged. We discussed
> the reasons, and agreed that sometimes an encouraging email to the mailing
> list is the most effective way to move important or slow-to-merge changes
> forward.
> 
> * We talked quite a lot about log collection upstream. Currently Quickstart
> doesn't collect logs exactly as upstream, and that might be okay, as we
> collect more, and hopefully in an easier-to-digest format.
> 
> * However, we might collect too much, and finding one's way around the logs is
> not that easy. So John suggested creating an entry page in HTML for the
> jobs that points to different possible places to find debug output.
> 
Yes, logging was something of an issue this week.  We are still purging data on
logs.o.o, but it does look like quickstart is too aggressive with log
collection. We currently only have 12TB of HDD space for logs.o.o and our
retention policy has dropped from 6 months to 6 weeks.

I believe we are going to have a discussion at the PTG about this for
openstack-infra and implement some changes (caps) for jobs in the near future.
If you are planning on attending the PTG, I encourage you to attend the
discussions.

> * We also discussed adding back debug output to Elasticsearch, as the
> current console output doesn't contain everything; we log a lot of
> deployment output in separate log files in undercloud/home/stack/*.log
> 
> * Migration to the new Quickstart jobs will happen at or close to 10th of
> March, in the beginning of the Pike cycle when the gates are still stable.
> 
> That was all for this week.
> 
> Best regards,
> Attila
> 
> [1] https://etherpad.openstack.org/p/tripleo-ci-roadmap
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [architecture][nova][neutron][cinder][ceilometer][ironic] PTG stuff -- Arch-WG nova-compute-api fact-gathering session Tuesday 10:30 Macon

2017-02-17 Thread Clint Byrum
Hello, I'm looking forward to seeing many of you next week in Atlanta.
We're going to be working on Arch-WG topics all day Tuesday, and if
you'd like to join us for that in general, please add your topic here:

https://etherpad.openstack.org/p/ptg-architecture-workgroup

I specifically want to call out an important discussion session for one
of our active work streams, nova-compute-api:

https://review.openstack.org/411527
https://review.openstack.org/43

At this point, we've gotten a ton of information from various
contributors, and I want to thank everyone who commented on 411527 with
helpful data. I'll be compiling the data we have into some bullet points
which I intend to share on the projector in an etherpad[1], and then invite
the room to ensure the accuracy and completeness of what we have there.
I grabbed two 30-minute slots in Macon for Tuesday to do this, and I'd
like to invite anyone who has thoughts on how nova-compute interacts to
join us and participate. If you will not be able to attend, please read
the documents and comments in the reviews above and fill in any information
you think is missing on the etherpad[1] so we can address it there.

[1] https://etherpad.openstack.org/p/arch-wg-nova-compute-api-ptg-pike

Once we have this data, I'll likely spend a small amount of time grabbing 
people from
each relevant project team on Wednesday/Thursday to get a deeper understanding 
of some
of the pieces that we talk about on Tuesday.

From that, as a group we'll produce a detailed analysis of all the ways
nova-compute is interacted with today, and ongoing efforts to change
them. If you are interested in this please do raise your hand and come
to our meetings[2] as my time to work on this is limited, and the idea
for the Arch-WG isn't "Arch-WG solves OpenStack" but "Arch-WG provides
a structure by which teams can raise understanding of architecture."

[2] https://wiki.openstack.org/wiki/Meetings/Arch-WG

Once we've produced that analysis, which we intend to land as a document
in our arch-wg repository, we'll produce a set of specs in the appropriate
places (likely openstack-specs) for how to get it to where we want to
go.

Also, speaking of the meeting -- Since we'll all be meeting on Tuesday
at the PTG, the meeting for next week is cancelled.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread Lance Bragstad
On Fri, Feb 17, 2017 at 11:22 AM, Clint Byrum  wrote:

> Excerpts from 王玺源's message of 2017-02-17 14:08:30 +:
> > Hi David:
> >
> > We have not found a perfect solution to the fernet performance issue; we
> > will try different crypt strength settings with fernet in the future.
> >
>
> One important thing: did you try throwing more hardware at Keystone?
> Keystone instances are almost entirely immutable (the fernet keys
> are the only mutable part), which makes it pretty easy to scale them
> horizontally as-needed. Your test has a static 3 nodes, but you didn't
> include system status, so we don't know if the CPUs were overwhelmed,
> or how many database nodes you had, what its level of activity was, etc.
>

+1

Several folks in the community have tested token performance using a
variety of hardware and configurations. Sharing your specific setup might
reveal similarities with other environments people have used. If not, then we
at least have an environment description that we can use to experience the
issues you're seeing first-hand.
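
As a rough, illustrative aside (nothing below comes from keystone itself), a
micro-benchmark like this can help separate raw Fernet crypto cost from
keystone's own overhead (database lookups, caching, WSGI); the payload size
is a made-up stand-in for a serialized token:

    import timeit
    from cryptography.fernet import Fernet

    f = Fernet(Fernet.generate_key())
    payload = b"x" * 200  # arbitrary stand-in for a token payload
    token = f.encrypt(payload)

    n = 10000
    enc = timeit.timeit(lambda: f.encrypt(payload), number=n) / n
    dec = timeit.timeit(lambda: f.decrypt(token), number=n) / n
    print("encrypt: %.3f ms, decrypt: %.3f ms" % (enc * 1000, dec * 1000))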


>
> >
> >
> > There are multiple customers with more than a 6-region cascade; how to
> > synchronize keystone data between these regions has troubled us a lot.
> > There is no need to synchronize this data when using the pki token,
> > because the pki token includes the roles information.
> >
>
> The amount of mutable data to synchronize between datacenters with Fernet
> is the fernet keys. If you set up region-local caches, you should be
> able to ship queries back to a central database cluster and not have to
> worry about a painful global database cluster, since you'll only feel
> the latency of those cross-region queries when your caches are cold.
>
> However, I believe work was done to allow local read queries to be sent
> to local slaves, so you can use traditional MySQL replication if the
> cold-cache latency is too painful.
>
> Replication lag becomes a problem if you get a ton of revocation events,
> but this lag's consequences are pretty low, with the worst effect being a
> larger window for stolen, revoked tokens to be used. Caching also keeps
> that window open longer, so it becomes a game of tuning that window
> against desired API latency.
>
>
Good point, Clint. We also merged a patch in Ocata that helped improve
token validation performance, which was not proposed as a stable backport:

https://github.com/openstack/keystone/commit/9e84371461831880ce5736e9888c7d9648e3a77b


> >
> >
> > The pki token has been verified to support such a large-scale
> > production environment, in which even the uuid token has performance
> > issues.
> >
>
> As others have said, the other problems stacked on top of the critical
> security problems in PKI made it very undesirable for the community to
> support. There is, however, nothing preventing you from maintaining it
> out of tree, though I'd hope you would instead collaborate with the
> community to perhaps address those problems and come up with a "PKIv2"
> provider that has the qualities you want for your scale.
>

+1

Having personally maintained a token provider out-of-tree prior to the
refactoring done last release [0], I think the improvements made are
extremely beneficial for cases like this. But, again re-iterating what
Clint said, I would only suggest that if for some reason we couldn't find a
way to get a supported token provider to suit your needs.

We typically have a session dedicated to performance at the PTG, and I have
that tentatively scheduled for Friday morning (11:30 - 12:00) [1].
Otherwise it's usually a topic that comes up during our operator feedback
session, which is scheduled for Wednesday afternoon (1:30 - 2:20). Both are
going to be in the dedicated keystone room (which I'll be advertising when
I know exactly which room that is).


[0]
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
[1] https://etherpad.openstack.org/p/keystone-pike-ptg

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread Clint Byrum
Excerpts from 王玺源's message of 2017-02-17 14:08:30 +:
> Hi David:
> 
> We have not found a perfect solution to the fernet performance issue; we
> will try different crypt strength settings with fernet in the future.
> 

One important thing: did you try throwing more hardware at Keystone?
Keystone instances are almost entirely immutable (the fernet keys
are the only mutable part), which makes it pretty easy to scale them
horizontally as-needed. Your test has a static 3 nodes, but you didn't
include system status, so we don't know if the CPUs were overwhelmed,
or how many database nodes you had, what its level of activity was, etc.

> 
> 
> There are multiple customers with more than a 6-region cascade; how to
> synchronize keystone data between these regions has troubled us a lot. There
> is no need to synchronize this data when using the pki token, because the pki
> token includes the roles information.
> 

The amount of mutable data to synchronize between datacenters with Fernet
is the fernet keys. If you set up region-local caches, you should be
able to ship queries back to a central database cluster and not have to
worry about a painful global database cluster, since you'll only feel
the latency of those cross-region queries when your caches are cold.
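
As an illustrative sketch only (not keystone code), one common operational
pattern is to rotate keys on a single "primary" node and push the key
repository out to the other keystone nodes; the host names below are
placeholders:

    import subprocess

    KEY_REPO = "/etc/keystone/fernet-keys/"  # default key repository
    NODES = ["keystone-r2.example.com", "keystone-r3.example.com"]  # placeholders

    def rotate_and_distribute():
        # Rotate on this node only...
        subprocess.run(["keystone-manage", "fernet_rotate"], check=True)
        # ...then copy the whole repository to every other keystone node.
        for node in NODES:
            subprocess.run(["rsync", "-a", "--delete", KEY_REPO,
                            "%s:%s" % (node, KEY_REPO)], check=True)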

However, I believe work was done to allow local read queries to be sent
to local slaves, so you can use traditional MySQL replication if the
cold-cache latency is too painful.

Replication lag becomes a problem if you get a ton of revocation events,
but this lag's consequences are pretty low, with the worst effect being a
larger window for stolen, revoked tokens to be used. Caching also keeps
that window open longer, so it becomes a game of tuning that window
against desired API latency.

> 
> 
> The pki token has been verified to support such a large-scale production
> environment, in which even the uuid token has performance issues.
> 

As others have said, the other problems stacked on top of the critical
security problems in PKI made it very undesirable for the community to
support. There is, however, nothing preventing you from maintaining it
out of tree, though I'd hope you would instead collaborate with the
community to perhaps address those problems and come up with a "PKIv2"
provider that has the qualities you want for your scale.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Ian Cordasco
-Original Message-
From: Thomas Herve 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 17, 2017 at 09:40:23
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [stable][heat] Heat stable-maint additions

> On Fri, Feb 17, 2017 at 4:14 PM, Matt Riedemann wrote:
> > On 2/15/2017 12:40 PM, Zane Bitter wrote:
> >>
> >> Traditionally Heat has given current and former PTLs of the project +2
> >> rights on stable branches for as long as they remain core reviewers.
> >> Usually I've done that by adding them to the heat-release group.
> >>
> >> At some point the system changed so that the review rights for these
> >> branches are no longer under the team's control (instead, the
> >> stable-maint core team is in charge), and as a result at least the
> >> current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and possibly
> >> others (Thomas Herve, Sergey Kraynev), haven't been added to the group.
> >> That's slowing down getting backports merged, amongst other things.
> >>
> >> I'd like to request that we update the membership to be the same as
> >> https://review.openstack.org/#/admin/groups/152,members
> >>
> >> Rabi Mishra
> >> Rico Lin
> >> Sergey Kraynev
> >> Steve Baker
> >> Steven Hardy
> >> Thomas Herve
> >> Zane Bitter
> >>
> >> I also wonder if the stable-maint team would consider allowing the Heat
> >> team to manage the group membership again if we commit to the criteria
> >> above (all current/former PTLs who are also core reviewers) by just
> >> adding that group as a member of heat-stable-maint?
> >>
> >> thanks,
> >> Zane.
> >>
> >
> > Reviewing patches on stable branches has different guidelines, expressed
> > here [1]. In the past when this comes up I've asked if the people being
> > asked to be added to the stable team for a project have actually been doing
> > reviews on the stable branches to show they are following the guidelines,
> > and at times when this has come up the people proposed (usually PTLs)
> > haven't, so I've declined at that time until they start actually doing
> > reviews and can show they are following the guidelines.
>
> Respecting the guidelines is totally fair, but review stats won't tell
> you much, at least in my case: I barely do any stable reviews because
> I don't have approve rights. In the case of Heat, 90% of the backports
> are without conflicts, so stable reviews are just about verifying the
> guidelines and that the patch matches what's in master.
>
> But, I've been working on Heat for 4 years, I made about 1400 reviews
> on it, and I've been PTL. And the same for the other people that Zane
> mentioned. I feel we should be trusted on stable branches.

That seems like a very poor excuse - "I can't approve so I don't
review". I'm a stable maintenance core because I was reviewing stable
branch changes first. I had a good track record, and both the existing
Glance stable maint core reviewers and the global team agreed I had
displayed sound judgment for those.

Without being able to assess the quality of your reviews, how should
anyone else trust you with the stability of those branches?

> > There are reviewstats tools for seeing the stable review numbers for Heat; I
> > haven't run them, though, to check against those proposed above, but it's
> > probably something I'd do first before just adding a bunch of people.
>
> I appreciate your guidance and input, but shouldn't we decide our
> stable maintainers, the same way we decide cores? The current list
> contains at least one person that doesn't contribute anymore, so it's
> not like it's super curated.

This is how every other service team works (Nova, Keystone, Glance,
etc.). Just because the global stable maint team hasn't removed an
inactive person doesn't invalidate their assessment of potential core
reviewers.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] revising the core list

2017-02-17 Thread Louis Taylor
On Fri, Feb 17, 2017 at 4:19 PM, Brian Rosmaita
 wrote:
> Finally, the following people are dropped from the Glance core list due
> to inactivity during Ocata.  On behalf of the entire Glance team, I
> thank each of you for your past service to Glance, and hope to see you
> again as Glance contributors:
> - Kairat Kushaev
> - Mike Fedosin
> - Louis Taylor

So long, and thanks for all the images!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Rabi Mishra
On Fri, Feb 17, 2017 at 8:44 PM, Matt Riedemann  wrote:

> On 2/15/2017 12:40 PM, Zane Bitter wrote:
>
>> Traditionally Heat has given current and former PTLs of the project +2
>> rights on stable branches for as long as they remain core reviewers.
>> Usually I've done that by adding them to the heat-release group.
>>
>> At some point the system changed so that the review rights for these
>> branches are no longer under the team's control (instead, the
>> stable-maint core team is in charge), and as a result at least the
>> current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and possibly
>> others (Thomas Herve, Sergey Kraynev), haven't been added to the group.
>> That's slowing down getting backports merged, amongst other things.
>>
>> I'd like to request that we update the membership to be the same as
>> https://review.openstack.org/#/admin/groups/152,members
>>
>> Rabi Mishra
>> Rico Lin
>> Sergey Kraynev
>> Steve Baker
>> Steven Hardy
>> Thomas Herve
>> Zane Bitter
>>
>> I also wonder if the stable-maint team would consider allowing the Heat
>> team to manage the group membership again if we commit to the criteria
>> above (all current/former PTLs who are also core reviewers) by just
>> adding that group as a member of heat-stable-maint?
>>
>> thanks,
>> Zane.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Reviewing patches on stable branches has different guidelines, expressed
> here [1]. In the past when this comes up I've asked if the people being
> asked to be added to the stable team for a project have actually been doing
> reviews on the stable branches to show they are following the guidelines,
> and at times when this has come up the people proposed (usually PTLs)
> haven't, so I've declined at that time until they start actually doing
> reviews and can show they are following the guidelines.
>
> There are reviewstats tools for seeing the stable review numbers for Heat;
> I haven't run them, though, to check against those proposed above, but it's
> probably something I'd do first before just adding a bunch of people.
>

Would it not be appropriate to trust the stable cross-project liaison for
heat when he nominates stable cores? Having been the PTL for Ocata and one
who struggled to get the backports on time for a stable release as planned,
I don't recall seeing many reviews from the stable maintenance core team for
them to be able to judge the quality of reviews. So I don't think it's fair
to decide eligibility based only on review numbers and stats.


> [1] https://docs.openstack.org/project-team-guide/stable-branches.html
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Jay Faulkner
+2 to all proposed -- Vasyl and Mario have been great folks to work with, and 
I'm glad they're getting core access.


Thanks for all the work over the years, Devananda, I know I learned quite a few 
things working with you. Hopefully you'll be able to dedicate time to ironic 
again someday. o/


-Jay Faulkner



From: Dmitry Tantsur 
Sent: Friday, February 17, 2017 1:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic] End-of-Ocata core team updates

Hi all!

I'd like to propose a few changes based on the recent contributor activity.

I have two candidates that look very good and pass the formal barrier of 3
reviews a day on average [1].

First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2] are
high, he's doing a lot of extremely useful work around networking and CI.

Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he has
been doing some quality reviews for critical patches in the Ocata cycle.

Active cores and interested contributors, please respond with your +-1 to these
suggestions.

Unfortunately, there is one removal as well. Devananda, our team leader for
several cycles since the very beginning of the project, has not been active on
the project for some time [4]. I propose to (hopefully temporary) remove him
from the core team. Of course, when (look, I'm not even saying "if"!) he comes
back to active reviewing, I suggest we fast-forward him back. Thanks for
everything Deva, good luck with your current challenges!

Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90
[2] http://stackalytics.com/?user_id=vsaienko=marks
[3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
[4] http://stackalytics.com/?user_id=devananda=marks




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] revising the core list

2017-02-17 Thread Nikhil Komawar
Brian, thanks for revamping the rotation. I don't have any comments on
specific cores and their inactivity, but I like the idea of us checking up
on the activity of the glance community on a regular basis.

:thumbsup:

On Fri, Feb 17, 2017 at 11:19 AM, Brian Rosmaita  wrote:

> Following Doug's suggestion in [0], I'm revising the Glance core list
> before next week's PTG.
>
> First, I'd again like to thank the following former Glance cores, who
> stepped down during the Ocata cycle, for their past service to Glance:
> - Sabari Murugesan
> - Stuart McLaren
>
> Second, I'd like to mention two people who were added to the Glance core
> team in Ocata, both of whom have made great contributions during the cycle:
> - Dharini Chandrasekar
> - Steve Lewis
>
> Third, I'd like to thank the Glance cores who continued to serve during
> the Ocata cycle:
> - Erno Kuvaja
> - Fei Long Wang
> - Flavio Percoco
> - Hemanth Makkapati
> - Ian Cordasco
> - Nikhil Komawar
>
> Finally, the following people are dropped from the Glance core list due
> to inactivity during Ocata.  On behalf of the entire Glance team, I
> thank each of you for your past service to Glance, and hope to see you
> again as Glance contributors:
> - Kairat Kushaev
> - Mike Fedosin
> - Louis Taylor
>
> This will leave some openings for new core contributors during the Pike
> cycle.  If you're interested in getting some advice about how to
> position yourself to become a Glance core, please seek out the active
> cores listed above during the PTG.  For people who won't be at the PTG,
> you can always look for active cores in #openstack-glance.
>
> Let's have a productive PTG!
>
> cheers,
> brian
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> February/112407.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Lucas Alvares Gomes
Hi,

Thanks Dmitry for putting this up!

> I'd like to propose a few changes based on the recent contributor activity.
>
> I have two candidates that look very good and pass the formal barrier of 3
> reviews a day on average [1].
>
> First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2]
> are high, he's doing a lot of extremely useful work around networking and
> CI.

+1

>
> Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he has
> been doing some quality reviews for critical patches in the Ocata cycle.
>

+1

> Active cores and interested contributors, please respond with your +-1 to
> these suggestions.
>
> Unfortunately, there is one removal as well. Devananda, our team leader for
> several cycles since the very beginning of the project, has not been active
> on the project for some time [4]. I propose to (hopefully temporary) remove
> him from the core team. Of course, when (look, I'm not even saying "if"!) he
> comes back to active reviewing, I suggest we fast-forward him back. Thanks
> for everything Deva, good luck with your current challenges!

That's unfortunate. Thanks for everything Deva, I'm hopeful that we'll
see you back!

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Jim Rollenhagen
On Fri, Feb 17, 2017 at 4:40 AM, Dmitry Tantsur  wrote:

> Hi all!
>
> I'd like to propose a few changes based on the recent contributor activity.
>
> I have two candidates that look very good and pass the formal barrier of 3
> reviews a day on average [1].
>
> First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats
> [2] are high, he's doing a lot of extremely useful work around networking
> and CI.
>

+2


>
> Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he
> has been doing some quality reviews for critical patches in the Ocata cycle.
>

+2


>
> Active cores and interested contributors, please respond with your +-1 to
> these suggestions.
>
> Unfortunately, there is one removal as well. Devananda, our team leader
> for several cycles since the very beginning of the project, has not been
> active on the project for some time [4]. I propose to (hopefully temporary)
> remove him from the core team. Of course, when (look, I'm not even saying
> "if"!) he comes back to active reviewing, I suggest we fast-forward him
> back. Thanks for everything Deva, good luck with your current challenges!
>

Sadly agree. Deva, thank you for everything you've done for the project -
from building a nemesis^W^Wnova-baremetal to founding ironic to leading the
project for quite some time, and everything in between. It's been a ride,
and I certainly hope you'll return sooner than later. :)

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin] senlin 3.0.0.0rc2 (ocata)

2017-02-17 Thread no-reply

Hello everyone,

A new release candidate for senlin for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/senlin/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/senlin/log/?h=stable/ocata

Release notes for senlin can be found at:

http://docs.openstack.org/releasenotes/senlin/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/senlin

and tag it *ocata-rc-potential* to bring it to the senlin
release crew's attention.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] revising the core list

2017-02-17 Thread Brian Rosmaita
Following Doug's suggestion in [0], I'm revising the Glance core list
before next week's PTG.

First, I'd again like to thank the following former Glance cores, who
stepped down during the Ocata cycle, for their past service to Glance:
- Sabari Murugesan
- Stuart McLaren

Second, I'd like to mention two people who were added to the Glance core
team in Ocata, both of whom have made great contributions during the cycle:
- Dharini Chandrasekar
- Steve Lewis

Third, I'd like to thank the Glance cores who continued to serve during
the Ocata cycle:
- Erno Kuvaja
- Fei Long Wang
- Flavio Percoco
- Hemanth Makkapati
- Ian Cordasco
- Nikhil Komawar

Finally, the following people are dropped from the Glance core list due
to inactivity during Ocata.  On behalf of the entire Glance team, I
thank each of you for your past service to Glance, and hope to see you
again as Glance contributors:
- Kairat Kushaev
- Mike Fedosin
- Louis Taylor

This will leave some openings for new core contributors during the Pike
cycle.  If you're interested in getting some advice about how to
position yourself to become a Glance core, please seek out the active
cores listed above during the PTG.  For people who won't be at the PTG,
you can always look for active cores in #openstack-glance.

Let's have a productive PTG!

cheers,
brian

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112407.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][magnum][heat][mistral][rally][requirements] late magnum client release

2017-02-17 Thread Doug Hellmann
The magnum team has requested a very late release of python-magnumclient
[1]. Given that the request is coming 4 weeks after the client
release deadline, the release team discussed it carefully and decided
we have 3 options:

1. approve the new release and branch from that BEFORE the final
   deadline next week

2. approve the new release and branch from that AFTER the final
   deadline next week

3. branch from the most current existing release before the final
   (the release that was available at the deadline)

The client library is listed as a dependency for heat, mistral, and
rally. Because we are in the quiet period leading up to the final
release, we do not want to introduce extra uncertainty by adding a
new version of a dependency while those projects are wrapping up
their final release and testing. However, we also do not want to
delay creating the branch any later than necessary because downstream
packagers rely on having the branches for their production pipelines.

Considering the balance of those two requirements, and the fact
that the upper constraints list managed by the requirements team
should mitigate most of the risk of the release, we have agreed to
make an exception and allow the new release. The purpose of this
email is to explain the thought process, and to make it clear that
this decision is an exception because of the shortened Ocata cycle,
and that we will not do this for Pike.

Dims has agreed to process the release and run some tests with
the affected projects today, so watch for news from him if there
are issues.

Doug

[1] https://review.openstack.org/#/c/435241/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][all] Ocata release candidates frozen

2017-02-17 Thread Doug Hellmann
Later today we will be entering the freeze period between the release
candidates and the final release next Wednesday. We have a couple
of releases in progress now for senlin and python-magnumclient, but
after those are completed we will not be releasing anything until
after the PTG.

I hope to see you in Atlanta!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Hierarchical Quota Implementation Issues

2017-02-17 Thread Sajeesh Cimson Sasi
Hi All,
 We have heard that Cinder faced some issues while implementing 
hierarchical quotas. It would be nice if somebody from the Cinder team could share the 
issues, as it will be useful for the PTG next week.

  best regards,

 sajeesh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3-subteam] Weekly IRC meeting canceled on February 23rd

2017-02-17 Thread Miguel Lavalle
Dear L3-subteam,

Due to the PTG next week in Atlanta, we will cancel our weekly meeting on
February 23rd. We will resume normally on March 2nd.

See you in Atlanta!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-17 Thread Sajeesh Cimson Sasi
Hi Matt,
  Thanks for adding hierarchical quotas to the PTG agenda.
The following is the spec that was proposed for Kilo:
  https://review.openstack.org/#/c/129420/
 best regards,
 sajeesh

From: Matt Riedemann [mriede...@gmail.com]
Sent: 16 February 2017 04:52:22
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Hierarchical quotas at the PTG?

On 2/15/2017 1:40 PM, Lance Bragstad wrote:
>
> Will there be a dedicated time finalized for this as we get closer to
> next week?
>

I have no idea. I'm guessing someone will ping me when the time comes
and I'll mosey on over. For whatever doesn't get covered, or needs to
spill over, Nova is already going to have a block of time to talk about
quota-related things (not just this) so we can pick it up there too
later in the week.

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Thomas Herve
On Fri, Feb 17, 2017 at 4:14 PM, Matt Riedemann  wrote:
> On 2/15/2017 12:40 PM, Zane Bitter wrote:
>>
>> Traditionally Heat has given current and former PTLs of the project +2
>> rights on stable branches for as long as they remain core reviewers.
>> Usually I've done that by adding them to the heat-release group.
>>
>> At some point the system changed so that the review rights for these
>> branches are no longer under the team's control (instead, the
>> stable-maint core team is in charge), and as a result at least the
>> current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and possibly
>> others (Thomas Herve, Sergey Kraynev), haven't been added to the group.
>> That's slowing down getting backports merged, amongst other things.
>>
>> I'd like to request that we update the membership to be the same as
>> https://review.openstack.org/#/admin/groups/152,members
>>
>> Rabi Mishra
>> Rico Lin
>> Sergey Kraynev
>> Steve Baker
>> Steven Hardy
>> Thomas Herve
>> Zane Bitter
>>
>> I also wonder if the stable-maint team would consider allowing the Heat
>> team to manage the group membership again if we commit to the criteria
>> above (all current/former PTLs who are also core reviewers) by just
>> adding that group as a member of heat-stable-maint?
>>
>> thanks,
>> Zane.
>>
>
> Reviewing patches on stable branches has different guidelines, expressed
> here [1]. In the past when this comes up I've asked if the people being
> asked to be added to the stable team for a project have actually been doing
> reviews on the stable branches to show they are following the guidelines,
> and at times when this has come up the people proposed (usually PTLs)
> haven't, so I've declined at that time until they start actually doing
> reviews and can show they are following the guidelines.

Respecting the guidelines is totally fair, but review stats won't tell
you much, at least in my case: I barely do any stable reviews because
I don't have approve rights. In the case of Heat, 90% of the backports
are without conflicts, so stable reviews are just about verifying the
guidelines and that the patch matches what's in master.

But, I've been working on Heat for 4 years, I made about 1400 reviews
on it, and I've been PTL. And the same for the other people that Zane
mentioned. I feel we should be trusted on stable branches.

> There are reviewstats tools for seeing the stable review numbers for Heat; I
> haven't run them, though, to check against those proposed above, but it's
> probably something I'd do first before just adding a bunch of people.

I appreciate your guidance and input, but shouldn't we decide our
stable maintainers, the same way we decide cores? The current list
contains at least one person that doesn't contribute anymore, so it's
not like it's super curated.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Heat stable-maint additions

2017-02-17 Thread Matt Riedemann

On 2/15/2017 12:40 PM, Zane Bitter wrote:

Traditionally Heat has given current and former PTLs of the project +2
rights on stable branches for as long as they remain core reviewers.
Usually I've done that by adding them to the heat-release group.

At some point the system changed so that the review rights for these
branches are no longer under the team's control (instead, the
stable-maint core team is in charge), and as a result at least the
current PTL (Rico Lin) and the previous PTL (Rabi Mishra), and possibly
others (Thomas Herve, Sergey Kraynev), haven't been added to the group.
That's slowing down getting backports merged, amongst other things.

I'd like to request that we update the membership to be the same as
https://review.openstack.org/#/admin/groups/152,members

Rabi Mishra
Rico Lin
Sergey Kraynev
Steve Baker
Steven Hardy
Thomas Herve
Zane Bitter

I also wonder if the stable-maint team would consider allowing the Heat
team to manage the group membership again if we commit to the criteria
above (all current/former PTLs who are also core reviewers) by just
adding that group as a member of heat-stable-maint?

thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Reviewing patches on stable branches has different guidelines, 
expressed here [1]. In the past when this comes up I've asked if the 
people being asked to be added to the stable team for a project have 
actually been doing reviews on the stable branches to show they are 
following the guidelines, and at times when this has come up the people 
proposed (usually PTLs) haven't, so I've declined at that time until 
they start actually doing reviews and can show they are following the 
guidelines.


There are reviewstats tools for seeing the stable review numbers for 
Heat; I haven't run them, though, to check against those proposed above, 
but it's probably something I'd do first before just adding a bunch of 
people.


[1] https://docs.openstack.org/project-team-guide/stable-branches.html

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutron-classifier (CCF) at the PTG

2017-02-17 Thread Thomas Morin

Hi,

I have no opinion on where/when this should happen, but will be 
interested in participating.


-Thomas

Tue Feb 14 2017 15:39:22 GMT+0100 (CET), Duarte Cardoso, Igor:


Hi neutron,

David and I would like to discuss the Common Classification Framework 
(CCF) (current approach based on openstack/neutron-classifier) at the 
PTG but we aren’t sure if the main session is the appropriate forum 
for that or if we should only have a meeting with the interested 
people and a few Neutron cores or PTL (to discuss if and how this work 
could be brought closer to Neutron itself).


I appreciate your feedback, thanks!

Best regards,

Igor.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 7)

2017-02-17 Thread Attila Darazs
As always, if these topics interest you and you want to contribute to 
the discussion, feel free to join the next meeting:


Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

* We discussed the state of the Quickstart-based update/upgrade 
jobs upstream. matbu is working on them and the changes for the jobs are 
under review. Sagi will help with adding project definitions upstream 
when the changes are merged.


* John started to draft out the details of the CI related PTG sessions[1].

* A couple of us brought up reviews that they wanted merged. We 
discussed the reasons, and agreed that sometimes an encouraging email to 
the mailing list is the most effective way to move important or slow-to-merge 
changes forward.


* We talked quite a lot about log collection upstream. Currently 
Quickstart doesn't collect logs exactly as upstream, and that might be 
okay, as we collect more, and hopefully in an easier-to-digest format.


* However, we might collect too much, and finding one's way around the logs 
is not that easy. So John suggested creating an entry page in HTML for 
the jobs that points to different possible places to find debug output.


* We also discussed adding back debug output to Elasticsearch, as the 
current console output doesn't contain everything; we log a lot of 
deployment output in separate log files in undercloud/home/stack/*.log


* Migration to the new Quickstart jobs will happen at or close to 10th 
of March, in the beginning of the Pike cycle when the gates are still 
stable.


That was all for this week.

Best regards,
Attila

[1] https://etherpad.openstack.org/p/tripleo-ci-roadmap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-17 Thread Arkady.Kanevsky
There is no project that can stand on its own.
Even Swift needs some identity management.

Thus, even if you are contributing to only one project, you are still dependent 
on many others, including QA and infrastructure and so on.

While most customers are looking at a few projects together and not all 
projects combined, it is still referred to as OpenStack. The release is of 
openstack.
There are a lot of features that span many projects, and just because a feature 
is done in one project does not mean it is sufficient for customer needs. HA, 
upgrades, and log consistency are all examples of this.

The strength of openstack is in the combination of projects working together. 

I will skip the topic of what is core and what is not.
I personally think that we did customers and ourselves a big disservice when we 
abandoned the integrated release concept, for the same reasons I stated above.
Thanks,
Arkady

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Friday, February 17, 2017 6:31 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help 
your team?

On 02/17/2017 12:01 AM, Chris Dent wrote:
> On Thu, 16 Feb 2017, Dan Prince wrote:
>
>> And yes. We are all OpenStack developers in a sense. We want to align 
>> things in the technical arena. But I think you'll also find that most 
>> people more closely associate themselves with a team within OpenStack 
>> than they perhaps do with the larger project. Many of us in TripleO 
>> feel that way I think. This is a healthy thing, being part of a team.
>> Don't make us feel bad because of it by suggesting that uber 
>> OpenStack graphics styling takes precedence.
>
> I'd very much like to have a more clear picture of the number of 
> people who think of themselves primarily as "OpenStack developers"
> or primarily as "$PROJECT developers".
>
> I've always assumed that most people in the community(tm) thought of 
> themselves as the former but I'm realizing (in part because of what 
> Dan's said here) that's bias or solipsism on my part and I really have 
> no clue what the situation is.
>
> Anyone have a clue?

I don't have a clue, and I don't personally think it matters. But I suspect the 
latter is the majority. At least because very few contributors have a chance to 
contribute to something OpenStack-wide, while many people get assigned to work 
on a project or a few of them.

That being said, I don't believe that the "OpenStack vs $PROJECT" question is 
as important as it may seem from this thread :)

>
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi Dolph:

We made the keystone.conf same with the example.

[token]
provider = fernet

[fernet_tokens]   # all configuration is default
#
# From keystone
#

# Directory containing Fernet token keys. (string value)
#key_repository = /etc/keystone/fernet-keys/

# This controls how many keys are held in rotation by keystone-manage
# fernet_rotate before they are discarded. The default value of 3 means that
# keystone will maintain one staged key, one primary key, and one secondary
# key. Increasing this value means that additional secondary keys will be kept
# in the rotation. (integer value)
# max_active_keys = 3
Dolph Mathews wrote on Friday, February 17, 2017 at 7:22 AM:

> Thank you for the data and your test scripts! As Lance and Stanek already
> alluded, Fernet performance is very sensitive to keystone's configuration.
> Can you share your keystone.conf as well?
>
> I'll also be in Atlanta and would love to talk Fernet performance, even if
> we don't have a formal time slot on the schedule.
>
> On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad 
> wrote:
>
> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi Lance:

We may try caching and other settings when we test Fernet tokens further.

Regarding uuid as the default token provider: it is worth remembering that a
big reason PKI tokens were implemented in the first place was to solve the
uuid token's performance problem.

The OpenStack APIs are RESTful and built on top of HTTP, but they can be
re-encapsulated by a web console so that the token is never exposed to the
internet. Such a deployment is more secure than a web application exposed
directly on the internet.

Therefore, the risk of token leakage is lower than for a web application, and
the risk introduced by the PKI token revocation delay can be reduced by
corresponding security measures.
Lance Bragstad wrote on Wednesday, 15 February 2017, at 11:08 PM:

> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atalanta next week, if it is possible, I'd like to
> > bring this topic to Keystone PTG. can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the 

Re: [openstack-dev] [ironic] Retiring python-wsmanclient

2017-02-17 Thread Dmitry Tantsur

The project is officially retired now.

On 02/14/2017 02:28 PM, Dmitry Tantsur wrote:

Hi everyone!

Following the discussion below, we would like to officially retire
python-wsmanclient as soon as possible. We haven't maintained it for a while; it
hasn't had any commits or releases since Aug 2016.

On 11/07/2016 02:51 PM, Dmitry Tantsur wrote:

Hi folks!

In view of the Ironic governance discussion [1] I'd like to talk about
wsmanclient [2] future.

This project was created to split away wsman code from python-dracclient to be
reused in other drivers (I can only think of AMT right now). This was never
finished: dracclient still uses its internal wsman implementation.

To make it worse, the guy behind this effort (ifarkas) has left our team,
python-dracclient is likely to leave Ironic governance per [1], and the AMT
driver is going to leave the Ironic tree.

At least the majority of the folks currently behind dracclient (Miles, Lucas and
myself) do not have resources to continue this wsmanclient effort. Unless
somebody is ready to take over both wsmanclient itself and the effort to port
dracclient, I suggest we abandon wsmanclient.

Any thoughts?

[1] https://review.openstack.org/#/c/392685/
[2] https://github.com/openstack/python-wsmanclient

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi David:

We have not found a perfect solution to the Fernet performance issue yet; we
will try different crypt_strength settings with Fernet in the future.

Several of our customers run cascades of more than six regions, and keeping
the keystone data synchronized between those regions has troubled us a lot.
With PKI tokens that synchronization is unnecessary, because the PKI token
itself carries the role information.

The PKI token has been verified to support such large-scale production
environments, in which even the uuid token has performance problems.

The PKI token is large, but its size also makes it harder to guess or steal.
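(For reference, the knob David points at below lives in the [DEFAULT] section
of keystone.conf; the value shown is only an illustration of the kind of
tuning we would try, not a recommendation.)

    [DEFAULT]
    # Number of hashing rounds keystone uses; lower values trade security for
    # speed. The documented default in that era was 10000; 5000 is purely an
    # example value.
    crypt_strength = 5000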
David Stanek wrote on Wednesday, 15 February 2017, at 10:45 PM:

> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atalanta next week, if it is possible, I'd like to
> > bring this topic to Keystone PTG. can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the slowness is
> the crypto (which it was in the past) then you can tune it a little bit.
> Another option might be to keep the same token flow and find a faster
> method for hashing a token.
>
> 1.
> http://git.openstack.org/cgit/openstack/keystone/tree/etc/keystone.conf.sample#n67
>
>
> --
> david stanek
> web: https://dstanek.com
> twitter: https://twitter.com/dstanek
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-17 Thread David Medberry
Replying more to the "thread" and stream of thought than a specific message.

1) Yes, it is confusing. Rikimaru's description is more or less what I
believe.
2) Because it is confusing, I continue to use NovaClient commands instead
of OpenstackClient

I don't know what drove the creation of the OpenStack Client server
commands the way that they are; it might take a good deep dive into Launchpad
and git to find out. I.e., I can't "guess" what drove the design, as it
seems wrong and overly opaque and complex.
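(For illustration, roughly how the two clients compare -- server and host
names are placeholders, and the flag spellings are from memory, so double-check
them against your client version:)

    # novaclient is explicit about which migration you get
    nova migrate demo-vm                          # cold migration / resize
    nova live-migration demo-vm                   # live migration (shared storage)
    nova live-migration --block-migrate demo-vm   # block live migration

    # openstackclient hides everything behind one command
    openstack server migrate demo-vm                              # cold migration
    openstack server migrate --live target-host demo-vm           # live migration
    openstack server migrate --live target-host --block-migration demo-vm
    # ...and --block-migration without --live is silently ignored (bug 1662755)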

On Fri, Feb 17, 2017 at 3:38 AM, Rikimaru Honjo <
honjo.rikim...@po.ntts.co.jp> wrote:

> Hi Marcus,
>
>
> On 2017/02/17 15:05, Marcus Furlong wrote:
>
>> On 17 February 2017 at 16:47, Rikimaru Honjo
>>  wrote:
>>
>>> Hi all,
>>>
>>> I found and reported a unkind behavior of "openstack server migrate"
>>> command
>>> when I maintained my environment.[1]
>>> But, I'm wondering which solution is better.
>>> Do you have opinions about following my solutions by operating point of
>>> view?
>>> I will commit a patch according to your opinions if those are gotten.
>>>
>>> [1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
>>> ---
>>> [Actual]
>>> If user run "openstack server migrate --block-migration ",
>>> openstack client call Cold migration API.
>>> "--block migration" option will be ignored if user don't specify
>>> "--live".
>>>
>>> But, IMO, this is unkindly.
>>> This cause unexpected operation for operator.
>>>
>>
>> +1 This has confused/annoyed me before.
>>
>>
>>> P.S.
>>> "--shared-migration" option has same issue.
>>>
>>
>> For the shared migration case, there is also this bug:
>>
>>https://bugs.launchpad.net/nova/+bug/1459782
>>
>> which, if fixed/implemented would negate the need for
>> --shared-migration? And would fix also "nova resize" on shared
>> storage.
>>
> In my understanding, that report says about libvirt driver's behavior.
> In the other hand, my report says about the logic of openstack client.
>
> Current "openstack server migrate" command has following logic:
>
> * openstack server migrate
>+-User don't specify "--live"
>| + Call cold-migrate API.
>|   Ignore "--block-migration" and "--shard-migration" option if user
> specify those.
>|
>+-User specify "--live"
>| + Call live-migration API.
>|
>+-User specify "--live --block-migration"
>| + Call block-live-migration API.
>|
>+-User specify "--live --shared-migration"
>  + Call live-migration API.[1]
>
> [1]
> "--shared-migration" means live-migration(not block-live-migrate) in
> "server migrate" command.
> In other words, "server migrate --live" and "server migrate --live
> --shared-migration"
> are same operation.
> I'm wondering why "--shared-migration" is existed...
>
>
> Cheers,
>> Marcus.
>>
>>
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> NTT Software Corporation
> Cloud & Security Business Division, 1st Business Unit (CS1BU)
> Rikimaru Honjo
> TEL.  : 045-212-7539
> E-mail: honjo.rikim...@po.ntts.co.jp
> 220-0012
>   4-4-5 Minatomirai, Nishi-ku, Yokohama
>   Yokohama i-Mark Place, 13F
>
>
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [winstackers][hyperv] Atlanta PTG meetup

2017-02-17 Thread Claudiu Belu
Hello,

Our team will be attending the Atlanta PTG next week. We don't have a dedicated 
session, but if you want to meet up and discuss various Windows / Hyper-V 
related features in OpenStack projects (what has been done, what's in the 
pipeline, future plans, or what you'd like to see in future releases) you can 
simply drop me an email, or ping me on IRC (claudiub), and we'll schedule a 
meeting.

We will also be attending other sessions on other projects as well.

Best regards,

Claudiu Belu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 12

2017-02-17 Thread Chris Dent



Thanks to edleafe for doing last week's resource providers and placement 
update. This one will try to situate things for next week's PTG. Because of the 
PTG there will be no update next week. If everything goes to plan there should 
be a wealth of etherpads and some summaries which will be further summarized in 
the following week.

# What Matters Most

To make sure we address all the most relevant stuff next week, it would be 
useful for anyone interested or invested in placement to have a review of some 
etherpads, even if you aren't going to be at the PTG.

The overarching nova etherpad for the PTG is at:

https://etherpad.openstack.org/p/nova-ptg-pike

The list of things related to placement is long enough to get its own:

https://etherpad.openstack.org/p/nova-ptg-pike-placement

There's also an etherpad for doing a retrospective on the ocata cycle. That 
has some placement related things on it. If you have some things to say on how 
placement development went this cycle, good or bad, please add them to:

https://etherpad.openstack.org/p/nova-ocata-retrospective

# What's Changed

In the last week or so the main things to have changed are merges to master and 
backports of tweaks to status checks and deployment ordering (in systems like 
TripleO) and a fair number of bugs (for example, two fixes to generated SQL to 
make it work with postgresql). Master is now pike, so the functionality that 
didn't make it into ocata is once again being actively worked on (see lots of 
links below).

The full extent of the "Placement API Developer Notes" has merged. It's at:

https://docs.openstack.org/developer/nova/placement_dev.html

If you're working on placement and have not read that, it's probably worth 
reading. If you find that something is missing, please say so, so we can figure 
out how to fix it.

# Main Themes

The first two items listed below are the immediate feature priorities for 
placement.

## Custom Resource Classes (Ironic Inventories)

https://review.openstack.org/#/q/status:open+topic:bp/custom-resource-classes

We hoped to get this into Ocata but we didn't manage to bring it together in a 
way that was comprehensible and tested enough to be confident. There have since 
been some adjustments that should help with that, but there remain some concerns 
on where in the layers of code the inventory handling should happen.
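As a reminder of the API surface this work builds on, creating such a class
boils down to a call like the one below (endpoint, token and class name are
placeholders; the microversion is, to my recollection, the one that introduced
resource classes):

    curl -X POST https://placement.example.com/resource_classes \
         -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: placement 1.2" \
         -H "Content-Type: application/json" \
         -d '{"name": "CUSTOM_BAREMETAL_GOLD"}'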

## Real use of Shared Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/shared-resources-pike

One of the big payoffs of the resource providers concept is that we'll finally 
be able to allocate use of shared resources (such as farms of disk) in a 
rational fashion. The changes at the topic above start that work. There's also 
an etherpad where discussion of some of
the options is in progress:

https://etherpad.openstack.org/p/decision-finding-shared-inventory

You'll see from that we've still got some distance to go before we're all on 
the same page about how this is supposed to work.

## Resource Provider Traits

https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits
https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-tags

(We should decide on just one bp name here.)

This is the qualitative aspect of resource providers (e.g., disk which is SSD) 
and will allow requests to express preferences or requirements for types of 
things, not just quantities of things.

## Nested Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

How to represent resources that are within other resources. Mostly to do with 
things like NUMA functionality and PCI devices hosted on a compute node.

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

The start of creating an API ref for the placement API. Not a lot there yet as 
I haven't had much of an opportunity to move it along. There is, however, 
enough there for content to be started, if people have the opportunity to do 
so. Check with me to divvy up the work if you'd like to contribute.

## Claims into the Scheduler

This is something that will be talked about at the PTG as part of long term 
placement planning. Eventually requesting and claiming resources will be a 
single request to the placement API. We need to figure out the flow of how that 
is going to work.

# Other Code/Specs

Miscellaneous changes in progress. Bugs fixes, cleanups, leftovers. These need 
review and eventual merging.

* https://review.openstack.org/#/c/428612/
  Better exception and response message when failing to create a
  resource class.

* https://bugs.launchpad.net/nova/+bug/1635182
  Fixing it so we don't have to add json_error_formatter everywhere.
  There's a collection of related fixes attached to that bug report.

  Pushkar, you might want to make all of those have the same topic, or
  put them in a stack of related changes.

* https://review.openstack.org/#/q/status:open+topic:valid_inventories
  Fixes 

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-17 Thread Jeremy Stanley
On 2017-02-16 18:43:29 -0800 (-0800), John Dickinson wrote:
[...]
> Second, the foundation messaging around the PTG emphasizes
> per-project developers. From https://www.openstack.org/ptg/
> 
> The event is not optimized for non-contributors or people who
> can’t relate to a specific project team. Each team is given a
> single room and attendees are expected to spend all their time
> with their project team. Attendees who can’t pick a specific team
> to work with are encouraged to skip the event in favor of
> attending the OpenStack Summit, where a broader range of topics is
> discussed.

Hard to pin this one on the foundation. The way the event is
designed and organized is based on feedback and recommendations from
our community over the past several years, so in this case it seems
they're just trying to give us what we said we wanted.

Of course, to your point, it does still make a statement about
developer identity and how we may collectively view ourselves (that
we just wanted easier cross-project interaction but not a fully
cross-project event).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] mistral-dashboard 4.0.0.0rc2 (ocata)

2017-02-17 Thread no-reply

Hello everyone,

A new release candidate for mistral-dashboard for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral-dashboard/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:


http://git.openstack.org/cgit/openstack/mistral-dashboard/log/?h=stable/ocata

Release notes for mistral-dashboard can be found at:

http://docs.openstack.org/releasenotes/mistral-dashboard/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] mistral 4.0.0.0rc2 (ocata)

2017-02-17 Thread no-reply

Hello everyone,

A new release candidate for mistral for the end of the Ocata
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/mistral/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/ocata

Release notes for mistral can be found at:

http://docs.openstack.org/releasenotes/mistral/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-17 Thread Dmitry Tantsur

On 02/17/2017 12:01 AM, Chris Dent wrote:

On Thu, 16 Feb 2017, Dan Prince wrote:


And yes. We are all OpenStack developers in a sense. We want to align
things in the technical arena. But I think you'll also find that most
people more closely associate themselves to a team within OpenStack
than they perhaps do with the larger project. Many of us in TripleO
feel that way I think. This is a healthy thing, being part of a team.
Don't make us feel bad because of it by suggesting that uber OpenStack
graphics styling takes precedent.


I'd very much like to have a more clear picture of the number of
people who think of themselves primarily as "OpenStack developers"
or primarily as "$PROJECT developers".

I've always assumed that most people in the community™ thought of
themselves as the former but I'm realizing (in part because of what
Dan's said here) that's bias or solipsism on my part and I really
have no clue what the situation is.

Anyone have a clue?


I don't have a clue, and I don't personally think it matters. But I suspect the 
latter is the majority. At least because very few contributors have a chance to 
contribute to something OpenStack-wide, while many people get assigned to work 
on a project or a few of them.


That being said, I don't believe that the "OpenStack vs $PROJECT" question is as 
important as it may seem from this thread :)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [kolla] my work on Debian and non-x86 architectures

2017-02-17 Thread Marcin Juszkiewicz
W dniu 17.02.2017 o 12:47, Marcin Juszkiewicz pisze:
> As you know I added support for non-x86 architectures: aarch64 and
> ppc64le. Also resurrected Debian support.

Forgot two things:

Blueprint:
https://blueprints.launchpad.net/kolla/+spec/multiarch-and-arm64-containers

Logs: http://people.linaro.org/~marcin.juszkiewicz/kolla/ (updated every
few minutes)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] PTG cross team session

2017-02-17 Thread Dmitry Tantsur
Thanks! I wonder if 1 hour is actually enough though, given the complexity of 
the problem (actually three problems already proposed for discussion in the 
etherpad). I'd personally double it (at least).


On 02/17/2017 10:16 AM, Kevin Benton wrote:

Hi,

I added a slot on the calendar to get a room from 2:30-3:30PM on Tuesday in the
Macon room.[1] Let me know if anyone has any conflicts with this.

1. https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms

On Thu, Feb 16, 2017 at 8:25 AM, Vasyl Saienko wrote:

Hello Ironic/Neutron teams,


Ironic team would like to schedule cross session with Neutron team on Mon -
Tues except for Tue 9.30 - 10.00
The topics we would like to talk are added
to: https://etherpad.openstack.org/p/neutron-ptg-pike L151


Sincerely,
Vasyl Saienko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






[openstack-dev] [kolla] my work on Debian and non-x86 architectures

2017-02-17 Thread Marcin Juszkiewicz
OK, I mailed separately about each of these topics, but I work on them at once,
so they are hard to split.

As you know I added support for non-x86 architectures: aarch64 and
ppc64le. Also resurrected Debian support.

# A bit of background

At Linaro we work on getting AArch64 (64-bit ARM, arm64) to be present
in many places. We have at least two OpenStack instances running at the
moment - on AArch64 hardware only.

First we used Debian/jessie and 'liberty' version. Was working. Not best
but we helped many projects by providing virtual machines for porting
software.

It was built from packages and later (when 'mitaka' was released) we
moved to a virtualenv per component. Our second "cloud" runs that. With
proper Neutron networking, live migration and few other nice things.

But virtualenvs were done as quick solution. We decided to move to
Docker containers for next release.

And Kolla was chosen as a tool for it. We do not like to reinvent the
wheel again and again...


# Non-x86 support in Kolla

Kolla is x86-64 centric, as is most software nowadays. But thanks to
work done by Sajauddin Mohammad I had something [1] to use as a base for
adding aarch64 support.

1. https://review.openstack.org/#/c/423239/6

I took his patch, slashed out most of it and concentrated on getting
minimal changes needed to get something built on AArch64. The effect was
sent for review [2] and is now at its 9th version (a few more changes are coming).

2. https://review.openstack.org/#/c/430940

Docker images started to appear. But at the beginning I was building Ubuntu
ones, as Debian support was "basically abandoned, on its way out". From the
CentOS guys I got confirmation that an official Docker image would be
generated (it is done already).

I spent some time on making sure that the whole non-x86 support is free from
any hardcoding wherever possible. As you can see in my working branch
[3] it went quite well. Most of the arch-related changes are about
"distro does not provide package XYZ for that architecture" or about
handling of external repositories.

3.
https://github.com/hrw/kolla/commits/to-merge/multiarch-and-arm64-containers


# Debian support

And here we come to Debian support. At Linaro we decided to support two
community based distributions: CentOS and Debian. But Debian was on
its way out in Kolla...

As this was not related much to non-x86 work I decided to use one of
x86-64 machines for that stuff.

First builds were against 'jessie-backports' base tag. I had to make
a patch to tell APT that if I want backports then I really want them. It
was sent for review [4] as rest of patches.

4. https://review.openstack.org/432780

Images were building, but not as many as for Ubuntu. So I went through
all of them and enabled Debian where possible. The resulting patch
went for review [5] as usual.

5. https://review.openstack.org/432787

Effect was quite nice (on x86-64):

debian-binary:  158
debian-source:  201

But 'jessie' was missing several packages even with backports enabled.
So after discussion with my team I decided to drop it and go for
Debian/testing 'stretch' one instead. It is already frozen for release
so no big changes are allowed. Patch in review [6] of course.

6. https://review.openstack.org/434453

At that moment I abandoned patch [4] as 'jessie-backports' are not
something I plan to support.

It turned out that the 'stretch' images have a slightly different set of packages
installed than 'jessie' had. 'gnupg' and 'dirmngr' were missing, while
we need them for importing GPG keys into APT. A proper patch went to
review [7] again.

7. https://review.openstack.org/434431

Did rebuild on x86-64:

stretch-binary: 137
stretch-source: 195

A bit less than 'jessie-backports' had, right? Sure, but it also shows
that I have to make a new build to check numbers (laptop already has
~1500 docker images generated by kolla).
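
(For anyone who wants to reproduce those numbers, the builds boil down to
invocations like the ones below; the flags are standard kolla-build options
and the base tag is simply the Docker Hub tag I used.)

    # source-type images on top of the debian:stretch base image
    kolla-build --base debian --base-tag stretch --type source

    # and the binary-type equivalent
    kolla-build --base debian --base-tag stretch --type binary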

# Cleaning of old Power patch

Remember [1], which all of this started from? I did not forget it, and after
building all those images I went back to it.

Some parts are just fugly so I skipped them but others were useful if
done properly. That's how new changes were done: [8], [9], [10] and some
updates to previous ones ([2], [5]).

8.  https://review.openstack.org/434810
9.  https://review.openstack.org/434809
10. https://review.openstack.org/434817

Then I managed to get remote hands on one of the Power machines at Red Hat
and started builds:

debian-binary:  134
debian-source:  184
ubuntu-binary:  147
ubuntu-source:  190

No CentOS builds as there is no centos/ppc64le image available yet.

# Summary

Non-x86 support looks quite nice. There are some images which cannot be
built, as they rely on external repositories that provide no aarch64 or
ppc64le packages.

Debian 'stretch' support is not perfect yet, but it is something which
I plan to maintain, so the situation will keep improving. Note that most
of my work will go into 'source' type builds, as we want to have the same
images for both Debian and CentOS systems.


Next week I am on holidays Tue->Sun.



Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Vladyslav Drok
On Fri, Feb 17, 2017 at 11:40 AM, Dmitry Tantsur 
wrote:

> Hi all!
>
> I'd like to propose a few changes based on the recent contributor activity.
>
> I have two candidates that look very good and pass the formal barrier of 3
> reviews a day on average [1].
>
> First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats
> [2] are high, he's doing a lot of extremely useful work around networking
> and CI.
>

+1


>
> Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he
> has been doing some quality reviews for critical patches in the Ocata cycle.
>

+1


>
> Active cores and interested contributors, please respond with your +-1 to
> these suggestions.
>
> Unfortunately, there is one removal as well. Devananda, our team leader
> for several cycles since the very beginning of the project, has not been
> active on the project for some time [4]. I propose to (hopefully temporary)
> remove him from the core team. Of course, when (look, I'm not even saying
> "if"!) he comes back to active reviewing, I suggest we fast-forward him
> back. Thanks for everything Deva, good luck with your current challenges!
>

+1 :( Many thanks to Devananda for all his work since the very start of the
project!

Vlad


>
> Thanks,
> Dmitry
>
> [1] http://stackalytics.com/report/contribution/ironic-group/90
> [2] http://stackalytics.com/?user_id=vsaienko=marks
> [3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
> [4] http://stackalytics.com/?user_id=devananda=marks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Why design VIR_DOMAIN_SHUTDOWN equals VIR_DOMAIN_SHUTOFF

2017-02-17 Thread luogan...@chinamobile.com
Hi, guys 

I find that nova defines VIR_DOMAIN_SHUTDOWN and VIR_DOMAIN_SHUTOFF as equal.
The comment in the source code says:
'
# The libvirt API doc says that DOMAIN_SHUTDOWN means the domain 
# is being shut down. So technically the domain is still 
# running. SHUTOFF is the real powered off state.  But we will map 
# both to SHUTDOWN anyway.
'
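For reference, the mapping in question (a condensed sketch paraphrased from
nova's libvirt driver; the exact module path may differ) looks roughly like this:

    # Condensed sketch of nova's libvirt power-state mapping (approximately
    # nova/virt/libvirt/host.py); both the "shutting down" and "shut off"
    # libvirt states collapse into nova's single SHUTDOWN power state.
    import libvirt

    from nova.compute import power_state

    LIBVIRT_POWER_STATE = {
        libvirt.VIR_DOMAIN_NOSTATE: power_state.NOSTATE,
        libvirt.VIR_DOMAIN_RUNNING: power_state.RUNNING,
        libvirt.VIR_DOMAIN_BLOCKED: power_state.RUNNING,
        libvirt.VIR_DOMAIN_PAUSED: power_state.PAUSED,
        libvirt.VIR_DOMAIN_SHUTDOWN: power_state.SHUTDOWN,  # still shutting down
        libvirt.VIR_DOMAIN_SHUTOFF: power_state.SHUTDOWN,   # really powered off
        libvirt.VIR_DOMAIN_CRASHED: power_state.CRASHED,
        libvirt.VIR_DOMAIN_PMSUSPENDED: power_state.SUSPENDED,
    }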
This design causes some problems. For example, in the _clean_shutdown function,
the author assumes the VM is really shut down if its power state is SHUTDOWN.
But in fact the SHUTDOWN state could be either VIR_DOMAIN_SHUTDOWN or
VIR_DOMAIN_SHUTOFF, so this assumption is not right and may cause other
problems like https://bugs.launchpad.net/nova/+bug/1642689

    def _clean_shutdown(self, instance, timeout, retry_interval):
        """Attempt to shutdown the instance gracefully.

        :param instance: The instance to be shutdown
        :param timeout: How long to wait in seconds for the instance to
                        shutdown
        :param retry_interval: How often in seconds to signal the instance
                               to shutdown while waiting

        :returns: True if the shutdown succeeded
        """

        # List of states that represent a shutdown instance
        SHUTDOWN_STATES = [power_state.SHUTDOWN,
                           power_state.CRASHED]

        try:
            guest = self._host.get_guest(instance)
        except exception.InstanceNotFound:
            # If the instance has gone then we don't need to
            # wait for it to shutdown
            return True

        state = guest.get_power_state(self._host)
        if state in SHUTDOWN_STATES:
            LOG.info(_LI("Instance already shutdown."),
                     instance=instance)
            return True

So I wonder why the original design maps VIR_DOMAIN_SHUTDOWN to the same power
state as VIR_DOMAIN_SHUTOFF. Does anyone know?


luogan...@chinamobile.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-17 Thread Thierry Carrez
Ed Leafe wrote:
> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
> 
>> When we signed off on the Big Tent changes we said competition
>> between projects was desirable, and that deployers and contributors
>> would make choices based on the work being done in those competing
>> projects. Basically, the market would decide on the "optimal"
>> solution. It's a hard message to hear, but that seems to be what
>> is happening.
> 
> This.
> 
> We got much better at adding new things to OpenStack. We need to get better 
> at letting go of old things.

Yes.

With the model we've built, it's difficult to move some project teams
from "official" to "unofficial": as long as there is the remnants of a
team working on a project, and this team is clearly made of OpenStack
community members following our principles, our governance model does
not leave many walls you can lean on.

But there is one: does the project help with the OpenStack mission, or
does it hurt it ? Some projects do fall below the level of
maintenance/contribution necessary to present a satisfying experience,
and keeping those in our blessed, official "mix" hurts us more than it
helps us. Some other projects make us appear as (badly) trying to
compete with successful other ecosystems, while we should just co-opt
those ecosystems -- this also hurts us more than it helps us in
achieving the OpenStack mission.

These will be difficult discussions, but at this precise stage in OpenStack
life we need to have them. Come talk to me next week if interested.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Sam Betts (sambetts)
+1 to both, thanks for your contributions Vasyl and Mario!!! 

Sam 

On 17/02/2017, 09:40, "Dmitry Tantsur"  wrote:

Hi all!

I'd like to propose a few changes based on the recent contributor activity.

I have two candidates that look very good and pass the formal barrier of 3 
reviews a day on average [1].

First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2] 
are 
high, he's doing a lot of extremely useful work around networking and CI.

Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he 
has 
been doing some quality reviews for critical patches in the Ocata cycle.

Active cores and interested contributors, please respond with your +-1 to 
these 
suggestions.

Unfortunately, there is one removal as well. Devananda, our team leader for 
several cycles since the very beginning of the project, has not been active 
on 
the project for some time [4]. I propose to (hopefully temporary) remove 
him 
from the core team. Of course, when (look, I'm not even saying "if"!) he 
comes 
back to active reviewing, I suggest we fast-forward him back. Thanks for 
everything Deva, good luck with your current challenges!

Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90
[2] http://stackalytics.com/?user_id=vsaienko=marks
[3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
[4] http://stackalytics.com/?user_id=devananda=marks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] PTG Friday activities

2017-02-17 Thread Masahito MUROI

Hi Eric,

Both look interesting, so I'm OK with either. If I need to pick one of
them, I prefer the Aquarium.


Masahito

On 2017/02/16 8:06, Eric K wrote:

Hi all,

Here are some options (thinrichs originally suggested) we could consider
for a Friday daytime outing for those interested.

Anyone interested?
Any other ideas?

Georgia Aquarium
- 1st or 2nd largest aquarium in the world.
- #1 on tripAdvisor
- $31.95+tax/adult (advanced online purchase)
http://www.georgiaaquarium.org
https://www.tripadvisor.com/Attraction_Review-g60898-d588792-Reviews-Georgia_Aquarium-Atlanta_Georgia.html

Atlanta Botanical Garden
- #3 on tripAdvisor
- $21.95+tax/adult
http://atlantabg.org
https://www.tripadvisor.com/Attraction_Review-g60898-d104713-Reviews-Atlanta_Botanical_Garden-Atlanta_Georgia.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Dmitry Tantsur

Hi all!

I'd like to propose a few changes based on the recent contributor activity.

I have two candidates that look very good and pass the formal barrier of 3 
reviews a day on average [1].


First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2] are 
high, he's doing a lot of extremely useful work around networking and CI.


Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he has 
been doing some quality reviews for critical patches in the Ocata cycle.


Active cores and interested contributors, please respond with your +-1 to these 
suggestions.


Unfortunately, there is one removal as well. Devananda, our team leader for 
several cycles since the very beginning of the project, has not been active on 
the project for some time [4]. I propose to (hopefully temporary) remove him 
from the core team. Of course, when (look, I'm not even saying "if"!) he comes 
back to active reviewing, I suggest we fast-forward him back. Thanks for 
everything Deva, good luck with your current challenges!


Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90
[2] http://stackalytics.com/?user_id=vsaienko=marks
[3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
[4] http://stackalytics.com/?user_id=devananda=marks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] PTG cross team session

2017-02-17 Thread Kevin Benton
Hi,

I added a slot on the calendar to get a room from 2:30-3:30PM on Tuesday in
the Macon room.[1] Let me know if anyone has any conflicts with this.

1. https://ethercalc.openstack.org/Pike-PTG-Discussion-Rooms

On Thu, Feb 16, 2017 at 8:25 AM, Vasyl Saienko 
wrote:

> Hello Ironic/Neutron teams,
>
>
> Ironic team would like to schedule cross session with Neutron team on Mon
> - Tues except for Tue 9.30 - 10.00
> The topics we would like to talk are added to: https://etherpad.
> openstack.org/p/neutron-ptg-pike L151
>
>
> Sincerely,
> Vasyl Saienko
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][requirements] Team dinner @ PTG

2017-02-17 Thread Thierry Carrez
Thierry Carrez wrote:
> Hi stable/requirements/release folks!
> 
> We are trying to organize a dinner for one of the PTG nights for people
> involved with the Stable / Release Management / Requirements teams
> (including team liaisons !)
> 
> If interested please enter your availability on:
> https://framadate.org/XIbbQnbxSKRPW1yK
> 
> I have a small preference for Tuesday (end of our room) or Wednesday
> (release day party !), but can make other days work as well. I removed
> Monday from the options since that's when the Infra team will have
> dinner and we have lots of overlap.

Looks like Tuesday is the winner.

We currently have 7 people signed up, I'll wait a bit and probably place
a reservation over the weekend. So sign up today if interested !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceilometer event-list empty

2017-02-17 Thread Sam Huracan
Hi Gordon,

I've solved this issue.

I checked the event.sample queue and it had nothing in it.
The publishers section of my event_pipeline.yaml contained only
"- notifier://?topic=alarm.all", so event information was poured only into the
alarm queue instead of the event queue.

After I added "- notifier://" and "- notifier://?topic=event" to the config,
everything works as it should. :)
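
(For anyone hitting the same thing, a sketch of the resulting
event_pipeline.yaml, assuming the default source/sink names from the sample
file:)

    ---
    sources:
        - name: event_source
          events:
              - "*"
          sinks:
              - event_sink
    sinks:
        - name: event_sink
          transformers:
          publishers:
              - notifier://
              - notifier://?topic=alarm.all
              - notifier://?topic=event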

Thank you



2017-02-16 19:46 GMT+07:00 gordon chung :

>
>
> On 15/02/17 09:27 PM, Sam Huracan wrote:
> >
> > I check mongodb has event collection, but it is empty
> > http://prntscr.com/e9b96w
> >
> > I do not see any error in Ceilometer log.
> >
> > Could you check my
> > ceilometer.conf? http://paste.openstack.org/show/598925/
> > 
>
> you'll need to debug your system. you can probably start by checking if
> you have anything in your event.sample queue (try disabling your
> collector for a bit). if you get nothing there, it has something to do
> with notification agent not generating events or publishing events
> (check your event_pipeline.yaml)
>
> i should mention you don't need a collector in Ocata+
>
> cheers,
>
> --
> gord
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev