Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-04 Thread Steven Dake (stdake)
+1

On 5/31/18, 10:08 AM, "Borne Mace"  wrote:

Greetings all,

I would like to propose the addition of Steve Noyes to the kolla-cli 
core reviewer team.  Consider this nomination as my personal +1.

Steve has a long history with the kolla-cli and should be considered its 
co-creator, as probably half or more of the existing code is due to his 
efforts.  Since it was pushed upstream he has been working diligently to 
improve the stability and testability of the CLI, and he has the second 
most commits on the project.

The kolla core team consists of 19 people, and the kolla-cli team of 2, 
for a total of 21.  Steve therefore requires a minimum of 11 votes (so 
just 10 more after my +1), with no veto (-2) votes, within a 7-day voting 
window ending on June 6th.  Voting will be closed immediately on a veto 
or in the case of a unanimous vote.

As I'm not sure how active all of the 19 kolla cores are, your attention 
and timely vote are much appreciated.

Thanks!

-- Borne



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core)

2018-06-04 Thread Gilles Dubreuil



On 05/06/18 13:02, Akihiro Motoki wrote:

Hi Gilles,

On Tue, Jun 5, 2018 at 10:46, Gilles Dubreuil wrote:




On 04/06/18 22:20, Doug Hellmann wrote:
>> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil  wrote:
>>
>> Hi,
>>
>> Can someone from the core team request infra to create a
feature branch for the Proof of Concept we agreed to do during API
SIG forum session [1] at Vancouver?
>>
>> Thanks,
>> Gilles
>>
>> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum
> You can do this through the releases repo now. See the README
for instructions.
>
> Doug

Great, thanks Doug!

What about the UUID associated? Do I generate one?:

branches:
   - name: feature/graphql
 location:
   openstack/neutron: 


This needs to be a valid commit hash.
You can specify the latest commit ID of the neutron repo.

Akihiro


Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core)

2018-06-04 Thread Akihiro Motoki
Hi Gilles,

On Tue, Jun 5, 2018 at 10:46, Gilles Dubreuil wrote:

>
>
> On 04/06/18 22:20, Doug Hellmann wrote:
> >> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil 
> wrote:
> >>
> >> Hi,
> >>
> >> Can someone from the core team request infra to create a feature branch
> for the Proof of Concept we agreed to do during API SIG forum session [1] at
> Vancouver?
> >>
> >> Thanks,
> >> Gilles
> >>
> >> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum
> > You can do this through the releases repo now. See the README for
> instructions.
> >
> > Doug
>
> Great, thanks Doug!
>
> What about the UUID associated? Do I generate one?:
>
> branches:
>- name: feature/graphql
>  location:
>openstack/neutron: 
>

This needs to be a valid commit hash.
You can specify the latest commit ID of the neutron repo.
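
For illustration, the finished stanza could look like the following (the SHA
here is only a placeholder; use the actual commit you want to branch from,
e.g. the current tip of neutron's master):

    branches:
      - name: feature/graphql
        location:
          # must be an existing commit on openstack/neutron; this value is
          # just a placeholder for illustration
          openstack/neutron: 1234567890abcdef1234567890abcdef12345678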

Akihiro


>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Ed Leafe
On Jun 4, 2018, at 2:10 PM, Doug Hellmann  wrote:
> 
>> Those rules were added because we wanted to avoid the appearance of one 
>> company implementing features that would only be beneficial to it. This 
>> arose from concerns in the early days when Rackspace was the dominant 
>> contributor: many of the other companies involved in OpenStack were worried 
>> that they would be investing their workers in a project that would only 
>> benefit Rackspace. As far as I know, there were never specific cases where 
>> Rackspace or any other company tried to push features in that no one else 
>> supported.
>> 
>> So even if now it doesn't seem that there is a problem, and we could remove 
>> these restrictions without ill effect, it just seems prudent to keep them. 
>> If a project is so small that the majority of its contributors/cores are 
>> from one company, maybe it should be an internal project for that company, 
>> and not a community project.
>> 
>> -- Ed Leafe
> 
> Where was the rule added, though? I am aware of some individual teams
> with the rule, but AFAIK it was never a global rule. It's certainly not
> in any of the projects for which I am currently a core reviewer.

If you're looking for a reference to a particular bit of governance, I can't 
help you there. But being one of the Nova cores who worked for Rackspace back 
then, I was part of many such discussions, and can tell you that Rackspace was 
very conscious of not wanting to appear to be dictating the direction, and that 
this agreement not to approve code committed by other Rackers was an important 
part of that.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core)

2018-06-04 Thread Gilles Dubreuil



On 04/06/18 22:20, Doug Hellmann wrote:

On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil  wrote:

Hi,

Can someone from the core team request infra to create a feature branch for the 
Proof of Concept we agreed to do during API SIG forum session [1] at Vancouver?

Thanks,
Gilles

[1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum

You can do this through the releases repo now. See the README for instructions.

Doug


Great, thanks Doug!

What about the UUID associated? Do I generate one?:

branches:
  - name: feature/graphql
    location:
  openstack/neutron: 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Rochelle Grober

Zane Bitter wrote:
> On 31/05/18 14:35, Julia Kreger wrote:
> > Back to the topic of nitpicking!
> >
> > I virtually sat down with Doug today and we hammered out the positive
> > aspects that we feel like are the things that we as a community want
> > to see as part of reviews coming out of this effort. The principles
> > change[1] in governance has been updated as a result.
> >
> > I think we are at a point where we have to state high-level
> > principles, and then also update guidelines or other context-providing
> > documentation to reinforce some of the items covered in this
> > discussion... not just to educate new contributors, but to serve as a
> > checkpoint for existing reviewers when making the decision as to how
> > to vote on a change set. The question then becomes where would such
> > guidelines or documentation best fit?
> 
> I think the contributor guide is the logical place for it. Kendall pointed 
> out this
> existing section:
> 
> https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes
> 
> It could go in there, or perhaps we separate out the parts about when to use
> which review scores into a separate page from the mechanics of how to use
> Gerrit.
> 
> > Should we explicitly detail the
> > cause/effect that occurs? Should we convey contributor perceptions, or
> > maybe even just link to this thread as there has been a massive amount
> > of feedback raising valid cases, points, and frustrations.
> >
> > Personally, I'd lean towards a blended approach, but the question of
> > where is one I'm unsure of. Thoughts?
> 
> Let's crowdsource a set of heuristics that reviewers and contributors should
> keep in mind when they're reviewing or having their changes reviewed. I
> made a start on collecting ideas from this and past threads, as well as my own
> reviewing experience, into a document that I've presumptuously titled "How
> to Review Changes the OpenStack Way" (but might be more accurately called
> "The Frank Sinatra Guide to Code Review"
> at the moment):
> 
> https://etherpad.openstack.org/p/review-the-openstack-way
> 
> It's in an etherpad to make it easier for everyone to add their suggestions
> and comments (folks in #openstack-tc have made some tweaks already).
> After a suitable interval has passed to collect feedback, I'll turn this into 
> a
> contributor guide change.

I'd suggest including some real examples of Good/Not Good in the document, or 
maybe in an addendum.  Since we have many non-native speakers in our 
community, examples are like pictures -- worth a thousand foreign words ;-)

Maybe Zhipeng has a few favorites to supply.  I would suggest showing both the 
score and the comment that goes with it.  In some cases, the example would 
show how to score and avoid nitpicking; in others, a valid score but comments 
that may or may not be reasonable for that score.

--Rocky
 
> Have at it!
> 
> cheers,
> Zane.
> 
> > -Julia
> >
> > [1]: https://review.openstack.org/#/c/570940/
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-04 Thread Luo, Lujin
Hi Ihar,

I still cannot believe that you are leaving the OpenStack world. Words can 
hardly express how much I appreciate your guidance, whether in the OVO work or 
in Neutron as a whole. Getting to work with you was a valuable experience for 
me, both technically and non-technically. 

Please do hang out in the channels and let the comments bloom! 
I wish you all the best in your next endeavors. 

Thanks,
Lujin 

> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Tuesday, June 5, 2018 5:31 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: Miguel Lavalle 
> Subject: [openstack-dev] [neutron][stable] Stepping down from core
> 
> Hi neutrinos and all,
> 
> As some of you've already noticed, the last several months I was scaling
> down my involvement in Neutron and, more generally, OpenStack.
> I am at a point where I feel confident my disappearance won't disturb the
> project, and so I am ready to make it official.
> 
> I am stepping down from all administrative roles I so far accumulated in
> Neutron and Stable teams. I shifted my focus to another project, and so I just
> removed myself from all relevant admin groups to reflect the change.
> 
> It was a nice 4.5 year ride for me. I am very happy with what we achieved in
> all these years and a bit sad to leave. The community is the most brilliant 
> and
> compassionate and dedicated to openness group of people I was lucky to
> work with, and I am reminded daily how awesome it is.
> 
> I am far from leaving the industry, or networking, or the promise of open
> source infrastructure, so I am sure we will cross our paths once in a while
> with most of you. :) I also plan to hang out in our IRC channels and make
> snarky comments, be aware!
> 
> Thanks for the fish,
> Ihar
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] spec review day next week Tuesday 2018-06-05

2018-06-04 Thread melanie witt

On Wed, 30 May 2018 12:22:20 -0700, Melanie Witt wrote:

Howdy all,

This cycle, we have our spec freeze later than usual at milestone r-2
June 7 because of the review runways system we've been trying out. We
wanted to allow more time for spec approvals as blueprints were
completed via runways.

So, ahead of the spec freeze, let's have a spec review day next week
Tuesday June 5 to ensure we get what spec approvals we can over the line
before the freeze. Please try to make some time on Tuesday to review
some specs and thanks in advance for participating!


Reminder: the spec review day is TODAY Tuesday June 5 (or tomorrow 
depending on your time zone). Please take some time to review some nova 
specs today if you can!


Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-04 Thread Emilien Macchi
TL;DR: we made nice progress and you can checkout this demo:
https://asciinema.org/a/185533

We started the discussion back in Dublin during the last PTG. The idea of
Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud)
is to deploy a single node OpenStack where the provisioning happens on the
same node (there is no notion of {under/over}cloud).

A kind of a "packstack" or "devstack" but using TripleO which has can offer:
- composable containerized services
- composable upgrades
- composable roles
- Ansible driven deployment

The key features we have been focusing on so far are:
- a low bar for dev/testing TripleO (a single machine, e.g. a VM), with simpler
tooling
- make it fast (being able to deploy OpenStack in minutes)
- being able to make a change in OpenStack (e.g. Keystone) and test the
change immediately

The workflow that we're currently targeting is:
- deploy the system by yourself (centos7 or rhel7)
- deploy the repos, install python-tripleoclient
- run 'openstack tripleo deploy' (+ a few args)
- (optional) modify your container with a Dockerfile + Ansible
- Test your change

Status:
- tripleoclient was refactored in a way that the undercloud is actually a
special configuration of the standalone deployment (still work in
progress). We basically refactored the containerized undercloud to be more
generic and configurable for standalone.
- we can now deploy a standalone OpenStack with just Keystone +
dependencies - which takes 12 minutes total (demo here:
https://asciinema.org/a/185533 and doc in progress:
http://logs.openstack.org/27/571827/6/check/build-openstack-sphinx-docs/1885304/html/install/containers_deployment/standalone.html
)
- we have an Ansible role to push modifications to containers via a
Dockerfile: https://github.com/openstack/ansible-role-tripleo-modify-image/

What's next:
- Documentation: as you can see the documentation is still in progress (
https://review.openstack.org/#/c/571827/)
- Continuous Integration: we're working on a new CI
job: tripleo-ci-centos-7-standalone
https://trello.com/c/HInL8pNm/7-upstream-ci-testing
- Working on the standalone configuration interface, still WIP:
https://review.openstack.org/#/c/569535/
- Investigate the use case where a developer wants to prepare the
containers before the deployment

I hope this update was useful, feel free to give feedback or ask any
questions,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stepping down from core

2018-06-04 Thread Matt Riedemann

On 6/4/2018 3:31 PM, Ihar Hrachyshka wrote:

Hi neutrinos and all,

As some of you've already noticed, the last several months I was
scaling down my involvement in Neutron and, more generally, OpenStack.
I am at a point where I feel confident my disappearance won't disturb
the project, and so I am ready to make it official.

I am stepping down from all administrative roles I so far accumulated
in Neutron and Stable teams. I shifted my focus to another project,
and so I just removed myself from all relevant admin groups to reflect
the change.

It was a nice 4.5 year ride for me. I am very happy with what we
achieved in all these years and a bit sad to leave. The community is
the most brilliant and compassionate and dedicated to openness group
of people I was lucky to work with, and I am reminded daily how
awesome it is.

I am far from leaving the industry, or networking, or the promise of
open source infrastructure, so I am sure we will cross our paths once
in a while with most of you.:)  I also plan to hang out in our IRC
channels and make snarky comments, be aware!

Thanks for the fish,
Ihar


Ihar,

I think we mostly crossed paths over QA and stable maintenance stuff, 
but it was always a pleasure working with you and you were/are an 
extremely valuable contributor to OpenStack. I wish you the best in your 
new endeavors.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Recap - Stein Release Goals

2018-06-04 Thread Matt Riedemann

On 6/4/2018 4:20 PM, Doug Hellmann wrote:

See my comments on the other part of the thread, but I think this is too
optimistic until we add a couple of people to the review team on OSC.

Others from the OSC team who have a better perspective on how much work
is actually left may have a different opinion though?


Yeah that is definitely something I was thinking about in Vancouver.

Would a more realistic goal be to decentralize the OSC code, like the 
previous goal about how tempest plugins were done? Or similar to the 
docs being decentralized? That would spread the review load onto the 
projects that are actually writing CLIs for their resources - which they 
are already doing in their per-project clients, e.g. python-novaclient 
and python-cinderclient.
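
To make that concrete, here's a rough, hypothetical sketch of how a project
can already wire its own commands into OSC via setuptools entry points, which
is essentially the decentralization mechanism (the project, module and command
names below are made up for illustration):

# Hypothetical setup.py fragment for an imaginary "python-exampleclient";
# only the entry point group names reflect the existing OSC plugin mechanism.
from setuptools import setup

setup(
    name='python-exampleclient',
    packages=['exampleclient'],
    entry_points={
        # tells python-openstackclient where to find the plugin module
        'openstack.cli.extension': [
            'example = exampleclient.osc.plugin',
        ],
        # each command maps to a cliff Command class in the project's repo
        'openstack.example.v1': [
            'example_widget_list = exampleclient.osc.v1.widget:ListWidget',
        ],
    },
)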


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Matt Riedemann

On 6/4/2018 5:13 PM, Sean McGinnis wrote:

Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would
check here. We don't have to see if placement has been set up or if cell0
has been configured. Maybe once we have the facility in place we would
find some things worth checking, but at present I don't know what that
would be.


Here is an example from the Cinder Queens upgrade release notes:

"RBD/Ceph backends should adjust max_over_subscription_ratio to take 
into account that the driver is no longer reporting volume’s physical 
usage but it’s provisioned size."


I'm assuming you could check whether rbd is configured as a storage backend 
and, if so, whether max_over_subscription_ratio is set. If it's not set, is 
that fatal? Does the operator need to configure it before upgrading to Rocky? 
Or is it something they should consider but don't necessarily have to do - if 
the latter, there is a 'WARNING' status for those types of things.
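
As a purely illustrative sketch of the shape such a check could take (the
config dict and helper names here are made up, not real cinder or oslo.config
APIs):

# Purely illustrative sketch; the config dict layout is made up.
SUCCESS, WARNING, FAILURE = 0, 1, 2   # nova-status style exit codes


def check_rbd_over_subscription(conf):
    """Warn if an RBD backend hasn't revisited max_over_subscription_ratio."""
    for backend in conf.get('enabled_backends', []):
        opts = conf.get(backend, {})
        if 'rbd' in opts.get('volume_driver', ''):
            if 'max_over_subscription_ratio' not in opts:
                return (WARNING,
                        'Backend %s uses RBD; adjust '
                        'max_over_subscription_ratio before upgrading, since '
                        'the driver now reports provisioned size rather than '
                        'physical usage.' % backend)
    return (SUCCESS, None)


# Tiny usage example with the loaded config represented as plain dicts:
conf = {'enabled_backends': ['ceph1'],
        'ceph1': {'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver'}}
print(check_rbd_over_subscription(conf))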


Things that are good candidates for automating are anything that would 
stop the cinder-volume service from starting, or things that require 
data migrations before you can roll forward. In nova we've had blocking 
DB schema migrations for stuff like this which basically mean "you 
haven't run the online data migrations CLI yet so we're not letting you 
go any further until your homework is done".


Like I said, it's not black and white, but chances are good there are 
things that fall into these categories.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-04 Thread CARVER, PAUL
On Monday, June 04, 2018 18:47, Jay Pipes   wrote:

> Just my two cents, but the OpenStack and Linux foundations seem to be pumping
> out new "open events" at a pretty regular clip -- OpenStack Summit, OpenDev,
> Open Networking Summit, OpenStack Days, OpenInfra Days, OpenNFV summit, the
> list keeps growing... at some point, do we think that the industry as a whole
> is just going to get event overload?

Future tense? I think you could re-write "going to get event overload" into 
past tense and not be wrong.

We may be past the shoe event horizon.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Jeremy Stanley
On 2018-06-04 17:52:28 -0400 (-0400), Doug Hellmann wrote:
[...]
> I am still curious to know which teams have the policy. If it is more
> widespread than I realized, maybe it's reasonable to extend it and use
> it as the basis for a health check after all.
[...]

Not team-wide, but I have a personal policy that I try to avoid
approving a change if both the author and any other core reviews on
that change are from people paid by the same organization which
funds my time (unless I have a very good reason, and then I leave a
clear review comment when approving in such situations). It's not so
much a matter of a lack of trust on anyone's part, as a desire for
me to keep and further improve on that trust I've already built.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Jeremy Stanley
On 2018-06-04 14:28:28 -0700 (-0700), Amy Marrich wrote:
> On Mon, Jun 4, 2018 at 7:29 AM, Zane Bitter  wrote:
> > On 04/06/18 10:19, Amy Marrich wrote:
> > > [...]
> > > I'll read in more detail, but do we want to add rollcall-vote?
> > 
> > Is it used anywhere other than in the governance repo? We certainly could
> > add it, but it didn't seem like a top priority.
> 
> Not sure it is, to be honest. :)

The infra-specs repo uses it to solicit Infra Council votes; the
governance-uc, openstack-specs and transparency-policy repos also
use it for similar reasons to the governance repo. But no, it's not
common enough I'd bother to mention it in any sort of documentation
aimed at the general reviewer base.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-04 Thread Eric Fried
There has been much discussion.  We've gotten to a point of an initial
proposal and are ready for more (hopefully smaller, hopefully
conclusive) discussion.

To that end, there will be a HANGOUT tomorrow (TUESDAY, JUNE 5TH) at
1500 UTC.  Be in #openstack-placement to get the link to join.

The strawpeople outlined below and discussed in the referenced etherpad
have been consolidated/distilled into a new etherpad [1] around which
the hangout discussion will be centered.

[1] https://etherpad.openstack.org/p/placement-making-the-(up)grade

Thanks,
efried

On 06/01/2018 01:12 PM, Jay Pipes wrote:
> On 05/31/2018 02:26 PM, Eric Fried wrote:
>>> 1. Make everything perform the pivot on compute node start (which can be
>>>     re-used by a CLI tool for the offline case)
>>> 2. Make everything default to non-nested inventory at first, and provide
>>>     a way to migrate a compute node and its instances one at a time (in
>>>     place) to roll through.
>>
>> I agree that it sure would be nice to do ^ rather than requiring the
>> "slide puzzle" thing.
>>
>> But how would this be accomplished, in light of the current "separation
>> of responsibilities" drawn at the virt driver interface, whereby the
>> virt driver isn't supposed to talk to placement directly, or know
>> anything about allocations?
> FWIW, I don't have a problem with the virt driver "knowing about
> allocations". What I have a problem with is the virt driver *claiming
> resources for an instance*.
> 
> That's what the whole placement claims resources things was all about,
> and I'm not interested in stepping back to the days of long racy claim
> operations by having the compute nodes be responsible for claiming
> resources.
> 
> That said, once the consumer generation microversion lands [1], it
> should be possible to *safely* modify an allocation set for a consumer
> (instance) and move allocation records for an instance from one provider
> to another.
> 
> [1] https://review.openstack.org/#/c/565604/
> 
>> Here's a first pass:
>>
>> The virt driver, via the return value from update_provider_tree, tells
>> the resource tracker that "inventory of resource class A on provider B
>> have moved to provider C" for all applicable AxBxC.  E.g.
>>
>> [ { 'from_resource_provider': ,
>>  'moved_resources': [VGPU: 4],
>>  'to_resource_provider': 
>>    },
>>    { 'from_resource_provider': ,
>>  'moved_resources': [VGPU: 4],
>>  'to_resource_provider': 
>>    },
>>    { 'from_resource_provider': ,
>>  'moved_resources': [
>>  SRIOV_NET_VF: 2,
>>  NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
>>  NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
>>  ],
>>  'to_resource_provider': 
>>    }
>> ]
>>
>> As today, the resource tracker takes the updated provider tree and
>> invokes [1] the report client method update_from_provider_tree [2] to
>> flush the changes to placement.  But now update_from_provider_tree also
>> accepts the return value from update_provider_tree and, for each "move":
>>
>> - Creates provider C (as described in the provider_tree) if it doesn't
>> already exist.
>> - Creates/updates provider C's inventory as described in the
>> provider_tree (without yet updating provider B's inventory).  This ought
>> to create the inventory of resource class A on provider C.
> 
> Unfortunately, right here you'll introduce a race condition. As soon as
> this operation completes, the scheduler will have the ability to throw
> new instances on provider C and consume the inventory from it that you
> intend to give to the existing instance that is consuming from provider B.
> 
>> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
> 
> For each consumer of resources on rp B, right?
> 
>> - Updates provider B's inventory.
> 
> Again, this is problematic because the scheduler will have already begun
> to place new instances on B's inventory, which could very well result in
> incorrect resource accounting on the node.
> 
> We basically need to have one giant new REST API call that accepts the
> list of "move instructions" and performs all of the instructions in a
> single transaction. :(
> 
>> (*There's a hole here: if we're splitting a glommed-together inventory
>> across multiple new child providers, as the VGPUs in the example, we
>> don't know which allocations to put where.  The virt driver should know
>> which instances own which specific inventory units, and would be able to
>> report that info within the data structure.  That's getting kinda close
>> to the virt driver mucking with allocations, but maybe it fits well
>> enough into this model to be acceptable?)
> 
> Well, it's not really the virt driver *itself* mucking with the
> allocations. It's more that the virt driver is telling something *else*
> the move instructions that it feels are needed...
> 
>> Note that the return value from update_provider_tree is optional, and
>> only used when the virt driver is indicating a "move" of 

Re: [openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-04 Thread Jay Pipes

On 06/04/2018 05:02 PM, Doug Hellmann wrote:

The most significant point of interest to the contributor
community from this section of the meeting was the apparently
overwhelming interest from companies employing contributors, as
well as 2/3 of the contributors to recent releases who responded
to the survey, to bring the PTG and summit back together as a single
event. This had come up at the meeting in Dublin as well, but in
the time since then the discussions progressed and it looks much
more likely that we will at least try re-combining the two events.


OK, so will we return to having eleventy billion different mid-cycle 
events for each project?


Personally, I've very much enjoyed the separate PTGs because I've 
actually been able to get work done at them; something that was much 
harder when the design summits were part of the overall conference.


In fact I haven't gone to the last two summit events because of what I 
perceive to be a continued trend of the summits being focused on 
marketing, buzzwords and vendor pitches/sales. An extra spoonful of the 
"edge", anyone?



We discussed several reasons, including travel expense, travel visa
difficulties, time away from home and family, and sponsorship of
the events themselves.

There are a few plans under consideration, and no firm decisions
have been made, yet. We discussed a strawman proposal to combine
the summit and PTG in April, in Denver, that would look much like
our older Summit events (from the Folsom/Grizzly time frame) with
a few days of conference and a few days of design summit, with some
overlap in the middle of the week.  The dates, overlap, and
arrangements will depend on venue availability.


Has the option of doing a single conference a year been addressed? Seems 
to me that we (the collective we) could save a lot of money not having 
to put on multiple giant events per year and instead have one.


Just my two cents, but the OpenStack and Linux foundations seem to be 
pumping out new "open events" at a pretty regular clip -- OpenStack 
Summit, OpenDev, Open Networking Summit, OpenStack Days, OpenInfra Days, 
OpenNFV summit, the list keeps growing... at some point, do we think 
that the industry as a whole is just going to get event overload?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Zane Bitter

On 04/06/18 17:52, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:

On 02/06/18 13:23, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:

On 01/06/18 12:18, Doug Hellmann wrote:


[snip]


Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?


Yeah, this part I am pretty unsure about too. For some projects it
probably is. For others it may just be an unnecessary obstacle, although
I don't think it'd actually be *un*healthy for any project, assuming a
big enough and diverse enough team (which should be a goal for the whole
community).


It feels like we would be saying that we don't trust 2 core reviewers
from the same company to put the project's goals or priorities over
their employer's.  And that doesn't feel like an assumption I would
want us to encourage through a tag meant to show the health of the
project.


Another way to look at it would be that the perception of a conflict of
interest can be just as damaging to a community as somebody actually
acting on a conflict of interest, and thus having clearly-defined rules
to manage conflicts of interest helps protect everybody (and especially
the people who could be perceived to have a conflict of interest but
aren't, in fact, acting on it).


That's a reasonable perspective. Thanks for expanding on your original
statement.


Apparently enough people see it the way you described that this is
probably not something we want to actively spread to other projects at
the moment.


I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.


At least Nova still does, judging by this comment from Matt Riedemann in 
January:


"For the record, it's not cool for two cores from the same company to be 
the sole +2s on a change contributed by the same company. Pretty 
standard operating procedure."


(on https://review.openstack.org/#/c/523958/18)

When this thread started I looked for somewhere that was documented more 
permanently, but I didn't find it.



The appealing part of the idea to me was that we could stop pretending
that the results of our mindless script are objective - despite the fact
that both the subset of information to rely on and the limits in the
script were chosen by someone, in an essentially arbitrary way - and let
the decision rest on the expertise of those who are closest to the
project (and therefore have the most information), while aligning their
incentives with the needs of users so that they're not being asked to
keep their own score. I'm always on the lookout for opportunities to do
that, so I felt like I had to at least float it.

The alignment goes both ways though, and if we'd be creating an
incentive to extend the coverage of a policy that is already
controversial then this is not the way forward.

cheers,
Zane.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Sean McGinnis

Adding back the openstack-operators list that Matt added.


On 06/04/2018 05:13 PM, Sean McGinnis wrote:

On 06/04/2018 04:17 PM, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500:

On 6/4/2018 1:07 PM, Sean McGinnis wrote:

Python 3 First
==

One of the things brought up in the session was picking things that bring
excitement and are obvious benefits to deployers and users of OpenStack
services. While this one is maybe not as immediately obvious, I think this
is something that will end up helping deployers and also falls into the tech
debt reduction category that will help us move quicker long term.

Python 2 is going away soon, so I think we need something to help compel folks
to work on making sure we are ready to transition. This will also be a good
point to help switch the mindset over to Python 3 being the default used
everywhere, with our Python 2 compatibility being just to continue legacy
support.

I still don't really know what this goal means - we have python 3
support across the projects for the most part don't we? Based on that,
this doesn't seem like much to take an entire "goal slot" for the 
release.

We still run docs, linters, functional tests, and other jobs under
python 2 by default. Perhaps a better framing would be to call this
"Python 3 by default", because the point is to change all of those jobs
to use Python 3, and to set up all future jobs using Python 3 unless we
specifically need to run them under Python 2.

This seems like a small thing, but when we did it for Oslo we did find
code issues because the linters apply different rules and we did find
documentation build issues. The fixes were all straightforward, so I
don't expect it to mean a lot of work, but it's more than a single patch
per project. I also think using a goal is a good way to start shifting
the mindset of the contributor base into this new perspective.

Yes, that's probably a better way to word it to properly convey the goal.
Basically, all things running under Python3, project code and tooling, as
the default unless specifically geared towards Python2.


Cold Upgrade Support


The other suggestion in the Forum session related to upgrades was the addition
of "upgrade check" CLIs for each project, and I was tempted to suggest that as
my second strawman choice. For some projects that would be a very minimal or
NOOP check, so it would probably be easy to complete the goal. But ultimately
what I think would bring the most value would be the work on supporting cold
upgrade, even if it will be more of a stretch for some projects to accomplish.

I think you might be mixing two concepts here.
Not so much mixing as discussing the two and the reason why I personally
thought the one was a better goal, if you read through what was said about it.


The cold upgrade support, per my understanding, is about getting the
assert:supports-upgrade tag:

https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html

Which to me basically means the project runs a grenade job. There was
discussion in the room about grenade not being a great tool for all
projects, but no one is working on a replacement for that, so I don't
think it's really justification at this point for *not* making it a goal.


The "upgrade check" CLIs is a different thing though, which is more
about automating as much of the upgrade release notes as possible. See
the nova docs for examples on how we have used it:

https://docs.openstack.org/nova/latest/cli/nova-status.html

I'm not sure what projects you had in mind when you said, "For some
projects that would be a very minimal or NOOP check, so it would
probably be easy to complete the goal." I would expect that projects
aren't meeting the goal if they are noop'ing everything. But what can be
automated like this isn't necessarily black and white either.

What I remember from the discussion in the room was that not all
projects are going to have anything to do by hand that would block
an upgrade, but we still want all projects to have the test command.
That means many of those commands could potentially be no-ops,
right? Unless they're all going to do something like verify the
schema has been updated somehow?


Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would
check here. We don't have to see if placement has been set up or if cell0
has been configured. Maybe once we have the facility in place we would
find some things worth checking, but at present I don't know what that
would be.

Which also makes me wonder, should this be an oslo thing that projects
just plug in to for their specific checks?
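
Something like the rough sketch below is what I'm picturing: a small shared
runner that each project feeds its own check functions, where a project with
nothing to verify just registers none (all names here are hypothetical; no
such shared library exists today):

# Hypothetical sketch of a shared upgrade-check runner projects could plug
# into; every name here is made up for illustration purposes.
import sys

SUCCESS, WARNING, FAILURE = 0, 1, 2


def run_upgrade_checks(checks):
    """Run (name, callable) pairs and return the worst result as exit code."""
    worst = SUCCESS
    labels = {SUCCESS: 'Success', WARNING: 'Warning', FAILURE: 'Failure'}
    for name, check in checks:
        code, details = check()
        worst = max(worst, code)
        print('%-35s %-8s %s' % (name, labels[code], details or ''))
    return worst


if __name__ == '__main__':
    # A project with nothing to verify simply registers no checks (the NOOP
    # case); others would pass in functions like the rbd example above.
    sys.exit(run_upgrade_checks([]))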

Upgrades have been a major focus of discussion lately, especially as our
operators have been trying to get closer to the latest work upstream. This
has been an ongoing challenge.

There has also been a lot of talk about LTS releases. We've landed on fast
forward upgrade to get between 

Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Tom Barron

On 04/06/18 17:52 -0400, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:

On 02/06/18 13:23, Doug Hellmann wrote:
> Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
>> On 01/06/18 12:18, Doug Hellmann wrote:
>
> [snip]
>
>>> Is that rule a sign of a healthy team dynamic, that we would want
>>> to spread to the whole community?
>>
>> Yeah, this part I am pretty unsure about too. For some projects it
>> probably is. For others it may just be an unnecessary obstacle, although
>> I don't think it'd actually be *un*healthy for any project, assuming a
>> big enough and diverse enough team (which should be a goal for the whole
>> community).
>
> It feels like we would be saying that we don't trust 2 core reviewers
> from the same company to put the project's goals or priorities over
> their employer's.  And that doesn't feel like an assumption I would
> want us to encourage through a tag meant to show the health of the
> project.

Another way to look at it would be that the perception of a conflict of
interest can be just as damaging to a community as somebody actually
acting on a conflict of interest, and thus having clearly-defined rules
to manage conflicts of interest helps protect everybody (and especially
the people who could be perceived to have a conflict of interest but
aren't, in fact, acting on it).


That's a reasonable perspective. Thanks for expanding on your original
statement.


Apparently enough people see it the way you described that this is
probably not something we want to actively spread to other projects at
the moment.


I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.


Just some data.  Manila has the policy (except for very trivial or 
urgent commits, where one +2 +W can be sufficient).


When the project originated NetApp cores and a Mirantis core who was a 
contractor for NetApp predominated.  I doubt that there was any 
perception of biased decisions -- the PTL at the time, Ben 
Swartzlander, is the kind of guy who is quite good at doing what he 
thinks is best for the project and not listening to any folks within 
his own company who might suggest otherwise, not that I have any 
evidence of anything like that either :).  But at some point someone 
suggested that our +2 +W rule, already in place, be augmented with a 
requirement that the two +2s come from different affiliations and the 
rule was adopted.


So far that seems to work OK though affiliations have shifted and 
NetApp cores are no longer quantitatively dominant in the project. 
There are three companies with two cores and so far as I can see they 
don't tend to vote together more than any other two cores, on the one 
hand, but on the other hand it isn't hard to get another core +2 if a 
change is ready to be merged.


None of this is intended as an argument that this rule be expanded to 
other projects, it's just data as I said.


-- Tom



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Sean McGinnis

On 06/04/2018 04:17 PM, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500:

On 6/4/2018 1:07 PM, Sean McGinnis wrote:

Python 3 First
==

One of the things brought up in the session was picking things that bring
excitement and are obvious benefits to deployers and users of OpenStack
services. While this one is maybe not as immediately obvious, I think this
is something that will end up helping deployers and also falls into the tech
debt reduction category that will help us move quicker long term.

Python 2 is going away soon, so I think we need something to help compel folks
to work on making sure we are ready to transition. This will also be a good
point to help switch the mindset over to Python 3 being the default used
everywhere, with our Python 2 compatibility being just to continue legacy
support.

I still don't really know what this goal means - we have python 3
support across the projects for the most part don't we? Based on that,
this doesn't seem like much to take an entire "goal slot" for the release.

We still run docs, linters, functional tests, and other jobs under
python 2 by default. Perhaps a better framing would be to call this
"Python 3 by default", because the point is to change all of those jobs
to use Python 3, and to set up all future jobs using Python 3 unless we
specifically need to run them under Python 2.

This seems like a small thing, but when we did it for Oslo we did find
code issues because the linters apply different rules and we did find
documentation build issues. The fixes were all straightforward, so I
don't expect it to mean a lot of work, but it's more than a single patch
per project. I also think using a goal is a good way to start shifting
the mindset of the contributor base into this new perspective.

Yes, that's probably a better way to word it to properly convey the goal.
Basically, all things running under Python3, project code and tooling, as
the default unless specifically geared towards Python2.


Cold Upgrade Support


The other suggestion in the Forum session related to upgrades was the addition
of "upgrade check" CLIs for each project, and I was tempted to suggest that as
my second strawman choice. For some projects that would be a very minimal or
NOOP check, so it would probably be easy to complete the goal. But ultimately
what I think would bring the most value would be the work on supporting cold
upgrade, even if it will be more of a stretch for some projects to accomplish.

I think you might be mixing two concepts here.
Not so much mixing as discussing the two and the reason why I personally
thought the one was a better goal, if you read through what was said about it.


The cold upgrade support, per my understanding, is about getting the
assert:supports-upgrade tag:

https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html

Which to me basically means the project runs a grenade job. There was
discussion in the room about grenade not being a great tool for all
projects, but no one is working on a replacement for that, so I don't
think it's really justification at this point for *not* making it a goal.

The "upgrade check" CLIs is a different thing though, which is more
about automating as much of the upgrade release notes as possible. See
the nova docs for examples on how we have used it:

https://docs.openstack.org/nova/latest/cli/nova-status.html

I'm not sure what projects you had in mind when you said, "For some
projects that would be a very minimal or NOOP check, so it would
probably be easy to complete the goal." I would expect that projects
aren't meeting the goal if they are noop'ing everything. But what can be
automated like this isn't necessarily black and white either.

What I remember from the discussion in the room was that not all
projects are going to have anything to do by hand that would block
an upgrade, but we still want all projects to have the test command.
That means many of those commands could potentially be no-ops,
right? Unless they're all going to do something like verify the
schema has been updated somehow?


Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would
check here. We don't have to see if placement has been set up or if cell0
has been configured. Maybe once we have the facility in place we would
find some things worth checking, but at present I don't know what that
would be.

Which also makes me wonder, should this be an oslo thing that projects
just plug in to for their specific checks?


Upgrades have been a major focus of discussion lately, especially as our
operators have been trying to get closer to the latest work upstream. This has
been an ongoing challenge.

There has also been a lot of talk about LTS releases. We've landed on fast
forward upgrade to get between several releases, but I think improving upgrades
eases the way both for easier and more frequent upgrades and also getting to
the point 

Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Sean McGinnis

I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.



I think it's been an unwritten "guideline" in Cinder, but not a hard
rule.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:
> On 02/06/18 13:23, Doug Hellmann wrote:
> > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
> >> On 01/06/18 12:18, Doug Hellmann wrote:
> > 
> > [snip]
> > 
> >>> Is that rule a sign of a healthy team dynamic, that we would want
> >>> to spread to the whole community?
> >>
> >> Yeah, this part I am pretty unsure about too. For some projects it
> >> probably is. For others it may just be an unnecessary obstacle, although
> >> I don't think it'd actually be *un*healthy for any project, assuming a
> >> big enough and diverse enough team (which should be a goal for the whole
> >> community).
> > 
> > It feels like we would be saying that we don't trust 2 core reviewers
> > from the same company to put the project's goals or priorities over
> > their employer's.  And that doesn't feel like an assumption I would
> > want us to encourage through a tag meant to show the health of the
> > project.
> 
> Another way to look at it would be that the perception of a conflict of 
> interest can be just as damaging to a community as somebody actually 
> acting on a conflict of interest, and thus having clearly-defined rules 
> to manage conflicts of interest helps protect everybody (and especially 
> the people who could be perceived to have a conflict of interest but 
> aren't, in fact, acting on it).

That's a reasonable perspective. Thanks for expanding on your original
statement.

> Apparently enough people see it the way you described that this is 
> probably not something we want to actively spread to other projects at 
> the moment.

I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.

> The appealing part of the idea to me was that we could stop pretending 
> that the results of our mindless script are objective - despite the fact 
> that both the subset of information to rely on and the limits in the 
> script were chosen by someone, in an essentially arbitrary way - and let 
> the decision rest on the expertise of those who are closest to the 
> project (and therefore have the most information), while aligning their 
> incentives with the needs of users so that they're not being asked to 
> keep their own score. I'm always on the lookout for opportunities to do 
> that, so I felt like I had to at least float it.
> 
> The alignment goes both ways though, and if we'd be creating an 
> incentive to extend the coverage of a policy that is already 
> controversial then this is not the way forward.
> 
> cheers,
> Zane.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Zane Bitter

On 02/06/18 13:23, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:

On 01/06/18 12:18, Doug Hellmann wrote:


[snip]


Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?


Yeah, this part I am pretty unsure about too. For some projects it
probably is. For others it may just be an unnecessary obstacle, although
I don't think it'd actually be *un*healthy for any project, assuming a
big enough and diverse enough team (which should be a goal for the whole
community).


It feels like we would be saying that we don't trust 2 core reviewers
from the same company to put the project's goals or priorities over
their employer's.  And that doesn't feel like an assumption I would
want us to encourage through a tag meant to show the health of the
project.


Another way to look at it would be that the perception of a conflict of 
interest can be just as damaging to a community as somebody actually 
acting on a conflict of interest, and thus having clearly-defined rules 
to manage conflicts of interest helps protect everybody (and especially 
the people who could be perceived to have a conflict of interest but 
aren't, in fact, acting on it).


Apparently enough people see it the way you described that this is 
probably not something we want to actively spread to other projects at 
the moment.


The appealing part of the idea to me was that we could stop pretending 
that the results of our mindless script are objective - despite the fact 
that both the subset of information to rely on and the limits in the 
script were chosen by someone, in an essentially arbitrary way - and let 
the decision rest on the expertise of those who are closest to the 
project (and therefore have the most information), while aligning their 
incentives with the needs of users so that they're not being asked to 
keep their own score. I'm always on the lookout for opportunities to do 
that, so I felt like I had to at least float it.


The alignment goes both ways though, and if we'd be creating an 
incentive to extend the coverage of a policy that is already 
controversial then this is not the way forward.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Amy Marrich
Zane,

Not sure it is, to be honest. :)

Amy (spotz)

On Mon, Jun 4, 2018 at 7:29 AM, Zane Bitter  wrote:

> On 04/06/18 10:19, Amy Marrich wrote:
>
>> Zane,
>>
>> I'll read in more detail, but do we want to add rollcall-vote?
>>
>
> Is it used anywhere other than in the governance repo? We certainly could
> add it, but it didn't seem like a top priority.
>
> - ZB
>
> Amy (spotz)
>>
>>
>> On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter  wrote:
>>
>> On 31/05/18 14:35, Julia Kreger wrote:
>>
>> Back to the topic of nitpicking!
>>
>> I virtually sat down with Doug today and we hammered out the
>> positive
>> aspects that we feel like are the things that we as a community
>> want
>> to see as part of reviews coming out of this effort. The
>> principles
>> change[1] in governance has been updated as a result.
>>
>> I think we are at a point where we have to state high level
>> principles, and then also update guidelines or other context
>> providing
>> documentation to re-enforce some of items covered in this
>> discussion... not just to educate new contributors, but to serve
>> as a
>> checkpoint for existing reviewers when making the decision as to
>> how
>> to vote on a change set. The question then becomes where would such
>> guidelines or documentation best fit?
>>
>>
>> I think the contributor guide is the logical place for it. Kendall
>> pointed out this existing section:
>>
>> https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes
>>
>> It could go in there, or perhaps we separate out the parts about
>> when to use which review scores into a separate page from the
>> mechanics of how to use Gerrit.
>>
>> Should we explicitly detail the
>> cause/effect that occurs? Should we convey contributor
>> perceptions, or
>> maybe even just link to this thread as there has been a massive
>> amount
>> of feedback raising valid cases, points, and frustrations.
>>
>> Personally, I'd lean towards a blended approach, but the question
>> of
>> where is one I'm unsure of. Thoughts?
>>
>>
>> Let's crowdsource a set of heuristics that reviewers and
>> contributors should keep in mind when they're reviewing or having
>> their changes reviewed. I made a start on collecting ideas from this
>> and past threads, as well as my own reviewing experience, into a
>> document that I've presumptuously titled "How to Review Changes the
>> OpenStack Way" (but might be more accurately called "The Frank
>> Sinatra Guide to Code Review" at the moment):
>>
>> https://etherpad.openstack.org/p/review-the-openstack-way
>> 
>>
>> It's in an etherpad to make it easier for everyone to add their
>> suggestions and comments (folks in #openstack-tc have made some
>> tweaks already). After a suitable interval has passed to collect
>> feedback, I'll turn this into a contributor guide change.
>>
>> Have at it!
>>
>> cheers,
>> Zane.
>>
>>
>> -Julia
>>
>> [1]: https://review.openstack.org/#/c/570940/
>> 
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Recap - Stein Release Goals

2018-06-04 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-06-04 15:26:28 -0500:
> On 5/31/2018 3:59 PM, Sean McGinnis wrote:
> > We were also able to already identify some possible goals for the T cycle:
> > 
> > - Move all CLIs to python-openstackclient
> 
> My understanding was this is something we could do for Stein provided 
> some heavy refactoring in the SDK and OSC got done first in Rocky. Or is 
> that being too aggressive?
> 

See my comments on the other part of the thread, but I think this is too
optimistic until we add a couple of people to the review team on OSC.

Others from the OSC team who have a better perspective on how much work
is actually left may have a different opinion though?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500:
> On 6/4/2018 1:07 PM, Sean McGinnis wrote:
> > Python 3 First
> > ==
> > 
> > One of the things brought up in the session was picking things that bring
> > excitement and are obvious benefits to deployers and users of OpenStack
> > services. While this one is maybe not as immediately obvious, I think this
> > is something that will end up helping deployers and also falls into the tech
> > debt reduction category that will help us move quicker long term.
> > 
> > Python 2 is going away soon, so I think we need something to help compel 
> > folks
> > to work on making sure we are ready to transition. This will also be a good
> > point to help switch the mindset over to Python 3 being the default used
> > everywhere, with our Python 2 compatibility being just to continue legacy
> > support.
> 
> I still don't really know what this goal means - we have python 3 
> support across the projects for the most part don't we? Based on that, 
> this doesn't seem like much to take an entire "goal slot" for the release.

We still run docs, linters, functional tests, and other jobs under
python 2 by default. Perhaps a better framing would be to call this
"Python 3 by default", because the point is to change all of those jobs
to use Python 3, and to set up all future jobs using Python 3 unless we
specifically need to run them under Python 2.

This seems like a small thing, but when we did it for Oslo we did find
code issues because the linters apply different rules and we did find
documentation build issues. The fixes were all straightforward, so I
don't expect it to mean a lot of work, but it's more than a single patch
per project. I also think using a goal is a good way to start shifting
the mindset of the contributor base into this new perspective.
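
To give a flavour of what tends to surface, here is an illustrative
snippet (not from any particular project) that is clean under a Python 2
flake8 run but breaks as soon as the linter or the code runs under
Python 3:

    # Python 2-only idioms that a Python 3 linter/job flags immediately:
    #
    #     for key, value in values.iteritems():   # .iteritems() is gone in Python 3
    #         print "%s=%s" % (key, value)        # print statement is a SyntaxError
    #
    # The Python 3 friendly equivalent:

    def describe(values):
        for key, value in values.items():
            print("%s=%s" % (key, value))

    if __name__ == "__main__":
        describe({"release": "rocky", "python": "3"})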

> > 
> > Cold Upgrade Support
> > 
> > 
> > The other suggestion in the Forum session related to upgrades was the 
> > addition
> > of "upgrade check" CLIs for each project, and I was tempted to suggest that 
> > as
> > my second strawman choice. For some projects that would be a very minimal or
> > NOOP check, so it would probably be easy to complete the goal. But 
> > ultimately
> > what I think would bring the most value would be the work on supporting cold
> > upgrade, even if it will be more of a stretch for some projects to 
> > accomplish.
> 
> I think you might be mixing two concepts here.
> 
> The cold upgrade support, per my understanding, is about getting the 
> assert:supports-upgrade tag:
> 
> https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html
> 
> Which to me basically means the project runs a grenade job. There was 
> discussion in the room about grenade not being a great tool for all 
> projects, but no one is working on a replacement for that, so I don't 
> think it's really justification at this point for *not* making it a goal.
> 
> The "upgrade check" CLIs is a different thing though, which is more 
> about automating as much of the upgrade release notes as possible. See 
> the nova docs for examples on how we have used it:
> 
> https://docs.openstack.org/nova/latest/cli/nova-status.html
> 
> I'm not sure what projects you had in mind when you said, "For some 
> projects that would be a very minimal or NOOP check, so it would 
> probably be easy to complete the goal." I would expect that projects 
> aren't meeting the goal if they are noop'ing everything. But what can be 
> automated like this isn't necessarily black and white either.

What I remember from the discussion in the room was that not all
projects are going to have anything to do by hand that would block
an upgrade, but we still want all projects to have the test command.
That means many of those commands could potentially be no-ops,
right? Unless they're all going to do something like verify the
schema has been updated somehow?

> 
> > 
> > Upgrades have been a major focus of discussion lately, especially as our
> > operators have been trying to get closer to the latest work upstream. This 
> > has
> > been an ongoing challenge.
> > 
> > There has also been a lot of talk about LTS releases. We've landed on fast
> > forward upgrade to get between several releases, but I think improving 
> > upgrades
> > eases the way both for easier and more frequent upgrades and also getting to
> > the point some day where maybe we can think about upgrading over several
> > releases to be able to do something like an LTS to LTS upgrade.
> > 
> > Neither one of these upgrade goals really has a clearly defined plan that
> > projects can pick up now and start working on, but I think with those 
> > involved
> > in these areas we should be able to come up with a prescriptive plan for
> > projects to follow.
> > 
> > And it would really move our fast forward upgrade story forward.
> 
> Agreed. In the FFU Forum session at the summit I mentioned the 
> 'nova-status upgrade check' CLI 

Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

2018-06-04 Thread Arnaud Morin
Hey,

OVH is also using them as well as some custom Ansible playbooks to manage the 
deployment. But as with Red Hat, the configuration part is handled by puppet.
We are also doing some upstream contribution from time to time.

For us, the puppet modules are very stable and work very well.

Cheers,

-- 
Arnaud Morin

On 31.05.18 - 15:36, Tim Bell wrote:
> CERN uses these puppet modules too and contributes any missing functionality 
> we need upstream.
> 
> Tim
> 
> From: Alex Schultz 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Thursday, 31 May 2018 at 16:24
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules
> 
> 
> 
> On Wed, May 30, 2018 at 3:18 PM, Remo Mattei <r...@rm.ht> wrote:
> Hello all,
> I have talked to several people about this and I would love to get this 
> finalized once and for all. I have checked the OpenStack puppet modules, which 
> are mostly developed by the Red Hat team. As of right now, TripleO is using a 
> combo of Ansible and puppet to deploy, but in the next couple of releases the 
> plan is to move away from the puppet option.
> 
> 
> So the OpenStack puppet modules are maintained by others besides Red Hat; 
> however, we have been a major contributor since TripleO has relied on them for 
> some time.  That being said, as TripleO has migrated to containers built with 
> Kolla, we've adapted our deployment mechanism to include Ansible and we 
> really only use puppet for configuration generation.  Our goal for TripleO is 
> to eventually be fully containerized, which isn't something the puppet modules 
> support today and I'm not sure is on the road map.
> 
> 
> So consequently, what will be the plan of TripleO and the puppet modules?
> 
> 
> As TripleO moves forward, we may continue to support deployments via puppet 
> modules but the amount of testing that we'll be including upstream will 
> mostly exercise external Ansible integrations (example, ceph-ansible, 
> openshift-ansible, etc) and Kolla containers.  As of Queens, most of the 
> services deployed via TripleO are deployed via containers and not on 
> baremetal via puppet. We no longer support deploying OpenStack services on 
> baremetal via the puppet modules and will likely be removing this support in 
> the code in Stein.  The end goal will likely be moving away from puppet 
> modules within TripleO if we can solve the backwards compatibility and 
> configuration generation via other mechanisms.  We will likely recommend 
> leveraging external Ansible role calls rather than including puppet modules 
> and using those to deploy services that are not inherently supported by 
> TripleO.  I can't really give a time frame as we are still working out the 
> details, but it is likely that over the next several cycles we'll see a 
> reduction in the dependence of puppet in TripleO and an increase in 
> leveraging available Ansible roles.
> 
> 
> From the Puppet OpenStack standpoint, others are stepping up to continue to 
> ensure the modules are available and I know I'll keep an eye on them for as 
> long as TripleO leverages some of the functionality.  The Puppet OpenStack 
> modules are very stable but I'm not sure without additional community folks 
> stepping up that there will be support for newer functionality being added by 
> the various OpenStack projects.  I'm sure others can chime in here on their 
> usage/plans for the Puppet OpenStack modules.
> 
> 
> Hope that helps.
> 
> 
> Thanks,
> -Alex
> 
> 
> Thanks
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] summary of joint leadership meeting from 20 May

2018-06-04 Thread Doug Hellmann
On 20 May, 2018 the OpenStack foundation staff, board, technical
committee, and user committee held a joint leadership meeting at
the summit venue in Vancouver to discuss current events and issues
related to the OpenStack community.  Alan Clark, Melvin Hillsman,
and I chaired the meeting together, though Alan did most of the
coordination work during the meeting.

Because the board was present, I want to give the disclaimer that
this summary is from my perspective and based on my notes. It does
not in any way reflect an official summary of the meeting.

I will also give a further disclaimer that some of these notes may
be out of order because the actual agenda [1] was changed on-site
to accommodate some participants who could not be present for the
entire day.

We opened the day by welcoming and introducing new members of the
3 groups. Ruan HE is a new board member from Tencent. Graham Hayes,
Mohammed Naser, and Zane Bitter are the 3 newly elected TC members
this term. Amy Marrich and Yih Leong Sun are newly elected to the
UC.

In particular, the discussion of fixing a long-standing issue with
a mistake in the bylaws was moved to the start of the day. The
problem has to do with section 3.b.i of the appendix that
describes the Technical Committee, which reads "An Individual Member
is an ATC who has..." but should read "An ATC is an Individual
Member who has..."[2]  Everyone agreed that is clearly a mistake, but
because of where it appears in the bylaws actually fixing it will
require a vote of the foundation membership. The board approved a
resolution authorizing the secretary to include a fix on the ballot
for the next board elections. There may be other bylaws changes at
the same time to address the expansion of the foundation into other
strategic areas, since the current bylaws do not cover
the governance structure for any projects other than OpenStack
itself.  None of those other changes have been discussed in detail,
yet.

Next, the foundation executive staff gave an update on several
foundation- and event-related topics. I didn't take a lot of notes
during this section, but a few things stood out for me:

1. They are going to change the user survey to be an annual event,
   in part to avoid survey fatigue.
2. The call for proposals for the Berlin Summit is open already.
3. After Berlin, the next summit will be in Denver, but downtown
   at the convention center, not the site of the PTG.

During this section of the meeting Kandan Kathirvel of AT&T mentioned
a desire to lower the cost of platinum membership because platinum
members are already contributing significant developer resources.
This was not discussed at any real length, but it may come up again
at a regular board meeting, where a change like that could be
considered formally.

After the foundation update, Melvin and Dims gave an update on
OpenLab [3], a project to test the integration of various cloud ecosystem
tools, including running Kubernetes on OpenStack and various cloud
management libraries that communicate with OpenStack.

Next, I gave a presentation discussing contribution levels in individual
projects to highlight the impact an individual contributor can have on
OpenStack [4].

The purpose of raising this topic was to get input into why the
community's "help wanted" areas are not seeing significant
contributions. We spent a good amount of time talking about the
issue, and several ideas were put forward. These ranged from us not
emphasizing the importance and value of mentoring enough to not
explaining to contributing companies why these gaps in the community
were important from a business perspective. At the end of the
discussion we had volunteers from the board (Allison Randal, Alan
Clark, Prakash Ramchandran, Chris Price, and Johan Christenson) and
TC (Sean, Graham, Dims, and Julia) ready to work on reframing the
contribution gaps in terms of "job descriptions" that explain in
more detail what is needed and what benefit those roles will provide.
As mentioned in this week's TC update, Dims has started working on
a template already.

Next, Melvin and Matt Van Winkle gave a presentation on the work
the user committee has been doing [5]. They covered the status of the
UC-led working groups and both short and long term goals.

Next, the foundation staff covered their plans for in-person meetings
during 2019. The most significant point of interest to the contributor
community from this section of the meeting was the apparently
overwhelming interest from companies employing contributors, as
well as 2/3 of the contributors to recent releases who responded
to the survey, to bring the PTG and summit back together as a single
event. This had come up at the meeting in Dublin as well, but in
the time since then the discussions progressed and it looks much
more likely that we will at least try re-combining the two events.
We discussed several reasons, including travel expense, travel visa
difficulties, time away from home and family, 

Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Matt Riedemann
+openstack-operators since we need to have more operator feedback in our 
community-wide goals decisions.


+Melvin as my elected user committee person for the same reasons as 
adding operators into the discussion.


On 6/4/2018 3:38 PM, Matt Riedemann wrote:

On 6/4/2018 1:07 PM, Sean McGinnis wrote:

Python 3 First
==

One of the things brought up in the session was picking things that bring
excitement and are obvious benefits to deployers and users of OpenStack
services. While this one is maybe not as immediately obvious, I think 
this
is something that will end up helping deployers and also falls into 
the tech

debt reduction category that will help us move quicker long term.

Python 2 is going away soon, so I think we need something to help 
compel folks
to work on making sure we are ready to transition. This will also be a 
good

point to help switch the mindset over to Python 3 being the default used
everywhere, with our Python 2 compatibility being just to continue legacy
support.


I still don't really know what this goal means - we have python 3 
support across the projects for the most part don't we? Based on that, 
this doesn't seem like much to take an entire "goal slot" for the release.




Cold Upgrade Support


The other suggestion in the Forum session related to upgrades was the 
addition
of "upgrade check" CLIs for each project, and I was tempted to suggest 
that as
my second strawman choice. For some projects that would be a very 
minimal or
NOOP check, so it would probably be easy to complete the goal. But 
ultimately
what I think would bring the most value would be the work on 
supporting cold
upgrade, even if it will be more of a stretch for some projects to 
accomplish.


I think you might be mixing two concepts here.

The cold upgrade support, per my understanding, is about getting the 
assert:supports-upgrade tag:


https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html 



Which to me basically means the project runs a grenade job. There was 
discussion in the room about grenade not being a great tool for all 
projects, but no one is working on a replacement for that, so I don't 
think it's really justification at this point for *not* making it a goal.


The "upgrade check" CLIs is a different thing though, which is more 
about automating as much of the upgrade release notes as possible. See 
the nova docs for examples on how we have used it:


https://docs.openstack.org/nova/latest/cli/nova-status.html

I'm not sure what projects you had in mind when you said, "For some 
projects that would be a very minimal or NOOP check, so it would 
probably be easy to complete the goal." I would expect that projects 
aren't meeting the goal if they are noop'ing everything. But what can be 
automated like this isn't necessarily black and white either.
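
For anyone who hasn't seen one of these, here is a minimal sketch of what
a per-project "status upgrade check" command could look like, loosely
modeled on nova-status. The check names, messages and command names below
are purely illustrative and not any project's real code:

    import sys

    SUCCESS, WARNING, FAILURE = 0, 1, 2

    def check_placement_microversion():
        # A real check would query the placement endpoint and verify the
        # minimum microversion this release depends on is available.
        return SUCCESS, ""

    def check_online_data_migrations():
        # A real check might look for rows an online data migration should
        # have converted and tell the operator to run it before upgrading.
        return WARNING, "run 'myproject-manage db online_data_migrations' first"

    def upgrade_check():
        worst = SUCCESS
        for check in (check_placement_microversion, check_online_data_migrations):
            code, details = check()
            print("%s: %d %s" % (check.__name__, code, details))
            worst = max(worst, code)
        return worst  # 0 = ready, 1 = warnings, 2 = blockers

    if __name__ == "__main__":
        sys.exit(upgrade_check())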




Upgrades have been a major focus of discussion lately, especially as our
operators have been trying to get closer to the latest work upstream. 
This has

been an ongoing challenge.

There has also been a lot of talk about LTS releases. We've landed on 
fast
forward upgrade to get between several releases, but I think improving 
upgrades
eases the way both for easier and more frequent upgrades and also 
getting to

the point some day where maybe we can think about upgrading over several
releases to be able to do something like an LTS to LTS upgrade.

Neither one of these upgrade goals really has a clearly defined plan that
projects can pick up now and start working on, but I think with those 
involved

in these areas we should be able to come up with a prescriptive plan for
projects to follow.

And it would really move our fast forward upgrade story forward.


Agreed. In the FFU Forum session at the summit I mentioned the 
'nova-status upgrade check' CLI and a lot of people in the room had 
never heard of it because they are still on Mitaka before we added that 
CLI (new in Ocata). But they sounded really interested in it and said 
they wished other projects were doing that to help ease upgrades so they 
won't be stuck on older unmaintained releases for so long. So anything 
we can do to improve upgrades, including our testing for them, will help 
make FFU better.




Next Steps
==

I'm hoping with a strawman proposal we have a basis for debating the 
merits of
these and getting closer to being able to officially select Stein 
goals. We
still have some time, but I would like to avoid making late-cycle 
selections so

teams can start planning ahead for what will need to be done in Stein.

Please feel free to promote other ideas for goals. That would be a 
good way for
us to weigh the pro's and con's between these and whatever else you 
have in
mind. Then hopefully we can come to some consensus and work towards 
clearly
defining what needs to be done and getting things well documented for 
teams to

pick up as soon as they wrap up Rocky (or sooner).


I still want to lobby for a 

Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Matt Riedemann

On 6/4/2018 1:07 PM, Sean McGinnis wrote:

Python 3 First
==

One of the things brought up in the session was picking things that bring
excitement and are obvious benefits to deployers and users of OpenStack
services. While this one is maybe not as immediately obvious, I think this
is something that will end up helping deployers and also falls into the tech
debt reduction category that will help us move quicker long term.

Python 2 is going away soon, so I think we need something to help compel folks
to work on making sure we are ready to transition. This will also be a good
point to help switch the mindset over to Python 3 being the default used
everywhere, with our Python 2 compatibility being just to continue legacy
support.


I still don't really know what this goal means - we have python 3 
support across the projects for the most part don't we? Based on that, 
this doesn't seem like much to take an entire "goal slot" for the release.




Cold Upgrade Support


The other suggestion in the Forum session related to upgrades was the addition
of "upgrade check" CLIs for each project, and I was tempted to suggest that as
my second strawman choice. For some projects that would be a very minimal or
NOOP check, so it would probably be easy to complete the goal. But ultimately
what I think would bring the most value would be the work on supporting cold
upgrade, even if it will be more of a stretch for some projects to accomplish.


I think you might be mixing two concepts here.

The cold upgrade support, per my understanding, is about getting the 
assert:supports-upgrade tag:


https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html

Which to me basically means the project runs a grenade job. There was 
discussion in the room about grenade not being a great tool for all 
projects, but no one is working on a replacement for that, so I don't 
think it's really justification at this point for *not* making it a goal.


The "upgrade check" CLIs is a different thing though, which is more 
about automating as much of the upgrade release notes as possible. See 
the nova docs for examples on how we have used it:


https://docs.openstack.org/nova/latest/cli/nova-status.html

I'm not sure what projects you had in mind when you said, "For some 
projects that would be a very minimal or NOOP check, so it would 
probably be easy to complete the goal." I would expect that projects 
aren't meeting the goal if they are noop'ing everything. But what can be 
automated like this isn't necessarily black and white either.




Upgrades have been a major focus of discussion lately, especially as our
operators have been trying to get closer to the latest work upstream. This has
been an ongoing challenge.

There has also been a lot of talk about LTS releases. We've landed on fast
forward upgrade to get between several releases, but I think improving upgrades
eases the way both for easier and more frequent upgrades and also getting to
the point some day where maybe we can think about upgrading over several
releases to be able to do something like an LTS to LTS upgrade.

Neither one of these upgrade goals really has a clearly defined plan that
projects can pick up now and start working on, but I think with those involved
in these areas we should be able to come up with a prescriptive plan for
projects to follow.

And it would really move our fast forward upgrade story forward.


Agreed. In the FFU Forum session at the summit I mentioned the 
'nova-status upgrade check' CLI and a lot of people in the room had 
never heard of it because they are still on Mitaka before we added that 
CLI (new in Ocata). But they sounded really interested in it and said 
they wished other projects were doing that to help ease upgrades so they 
won't be stuck on older unmaintained releases for so long. So anything 
we can do to improve upgrades, including our testing for them, will help 
make FFU better.




Next Steps
==

I'm hoping with a strawman proposal we have a basis for debating the merits of
these and getting closer to being able to officially select Stein goals. We
still have some time, but I would like to avoid making late-cycle selections so
teams can start planning ahead for what will need to be done in Stein.

Please feel free to promote other ideas for goals. That would be a good way for
us to weigh the pro's and con's between these and whatever else you have in
mind. Then hopefully we can come to some consensus and work towards clearly
defining what needs to be done and getting things well documented for teams to
pick up as soon as they wrap up Rocky (or sooner).


I still want to lobby for a push to move off the old per-project CLIs 
and close the gap on using python-openstackclient CLI for everything, 
but I'm unclear on what the roadmap is for the major refactor with the 
SDK Monty was talking about in Vancouver. From a new user perspective, 
the 2000 individual CLIs to get 

[openstack-dev] [neutron][stable] Stepping down from core

2018-06-04 Thread Ihar Hrachyshka
Hi neutrinos and all,

As some of you've already noticed, the last several months I was
scaling down my involvement in Neutron and, more generally, OpenStack.
I am at a point where I feel confident my disappearance won't disturb
the project, and so I am ready to make it official.

I am stepping down from all administrative roles I so far accumulated
in Neutron and Stable teams. I shifted my focus to another project,
and so I just removed myself from all relevant admin groups to reflect
the change.

It was a nice 4.5 year ride for me. I am very happy with what we
achieved in all these years and a bit sad to leave. The community is
the most brilliant and compassionate and dedicated to openness group
of people I was lucky to work with, and I am reminded daily how
awesome it is.

I am far from leaving the industry, or networking, or the promise of
open source infrastructure, so I am sure we will cross our paths once
in a while with most of you. :) I also plan to hang out in our IRC
channels and make snarky comments, be aware!

Thanks for the fish,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Recap - Stein Release Goals

2018-06-04 Thread Matt Riedemann

On 5/31/2018 3:59 PM, Sean McGinnis wrote:

We were also able to already identify some possible goals for the T cycle:

- Move all CLIs to python-openstackclient


My understanding was this is something we could do for Stein provided 
some heavy refactoring in the SDK and OSC got done first in Rocky. Or is 
that being too aggressive?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Bug Day June 7th @ 1:00 PM to 2:00 PM UTC

2018-06-04 Thread Michael Turek

Hey all,

The first Thursday of the month is approaching which means it's time for 
a bug day yet again!


As we discussed last time, we will shorten the call to an hour. Below is 
a proposed agenda, location, and time [0]. If you'd like to adjust or 
propose topics, please let me know.


Thanks!
Mike Turek 

[0] https://etherpad.openstack.org/p/ironic-bug-day-june-2018


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Cores review open changes in glance-specs, please

2018-06-04 Thread Erno Kuvaja
Hi all,

This week is the deadline week for Glance specs for Rocky!

I'd like to get an ack (in the form of a review in Gerrit) for the open specs [0],
and I will start to Workflow +1 them as appropriate during the week. The Thursday
meeting will be the last call, and anything still hanging after that will be
considered again for the S cycle.

Thanks,
Erno

[0] https://review.openstack.org/#/q/project:openstack/glance-specs+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-04 Thread Sean McGinnis
> 
> Our CI has been chugging along since June 2nd (not really related to
> the timing of your e-mail, it just so happened that we fixed another
> small problem there).  You can see the logs at
> 
>   http://logs.ci-openstack.storpool.com/
> 

Thanks Peter.

It looks like the reason the report run doesn't show StorPool reporting is
due to a mismatch on the name. The officially listed account is "StorPool CI"
according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI

But it appears on looking into this that the real CI account is "StorPool
distributed storage CI". Is that correct? If so, can you update the wiki with
the correct account name?

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-04 Thread Jay S Bryant

Peter,

Thank you for the update.  We are investigating why this shows up as 
failing in our CI tracking list.


I will hold off on the associated patch.

Thank you for the quick response!

Jay


On 6/4/2018 2:07 PM, Peter Penchev wrote:

On Sat, Jun 02, 2018 at 06:13:23PM -0500, Jay S Bryant wrote:

All,

This note is to make everyone aware that I have submitted patches for a
number of drivers that have not run 3rd party CI in 60 or more days.  The
following is a list of the drivers, how long since their CI last ran and
links to the patches which mark them as unsupported drivers:

  * DataCore CI – 99 Days - https://review.openstack.org/571533
  * Dell EMC CorpHD CI – 121 Days - https://review.openstack.org/571555
  * HGST Solutions CI – 306 Days - https://review.openstack.org/571560
  * IBM GPFS CI – 212 Days - https://review.openstack.org/571590
  * Itri Disco CI – 110 Days - https://review.openstack.org/571592
  * Nimble Storage CI – 78 Days - https://review.openstack.org/571599
  * StorPool – Unknown - https://review.openstack.org/571935
  * Vedams –HPMSA – 442 Days - https://review.openstack.org/571940
  * Brocade OpenStack – CI – 261 Days - https://review.openstack.org/571943

All of these drivers will be marked unsupported for the Rocky release and
will be removed in the Stein release if the 3rd Party CI is not returned to
a working state.

If your driver is on the list and you have questions please respond to this
thread and we can discuss what needs to be done to return support for your
driver.

Thank you for your attention to this matter!

Hi,

Thanks for taking care of Cinder by culling the herd.

The StorPool CI is, understandably, on your list, since we had some
problems with the CI infrastructure (not the driver itself) during
the month of May.  However, we fixed them on May 31st and our CI had
several successful runs then, quickly followed by a slew of failures
because of the "pip version 1" strip()/split() problem in devstack.

Our CI has been chugging along since June 2nd (not really related to
the timing of your e-mail, it just so happened that we fixed another
small problem there).  You can see the logs at

   http://logs.ci-openstack.storpool.com/

I am thinking of scheduling a rerun for all the changes that our CI
failed on (and that have not been merged or abandoned yet); this may
happen in the next couple of days.

So, thanks again for your work on this, and hopefully this message will
serve as a "we're still alive" sign.  If there is anything more we
should do, like reschedule the failed builds, please let us know.

Best regards,
Peter




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] test storyboard environment

2018-06-04 Thread Lance Bragstad
Hi all,

The StoryBoard team was nice enough to migrate existing content for all
keystone-related launchpad projects to a dev environment [0]. This gives
us the opportunity to use  StoryBoard with real content.

Log in and check it out. I'm curious to know what the rest of the team
thinks.

[0] https://storyboard-dev.openstack.org/#!/project_group/46



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Doug Hellmann
Excerpts from Ed Leafe's message of 2018-06-04 13:41:36 -0500:
> On Jun 4, 2018, at 7:05 AM, Jay S Bryant  wrote:
> 
> >> Do we have that problem? I honestly don't know how much pressure other
> >> folks are feeling. My impression is that we've mostly become good at
> >> finding the necessary compromises, but my experience doesn't cover all
> >> of our teams.
> > In my experience this hasn't been a problem for quite some time.  In the 
> > past, at least for Cinder, there were some minor cases of this but as 
> > projects have matured this has been less of an issue.
> 
> Those rules were added because we wanted to avoid the appearance of one 
> company implementing features that would only be beneficial to it. This arose 
> from concerns in the early days when Rackspace was the dominant contributor: 
> many of the other companies involved in OpenStack were worried that they 
> would be investing their workers in a project that would only benefit 
> Rackspace. As far as I know, there were never specific cases where Rackspace 
> or any other company tried to push features in that no one else supported.
> 
> So even if now it doesn't seem that there is a problem, and we could remove 
> these restrictions without ill effect, it just seems prudent to keep them. If 
> a project is so small that the majority of its contributors/cores are from 
> one company, maybe it should be an internal project for that company, and not 
> a community project.
> 
> -- Ed Leafe

Where was the rule added, though? I am aware of some individual teams
with the rule, but AFAIK it was never a global rule. It's certainly not
in any of the projects for which I am currently a core reviewer.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ...

2018-06-04 Thread Peter Penchev
On Sat, Jun 02, 2018 at 06:13:23PM -0500, Jay S Bryant wrote:
> All,
> 
> This note is to make everyone aware that I have submitted patches for a
> number of drivers that have not run 3rd party CI in 60 or more days.  The
> following is a list of the drivers, how long since their CI last ran and
> links to the patches which mark them as unsupported drivers:
> 
>  * DataCore CI – 99 Days - https://review.openstack.org/571533
>  * Dell EMC CorpHD CI – 121 Days - https://review.openstack.org/571555
>  * HGST Solutions CI – 306 Days - https://review.openstack.org/571560
>  * IBM GPFS CI – 212 Days - https://review.openstack.org/571590
>  * Itri Disco CI – 110 Days - https://review.openstack.org/571592
>  * Nimble Storage CI – 78 Days - https://review.openstack.org/571599
>  * StorPool – Unknown - https://review.openstack.org/571935
>  * Vedams –HPMSA – 442 Days - https://review.openstack.org/571940
>  * Brocade OpenStack – CI – 261 Days - https://review.openstack.org/571943
> 
> All of these drivers will be marked unsupported for the Rocky release and
> will be removed in the Stein release if the 3rd Party CI is not returned to
> a working state.
> 
> If your driver is on the list and you have questions please respond to this
> thread and we can discuss what needs to be done to return support for your
> driver.
> 
> Thank you for your attention to this matter!

Hi,

Thanks for taking care of Cinder by culling the herd.

The StorPool CI is, understandably, on your list, since we had some
problems with the CI infrastructure (not the driver itself) during
the month of May.  However, we fixed them on May 31st and our CI had
several successful runs then, quickly followed by a slew of failures
because of the "pip version 1" strip()/split() problem in devstack.

Our CI has been chugging along since June 2nd (not really related to
the timing of your e-mail, it just so happened that we fixed another
small problem there).  You can see the logs at

  http://logs.ci-openstack.storpool.com/

I am thinking of scheduling a rerun for all the changes that our CI
failed on (and that have not been merged or abandoned yet); this may
happen in the next couple of days.

So, thanks again for your work on this, and hopefully this message will
serve as a "we're still alive" sign.  If there is anything more we
should do, like reschedule the failed builds, please let us know.

Best regards,
Peter

-- 
Peter Pentchev  roam@{ringlet.net,debian.org,FreeBSD.org} p...@storpool.com
PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][daisycloud][freezer][fuel][solum][tatu][trove] pycrypto is dead and insecure, you should migrate part 2

2018-06-04 Thread Matthew Thode
On 18-05-13 12:22:06, Matthew Thode wrote:
> This is a reminder to the projects called out that they are using old,
> unmaintained and probably insecure libraries (it's been dead since
> 2014).  Please migrate off to use the cryptography library.  We'd like
> to drop pycrypto from requirements for rocky.
> 
> See also, the bug, which has most of you cc'd already.
> 
> https://bugs.launchpad.net/openstack-requirements/+bug/1749574
> 

+-----------------+-----------------------------------------------------------------+------+--------------------------------+
| Repository      | Filename                                                        | Line | Text                           |
+-----------------+-----------------------------------------------------------------+------+--------------------------------+
| daisycloud-core | code/daisy/requirements.txt                                     |   17 | pycrypto>=2.6 # Public Domain  |
| freezer         | requirements.txt                                                |   21 | pycrypto>=2.6 # Public Domain  |
| fuel-dev-tools  | contrib/fuel-setup/requirements.txt                             |    5 | pycrypto==2.6.1                |
| fuel-web        | nailgun/requirements.txt                                        |   24 | pycrypto>=2.6.1                |
| solum           | requirements.txt                                                |   24 | pycrypto # Public Domain       |
| tatu            | requirements.txt                                                |    7 | pycrypto>=2.6.1                |
| tatu            | test-requirements.txt                                           |    7 | pycrypto>=2.6.1                |
| trove           | integration/scripts/files/requirements/fedora-requirements.txt |   30 | pycrypto>=2.6  # Public Domain |
| trove           | integration/scripts/files/requirements/ubuntu-requirements.txt |   29 | pycrypto>=2.6  # Public Domain |
| trove           | requirements.txt                                                |   47 | pycrypto>=2.6 # Public Domain  |
+-----------------+-----------------------------------------------------------------+------+--------------------------------+

In order by name, notes follow.

daisycloud-core - looks like AES / random functions are used
freezer - looks like AES / random functions are used
solum   - looks like AES / RSA functions are used
trove   - has a review!!! https://review.openstack.org/#/c/560292/
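
For the repos above that only need AES plus random bytes, the switch is
usually mechanical. A rough sketch of the cryptography-library equivalent
(illustrative only, not lifted from any of those repos):

    import os

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # os.urandom replaces Crypto.Random.get_random_bytes for key/IV material.
    key = os.urandom(32)
    iv = os.urandom(16)

    cipher = Cipher(algorithms.AES(key), modes.CFB(iv), backend=default_backend())
    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(b"some secret payload") + encryptor.finalize()

    decryptor = cipher.decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == b"some secret payload"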

The following projects are not tracked so we won't wait on them.
fuel-dev-tools, fuel-web, tatu

so it looks like progress is being made, so we have that going for us,
which is nice.  What can I do to help move this forward?

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Ivan Kolodyazhny
Hi Sean,

These goals look reasonable for me.

On Mon, Jun 4, 2018 at 9:07 PM, Sean McGinnis  wrote:

> Hi everyone,
>
> This is to continue the discussion of the goal selection for the Stein
> release.
> I had previously sent out a recap of our discussion at the Forum here:
>
> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130999.html
>
> Now we need to actually narrow things down and pick a couple goals.
>
> Strawman
> 
>
> Just to set a starting point for debate, I propose the following two as
> goals
> for Stein:
>
> - Cold Upgrade Support
> - Python 3 first
>
> As a reminder of other ideas, here is the link to the backlog of goal ideas
> we've kept so far:
>
> https://etherpad.openstack.org/p/community-goals
>
> Feel free to add more to that list, and if you have been involved in any
> of the
> things that have been completed there, please remove things you don't think
> should still be there.
>
> This is by no means an exhaustive list of what we could or should do for
> goals.
>
> With that, I'll go over the choices that I've proposed for the strawman.
>
> Python 3 First
> ==
>
> One of the things brought up in the session was picking things that bring
> excitement and are obvious benefits to deployers and users of OpenStack
> services. While this one is maybe not as immediately obvious, I think this
> is something that will end up helping deployers and also falls into the
> tech
> debt reduction category that will help us move quicker long term.
>
> Python 2 is going away soon, so I think we need something to help compel
> folks
> to work on making sure we are ready to transition. This will also be a good
> point to help switch the mindset over to Python 3 being the default used
> everywhere, with our Python 2 compatibility being just to continue legacy
> support.
>
I hope we'll have Ubuntu 18.04 LTS on our gates for this activity soon. It
becomes
important not only for developers but for operators and vendors too.



> Cold Upgrade Support
> 
>
> The other suggestion in the Forum session related to upgrades was the
> addition
> of "upgrade check" CLIs for each project, and I was tempted to suggest
> that as
> my second strawman choice. For some projects that would be a very minimal
> or
> NOOP check, so it would probably be easy to complete the goal. But
> ultimately
> what I think would bring the most value would be the work on supporting
> cold
> upgrade, even if it will be more of a stretch for some projects to
> accomplish.
>
> Upgrades have been a major focus of discussion lately, especially as our
> operators have been trying to get closer to the latest work upstream. This
> has
> been an ongoing challenge.
>
> A big +1 from my side on it.

There has also been a lot of talk about LTS releases. We've landed on fast
> forward upgrade to get between several releases, but I think improving
> upgrades
> eases the way both for easier and more frequent upgrades and also getting
> to
> the point some day where maybe we can think about upgrading over several
> releases to be able to do something like an LTS to LTS upgrade.
>
> Neither one of these upgrade goals really has a clearly defined plan that
> projects can pick up now and start working on, but I think with those
> involved
> in these areas we should be able to come up with a prescriptive plan for
> projects to follow.
>
> And it would really move our fast forward upgrade story forward.
>
> Next Steps
> ==
>
> I'm hoping with a strawman proposal we have a basis for debating the
> merits of
> these and getting closer to being able to officially select Stein goals. We
> still have some time, but I would like to avoid making late-cycle
> selections so
> teams can start planning ahead for what will need to be done in Stein.
>
> Please feel free to promote other ideas for goals. That would be a good
> way for
> us to weigh the pro's and con's between these and whatever else you have in
> mind. Then hopefully we can come to some consensus and work towards clearly
> defining what needs to be done and getting things well documented for
> teams to
> pick up as soon as they wrap up Rocky (or sooner).
>
> ---
> Sean (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Ed Leafe
On Jun 4, 2018, at 7:05 AM, Jay S Bryant  wrote:

>> Do we have that problem? I honestly don't know how much pressure other
>> folks are feeling. My impression is that we've mostly become good at
>> finding the necessary compromises, but my experience doesn't cover all
>> of our teams.
> In my experience this hasn't been a problem for quite some time.  In the 
> past, at least for Cinder, there were some minor cases of this but as 
> projects have matured this has been less of an issue.

Those rules were added because we wanted to avoid the appearance of one company 
implementing features that would only be beneficial to it. This arose from 
concerns in the early days when Rackspace was the dominant contributor: many of 
the other companies involved in OpenStack were worried that they would be 
investing their workers in a project that would only benefit Rackspace. As far 
as I know, there were never specific cases where Rackspace or any other company 
tried to push features in that no one else supported.

So even if now it doesn't seem that there is a problem, and we could remove 
these restrictions without ill effect, it just seems prudent to keep them. If a 
project is so small that the majority of its contributors/cores are from one 
company, maybe it should be an internal project for that company, and not a 
community project.

-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-04 Thread Eric Fried
Sundar-

We've been discussing the upgrade path on another thread [1] and are
working toward a solution [2][3] that would not require downtime or
special scripts (other than whatever's normally required for an upgrade).

We still hope to have all of that ready for Rocky, but if you're
concerned about timing, this work should make it a viable option for you
to start out modeling everything in the compute RP as you say, and then
move it over later.

Thanks,
Eric

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131045.html
[3] https://etherpad.openstack.org/p/placement-migrate-operations

On 06/04/2018 12:49 PM, Nadathur, Sundar wrote:
> Hi,
>  Cyborg needs to create RCs and traits for accelerators. The
> original plan was to do that with nested RPs. To avoid rushing the Nova
> developers, I had proposed that Cyborg could start by applying the
> traits to the compute node RP, and accept the resulting caveats for
> Rocky, till we get nested RP support. That proposal did not find many
> takers, and Cyborg has essentially been in waiting mode.
> 
> Since it is June already, and there is a risk of not delivering anything
> meaningful in Rocky, I am reviving my older proposal, which is
> summarized as below:
> 
>   * Cyborg shall create the RCs and traits as per spec
> (https://review.openstack.org/#/c/554717/), both in Rocky and
> beyond. Only the RPs will change post-Rocky.
>   * In Rocky:
>   o Cyborg will not create nested RPs. It shall apply the device
> traits to the compute node RP.
>   o Cyborg will document the resulting caveat, i.e., all devices in
> the same host should have the same traits. In particular, we
> cannot have a GPU and a FPGA, or 2 FPGAs of different types, in
> the same host.
>   o Cyborg will document that upgrades to post-Rocky releases will
> require operator intervention (as described below).
>   *  For upgrade to post-Rocky world with nested RPs:
>   o The operator needs to stop all running instances that use an
> accelerator.
>   o The operator needs to run a script that removes the Cyborg
> traits and the inventory for Cyborg RCs from compute node RPs.
>   o The operator can then perform the upgrade. The new Cyborg
> agent/driver(s) shall create nested RPs and publish
> inventory/traits as specified.
> 
> IMHO, it is acceptable for Cyborg to do this because it is new and we
> can set expectations for the (lack of) upgrade plan. The alternative is
> that potentially no meaningful use cases get addressed in Rocky for Cyborg.
> 
> Please LMK what you think.
> 
> Regards,
> Sundar
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC] Stein Goal Selection

2018-06-04 Thread Sean McGinnis
Hi everyone,

This is to continue the discussion of the goal selection for the Stein release.
I had previously sent out a recap of our discussion at the Forum here:

http://lists.openstack.org/pipermail/openstack-dev/2018-May/130999.html

Now we need to actually narrow things down and pick a couple goals.

Strawman


Just to set a starting point for debate, I propose the following two as goals
for Stein:

- Cold Upgrade Support
- Python 3 first

As a reminder of other ideas, here is the link to the backlog of goal ideas
we've kept so far:

https://etherpad.openstack.org/p/community-goals

Feel free to add more to that list, and if you have been involved in any of the
things that have been completed there, please remove things you don't think
should still be there.

This is by no means an exhaustive list of what we could or should do for goals.

With that, I'll go over the choices that I've proposed for the strawman.

Python 3 First
==

One of the things brought up in the session was picking things that bring
excitement and are obvious benefits to deployers and users of OpenStack
services. While this one is maybe not as immediately obvious, I think this
is something that will end up helping deployers and also falls into the tech
debt reduction category that will help us move quicker long term.

Python 2 is going away soon, so I think we need something to help compel folks
to work on making sure we are ready to transition. This will also be a good
point to help switch the mindset over to Python 3 being the default used
everywhere, with our Python 2 compatibility being just to continue legacy
support.

Cold Upgrade Support


The other suggestion in the Forum session related to upgrades was the addition
of "upgrade check" CLIs for each project, and I was tempted to suggest that as
my second strawman choice. For some projects that would be a very minimal or
NOOP check, so it would probably be easy to complete the goal. But ultimately
what I think would bring the most value would be the work on supporting cold
upgrade, even if it will be more of a stretch for some projects to accomplish.

Upgrades have been a major focus of discussion lately, especially as our
operators have been trying to get closer to the latest work upstream. This has
been an ongoing challenge.

There has also been a lot of talk about LTS releases. We've landed on fast
forward upgrade to get between several releases, but I think improving upgrades
eases the way both for easier and more frequent upgrades and also getting to
the point some day where maybe we can think about upgrading over several
releases to be able to do something like an LTS to LTS upgrade.

Neither one of these upgrade goals really has a clearly defined plan that
projects can pick up now and start working on, but I think with those involved
in these areas we should be able to come up with a prescriptive plan for
projects to follow.

And it would really move our fast forward upgrade story forward.

Next Steps
==

I'm hoping with a strawman proposal we have a basis for debating the merits of
these and getting closer to being able to officially select Stein goals. We
still have some time, but I would like to avoid making late-cycle selections so
teams can start planning ahead for what will need to be done in Stein.

Please feel free to promote other ideas for goals. That would be a good way for
us to weigh the pro's and con's between these and whatever else you have in
mind. Then hopefully we can come to some consensus and work towards clearly
defining what needs to be done and getting things well documented for teams to
pick up as soon as they wrap up Rocky (or sooner).

---
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Technical Committee Update, 4 June

2018-06-04 Thread Davanum Srinivas
On Mon, Jun 4, 2018 at 1:16 PM, Jeremy Stanley  wrote:
> On 2018-06-04 11:30:17 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> Jeremy has revived the thread about adding a secret/key store to
>> our base services via the mailing list. We discussed the topic
>> extensively in the most recent TC office hour, as well. I think we
>> are close to agreeing that although saying a "castellan supported"
>> database is insufficient for all desirable use cases, it is sufficient
>> for a useful number of use cases and would be a reasonable first
>> step. Jeremy, please correct me if I have misremembered that outcome.
>>
>> * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html
>> * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html
> [...]
>
> Basically correct (I think we said "a Castellan-compatible key
> store"). I intend to have a change up for review to append this to
> the base services list in the next day or so as the mailing list
> discussion didn't highlight any new concerns and indicated that the
> previous blockers we identified have been resolved in the
> intervening year.

+100 Jeremy!

> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-04 Thread Nadathur, Sundar

Hi,
 Cyborg needs to create RCs and traits for accelerators. The 
original plan was to do that with nested RPs. To avoid rushing the Nova 
developers, I had proposed that Cyborg could start by applying the 
traits to the compute node RP, and accept the resulting caveats for 
Rocky, till we get nested RP support. That proposal did not find many 
takers, and Cyborg has essentially been in waiting mode.


Since it is June already, and there is a risk of not delivering anything 
meaningful in Rocky, I am reviving my older proposal, which is 
summarized as below:


 * Cyborg shall create the RCs and traits as per spec
   (https://review.openstack.org/#/c/554717/), both in Rocky and
   beyond. Only the RPs will change post-Rocky.
 * In Rocky:
     o Cyborg will not create nested RPs. It shall apply the device
       traits to the compute node RP.
     o Cyborg will document the resulting caveat, i.e., all devices in
       the same host should have the same traits. In particular, we
       cannot have a GPU and an FPGA, or 2 FPGAs of different types, in
       the same host.
     o Cyborg will document that upgrades to post-Rocky releases will
       require operator intervention (as described below).
 * For the upgrade to a post-Rocky world with nested RPs:
     o The operator needs to stop all running instances that use an
       accelerator.
     o The operator needs to run a script that removes the Cyborg
       traits and the inventory for Cyborg RCs from compute node RPs
       (a rough sketch of such a script follows below).
     o The operator can then perform the upgrade. The new Cyborg
       agent/driver(s) shall create nested RPs and publish
       inventory/traits as specified.
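
For illustration only, here is a rough sketch of what that cleanup script
could look like, assuming keystoneauth1 for auth and the placement REST API
for the trait manipulation. The credentials, endpoint, trait prefix and RP
UUIDs are placeholders, not the actual Cyborg implementation:

    from keystoneauth1 import adapter, loading, session

    def placement_client():
        # Placeholder credentials; a real script would read these from the
        # environment or a clouds.yaml entry.
        auth = loading.get_plugin_loader('password').load_from_options(
            auth_url='http://controller:5000/v3',
            username='admin', password='secret', project_name='admin',
            user_domain_name='Default', project_domain_name='Default')
        return adapter.Adapter(session=session.Session(auth=auth),
                               service_type='placement')

    def remove_device_traits(client, rp_uuid, prefix='CUSTOM_'):
        # Placement exposes traits on resource providers from microversion 1.6.
        headers = {'OpenStack-API-Version': 'placement 1.17'}
        url = '/resource_providers/%s/traits' % rp_uuid
        current = client.get(url, headers=headers).json()
        keep = [t for t in current['traits'] if not t.startswith(prefix)]
        # The generation guards against concurrent updates to the same RP.
        client.put(url, headers=headers,
                   json={'traits': keep,
                         'resource_provider_generation':
                             current['resource_provider_generation']})

    if __name__ == '__main__':
        client = placement_client()
        # Compute node RP UUIDs would come from the placement API or
        # "openstack resource provider list".
        for rp_uuid in ['<compute-node-rp-uuid>']:
            remove_device_traits(client, rp_uuid)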

IMHO, it is acceptable for Cyborg to do this because it is new and we 
can set expectations for the (lack of) upgrade plan. The alternative is 
that potentially no meaningful use cases get addressed in Rocky for Cyborg.


Please LMK what you think.

Regards,
Sundar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [tripleo] multi-region Keystone via stretched Galera cluster

2018-06-04 Thread Michael Bayer
Hey list -

as mentioned in the May 11 Keystone meeting, I am working on one of
the canonical approaches to producing a multi-region Keystone service,
which is by having overcloud-specific keystone services interact with
a Galera database that is running masters across multiple overclouds.
  The work here is to be integrated into tripleo and at [1] we discuss
the production of a multi-region Keystone service, deployable across
multiple tripleo overclouds.

As the spec is being discussed I continue to work on the proof of
concept [2] which in its master branch is being developed to deploy
the second galera DB as well as haproxy/vip/keystone completely from
tripleo heat templates; the changes being patched here are to be
proposed as changes to tripleo itself once this version of the POC is
working.

The "standard_tripleo_version" branch is the previous iteration, which
provides a fully working proof of concept that adds a second Galera
instance to a pair of already deployed overclouds.

Comments on the review welcome.

[1] https://review.openstack.org/#/c/566448/

[2] https://github.com/zzzeek/stretch_cluster

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Technical Committee Update, 4 June

2018-06-04 Thread Jeremy Stanley
On 2018-06-04 11:30:17 -0400 (-0400), Doug Hellmann wrote:
[...]
> Jeremy has revived the thread about adding a secret/key store to
> our base services via the mailing list. We discussed the topic
> extensively in the most recent TC office hour, as well. I think we
> are close to agreeing that although saying a "castellan supported"
> database is insufficient for all desirable use cases, it is sufficient
> for a useful number of use cases and would be a reasonable first
> step. Jeremy, please correct me if I have misremembered that outcome.
> 
> * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html
> * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html
[...]

Basically correct (I think we said "a Castellan-compatible key
store"). I intend to have a change up for review to append this to
the base services list in the next day or so as the mailing list
discussion didn't highlight any new concerns and indicated that the
previous blockers we identified have been resolved in the
intervening year.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] how shall we track status updates?

2018-06-04 Thread Sean McGinnis

On 06/04/2018 10:32 AM, Doug Hellmann wrote:

During the retrospective at the forum we talked about having each group
working on an initiative send regular status updates. I would like to
start doing that this week, and would like to talk about logistics

Should we send emails directly to this list, or the TC list?

How often should we post updates?

Doug


Sending to openstack-dev would have the nice benefit of raising awareness,
but at the risk of adding to the noise level and being ignored. I'm not sure
which would be best, but either seems acceptable to me.

Monthly or every other week seems like enough. I don't think anything is
urgent or critical enough for weekly updates.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Deprecation of nova.image.download.modules extension point

2018-06-04 Thread Matt Riedemann

+openstack-operators to see if others have the same use case

On 5/31/2018 5:14 PM, Moore, Curt wrote:
We recently upgraded from Liberty to Pike and looking ahead to the code 
in Queens, noticed the image download deprecation notice with 
instructions to post here if this interface was in use.  As such, I’d 
like to explain our use case and see if there is a better way of 
accomplishing our goal or lobby for the "un-deprecation" of this 
extension point.


Thanks for speaking up - this is much easier *before* code is removed.



As with many installations, we are using Ceph for both our Glance image 
store and VM instance disks.  In a normal workflow when both Glance and 
libvirt are configured to use Ceph, libvirt reacts to the direct_url 
field on the Glance image and performs an in-place clone of the RAW disk 
image from the images pool into the vms pool all within Ceph.  The 
snapshot creation process is very fast and is thinly provisioned as it’s 
a COW snapshot.


This underlying workflow itself works great, the issue is with 
performance of the VM’s disk within Ceph, especially as the number of 
nodes within the cluster grows.  We have found, especially with Windows 
VMs (largely as a result of I/O for the Windows pagefile), that the 
performance of the Ceph cluster as a whole takes a very large hit in 
keeping up with all of this I/O thrashing, especially when Windows is 
booting.  This is not the case with Linux VMs as they do not use swap as 
frequently as do Windows nodes with their pagefiles.  Windows can be run 
without a pagefile but that leads to other oddities within Windows.


I should also mention that in our case, the nodes themselves are 
ephemeral and we do not care about live migration, etc., we just want 
raw performance.


As an aside on our Ceph setup without getting into too many details, we 
have very fast SSD based Ceph nodes for this pool (separate crush root, 
SSDs for both OSD and journals, 2 replicas), interconnected on the same 
switch backplane, each with bonded 10GB uplinks to the switch.  Our Nova 
nodes are within the same datacenter (also have bonded 10GB uplinks to 
their switches) but are distributed across different switches.  We could 
move the Nova nodes to the same switch as the Ceph nodes but that is a 
larger logistical challenge to rearrange many servers to make space.


Back to our use case, in order to isolate this heavy I/O, a subset of 
our compute nodes have a local SSD and are set to use qcow2 images 
instead of rbd so that libvirt will pull the image down from Glance into 
the node’s local image cache and run the VM from the local SSD.  This 
allows Windows VMs to boot and perform their initial cloudbase-init 
setup/reboot within ~20 sec vs 4-5 min, regardless of overall Ceph 
cluster load.  Additionally, this prevents us from "wasting" IOPS and 
instead keep them local to the Nova node, reclaiming the network 
bandwidth and Ceph IOPS for use by Cinder volumes.  This is essentially 
the use case outlined here in the "Do designate some non-Ceph compute 
hosts with low-latency local storage" section:


https://ceph.com/planet/the-dos-and-donts-for-ceph-for-openstack/

The challenge is that the Glance image transfer is 
_glacially slow_ when using the Glance HTTP API (~30 min for a 50GB 
Windows image (It’s Windows, it’s huge with all of the necessary tools 
installed)).  If libvirt can instead perform an RBD export on the image 
using the image download functionality, it is able to download the same 
image in ~30 sec.  We have code that is performing the direct download 
from Glance over RBD and it works great in our use case which is very 
similar to the code in this older patch:


https://review.openstack.org/#/c/44321/
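
For reference, a rough, illustrative sketch of that kind of node-local RBD
read, assuming the python rados/rbd bindings; the pool, snapshot name and
image id below are placeholders and not taken from the actual patch:

    import rados
    import rbd

    def fetch_image_via_rbd(image_id, dest_path, pool='images', snap='snap'):
        # Read a Glance-backed RBD image straight from the cluster to a local
        # file, chunk by chunk, instead of going through the Glance HTTP API.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                with rbd.Image(ioctx, image_id, snapshot=snap,
                               read_only=True) as img, \
                        open(dest_path, 'wb') as out:
                    size = img.size()
                    chunk = 8 * 1024 * 1024
                    offset = 0
                    while offset < size:
                        data = img.read(offset, min(chunk, size - offset))
                        out.write(data)
                        offset += len(data)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()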


It looks like at the time this had general approval (i.e. it wasn't 
considered crazy) but was blocked simply due to the Havana feature 
freeze. That's good to know.




We could look at attaching an additional ephemeral disk to the instance 
and have cloudbase-init use it as the pagefile but it appears that if 
libvirt is using rbd for its images_type, _all_ disks must then come 
from Ceph, there is no way at present to allow the VM image to run from 
Ceph and have an ephemeral disk mapped in from node-local storage.  Even 
still, this would have the effect of "wasting" Ceph IOPS for the VM disk 
itself which could be better used for other purposes.


When you mentioned the swap above I was thinking similar to this, 
attaching a swap device but as you've pointed out, all disks local to 
the compute host are going to use the same image type backend, so you 
can't have the root disk and swap/ephemeral disks using different image 
backends.




Based on what I have explained about our use case, is there a 
better/different way to accomplish the same goal without using the 
deprecated image download functionality?  If not, can we work to 
"un-deprecate" the download extension point? Should I work to get the 
code for this 

[openstack-dev] [tc] how shall we track status updates?

2018-06-04 Thread Doug Hellmann
During the retrospective at the forum we talked about having each group
working on an initiative send regular status updates. I would like to
start doing that this week, and would like to talk about logistics

Should we send emails directly to this list, or the TC list?

How often should we post updates?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Update, 4 June

2018-06-04 Thread Doug Hellmann
This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

* Import ansible-role-tripleo-modify-image https://review.openstack.org/568727
* retire tripleo-incubator https://review.openstack.org/#/c/565843/
* charms: add Glance Simplestreams Sync charm 
https://review.openstack.org/566958
* PowerVMStackers following stable policy https://review.openstack.org/562591

Other approved changes:

* Include a rationale for tracking base services 
https://review.openstack.org/#/c/568941/
* Note that the old incubation/graduation process is obsolete 
https://review.openstack.org/#/c/569164/
* Provide more detail about the expecations we place on goal champions 
https://review.openstack.org/#/c/564060/

Office hour logs from this week:

* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-30-01.00.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html

The board agreed that we should resolve the long-standing issue
with the section of the bylaws that describes the TC electorate.
I will provide more detail about that in the summary from the
joint leadership session, which I intend to send separately in
the next day or two.

== Ongoing Discussions ==

Zane summarized the Forum discussion about the Adjutant team's
application as a comment on the review. The work to update the
scope/mission statement for the project is ongoing.

* https://review.openstack.org/#/c/553643/

We discussed the Python 2 deprecation timeline at the Forum. I have
prepared a resolution describing the outcome, and discussion is
continuing on the review. Based on the recent feedback, I need to
update the resolution to add an explicit deadline for supporting
Python 3. Graham also needs to update the PTI documentation change
to differentiate between old and new projects.

* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130824.html
* https://review.openstack.org/571011 python 2 deprecation timeline
* https://review.openstack.org/#/c/561922/ PTI documentation change

There are two separate discussions about project team affiliation
diversity. Zane's proposal to update the new project requirements
has some discussion in gerrit, and Mohammed's thread on the mailing
list about changing the way we apply the current diversity tag has
a couple of competing proposals under consideration. TC members,
please review both and provide your input.

* https://review.openstack.org/#/c/567944/
* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130776.html
  and
  http://lists.openstack.org/pipermail/openstack-dev/2018-June/131029.html

Jeremy has revived the thread about adding a secret/key store to
our base services via the mailing list. We discussed the topic
extensively in the most recent TC office hour, as well. I think we
are close to agreeing that although saying a "castellan supported"
database is insufficient for all desirable use cases, it is sufficient
for a useful number of use cases and would be a reasonable first
step. Jeremy, please correct me if I have misremembered that outcome.

* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html

The operators who have agreed to take over managing the content in
the Operations Manual have decided to move the content back from
the wiki into gerrit. They plan to establish a SIG to "own" the
repository to ensure the content can be published to docs.openstack.org
again.

* http://lists.openstack.org/pipermail/openstack-operators/2018-May/015318.html

Mohammed and Emilien are working with the StarlingX team to import
their repositories following the plan we discussed at the Forum.

* http://lists.openstack.org/pipermail/openstack-operators/2018-May/015318.html
* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130913.html

Zane has started a discussion about the terms of service for hosted
projects. James Blair started a separate thread to discuss the
future of the infrastructure team as it starts to support multiple
foundation project areas. TC members, these are both important
threads, so please check in and provide your feedback.

* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130807.html
* http://lists.openstack.org/pipermail/openstack-dev/2018-May/130896.html

Based on feedback from the joint leadership meeting at the summit,
Dims has started working on a template for describing roles we need
filled in various areas. The next step is to convert some of the
existing requests for help into the new format and get more feedback
about the content.

* https://etherpad.openstack.org/p/job-description-template

== TC member actions/focus/discussions for the 

Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Zane Bitter

On 04/06/18 10:19, Amy Marrich wrote:

Zane,

I'll read in more detail, but do we want to add rollcall-vote?


Is it used anywhere other than in the governance repo? We certainly 
could add it, but it didn't seem like a top priority.


- ZB


Amy (spotz)

On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter wrote:


On 31/05/18 14:35, Julia Kreger wrote:

Back to the topic of nitpicking!

I virtually sat down with Doug today and we hammered out the
positive
aspects that we feel like are the things that we as a community want
to see as part of reviews coming out of this effort. The principles
change[1] in governance has been updated as a result.

I think we are at a point where we have to state high level
principles, and then also update guidelines or other context
providing
documentation to re-enforce some of items covered in this
discussion... not just to educate new contributors, but to serve
as a
checkpoint for existing reviewers when making the decision as to how
to vote change set. The question then becomes where would such
guidelines or documentation best fit?


I think the contributor guide is the logical place for it. Kendall
pointed out this existing section:


https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes



It could go in there, or perhaps we separate out the parts about
when to use which review scores into a separate page from the
mechanics of how to use Gerrit.

Should we explicitly detail the
cause/effect that occurs? Should we convey contributor
perceptions, or
maybe even just link to this thread as there has been a massive
amount
of feedback raising valid cases, points, and frustrations.

Personally, I'd lean towards a blended approach, but the question of
where is one I'm unsure of. Thoughts?


Let's crowdsource a set of heuristics that reviewers and
contributors should keep in mind when they're reviewing or having
their changes reviewed. I made a start on collecting ideas from this
and past threads, as well as my own reviewing experience, into a
document that I've presumptuously titled "How to Review Changes the
OpenStack Way" (but might be more accurately called "The Frank
Sinatra Guide to Code Review" at the moment):

https://etherpad.openstack.org/p/review-the-openstack-way


It's in an etherpad to make it easier for everyone to add their
suggestions and comments (folks in #openstack-tc have made some
tweaks already). After a suitable interval has passed to collect
feedback, I'll turn this into a contributor guide change.

Have at it!

cheers,
Zane.


-Julia

[1]: https://review.openstack.org/#/c/570940/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-helm] OSH Storyboard Migration

2018-06-04 Thread MCEUEN, MATT
OpenStack-Helm team,

Heads-up:  we are targeting migration of OpenStack-Helm into Storyboard for 
this Friday, 6/8!  We've been discussing this for a while, and will sync on it 
in tomorrow's team meeting to ensure there are no surprises as we move.

Following this move, please use Storyboard instead of Launchpad for 
OpenStack-Helm.

Thanks,
Matt McEuen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Amy Marrich
Zane,

I'll read in more detail, but do we want to add rollcall-vote?

Amy (spotz)

On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter  wrote:

> On 31/05/18 14:35, Julia Kreger wrote:
>
>> Back to the topic of nitpicking!
>>
>> I virtually sat down with Doug today and we hammered out the positive
>> aspects that we feel like are the things that we as a community want
>> to see as part of reviews coming out of this effort. The principles
>> change[1] in governance has been updated as a result.
>>
>> I think we are at a point where we have to state high level
>> principles, and then also update guidelines or other context providing
>> documentation to re-enforce some of items covered in this
>> discussion... not just to educate new contributors, but to serve as a
>> checkpoint for existing reviewers when making the decision as to how
>> to vote change set. The question then becomes where would such
>> guidelines or documentation best fit?
>>
>
> I think the contributor guide is the logical place for it. Kendall pointed
> out this existing section:
>
> https://docs.openstack.org/contributors/code-and-documentati
> on/using-gerrit.html#reviewing-changes
>
> It could go in there, or perhaps we separate out the parts about when to
> use which review scores into a separate page from the mechanics of how to
> use Gerrit.
>
> Should we explicitly detail the
>> cause/effect that occurs? Should we convey contributor perceptions, or
>> maybe even just link to this thread as there has been a massive amount
>> of feedback raising valid cases, points, and frustrations.
>>
>> Personally, I'd lean towards a blended approach, but the question of
>> where is one I'm unsure of. Thoughts?
>>
>
> Let's crowdsource a set of heuristics that reviewers and contributors
> should keep in mind when they're reviewing or having their changes
> reviewed. I made a start on collecting ideas from this and past threads, as
> well as my own reviewing experience, into a document that I've
> presumptuously titled "How to Review Changes the OpenStack Way" (but might
> be more accurately called "The Frank Sinatra Guide to Code Review" at the
> moment):
>
> https://etherpad.openstack.org/p/review-the-openstack-way
>
> It's in an etherpad to make it easier for everyone to add their
> suggestions and comments (folks in #openstack-tc have made some tweaks
> already). After a suitable interval has passed to collect feedback, I'll
> turn this into a contributor guide change.
>
> Have at it!
>
> cheers,
> Zane.
>
>
> -Julia
>>
>> [1]: https://review.openstack.org/#/c/570940/
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-04 Thread Doug Hellmann
Excerpts from Omer Anson's message of 2018-06-04 16:53:54 +0300:
> Sure. No worries. The project is still active :)
> 
> I tagged and branched out Queens.

The build failed, see the other email thread for details.

Doug

> 
> Thanks,
> Omer.
> 
> On Mon, 4 Jun 2018 at 15:25, Sean McGinnis  wrote:
> 
> > On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote:
> > > Hi,
> > >
> > > If the issue is just the tagging, I'll tag the releases today/tomorrow. I
> > > figured that since Dragonflow has an independent release cycle, and we
> > have
> > > very little manpower, regular tagging makes less sense and would save us
> > a
> > > little time.
> > >
> > > Thanks,
> > > Omer
> > >
> >
> > Thanks Omer. I think part of the concern, and the thing that caught our
> > attention,
> > was that although the project is using the independent release model it
> > still
> > had stable branches created up until queens.
> >
> > So this was mostly a check to make sure nothing else has changed and that
> > the
> > project should still be considered "active".
> >
> > Since it has been quite a while since the last official release from the
> > project, it would be good if you proposed a release to make sure nothing
> > has
> > broken with the release process for DragonFlow and to make all of the code
> > changes since the last release available to potential consumers.
> >
> > Sean
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][release][dragonflow] Release of openstack/dragonflow failed

2018-06-04 Thread Doug Hellmann
Excerpts from zuul's message of 2018-06-04 14:01:19 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/3b/3b7ca98ce56d1e71efe95eaa10d0884487411307/release/release-openstack-python/c381399/
>  : FAILURE in 3m 46s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 

It looks like Dragonflow has some extra dependencies that are not
available under the current release job.

http://logs.openstack.org/3b/3b7ca98ce56d1e71efe95eaa10d0884487411307/release/release-openstack-python/c381399/job-output.txt.gz#_2018-06-04_14_00_35_073390

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-04 Thread Zane Bitter

On 31/05/18 14:35, Julia Kreger wrote:

Back to the topic of nitpicking!

I virtually sat down with Doug today and we hammered out the positive
aspects that we feel like are the things that we as a community want
to see as part of reviews coming out of this effort. The principles
change[1] in governance has been updated as a result.

I think we are at a point where we have to state high level
principles, and then also update guidelines or other context providing
documentation to re-enforce some of items covered in this
discussion... not just to educate new contributors, but to serve as a
checkpoint for existing reviewers when making the decision as to how
to vote change set. The question then becomes where would such
guidelines or documentation best fit?


I think the contributor guide is the logical place for it. Kendall 
pointed out this existing section:


https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes

It could go in there, or perhaps we separate out the parts about when to 
use which review scores into a separate page from the mechanics of how 
to use Gerrit.



Should we explicitly detail the
cause/effect that occurs? Should we convey contributor perceptions, or
maybe even just link to this thread as there has been a massive amount
of feedback raising valid cases, points, and frustrations.

Personally, I'd lean towards a blended approach, but the question of
where is one I'm unsure of. Thoughts?


Let's crowdsource a set of heuristics that reviewers and contributors 
should keep in mind when they're reviewing or having their changes 
reviewed. I made a start on collecting ideas from this and past threads, 
as well as my own reviewing experience, into a document that I've 
presumptuously titled "How to Review Changes the OpenStack Way" (but 
might be more accurately called "The Frank Sinatra Guide to Code Review" 
at the moment):


https://etherpad.openstack.org/p/review-the-openstack-way

It's in an etherpad to make it easier for everyone to add their 
suggestions and comments (folks in #openstack-tc have made some tweaks 
already). After a suitable interval has passed to collect feedback, I'll 
turn this into a contributor guide change.


Have at it!

cheers,
Zane.


-Julia

[1]: https://review.openstack.org/#/c/570940/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari] weekly meeting time changed

2018-06-04 Thread Sam P
Hi All,

Gentle reminder about the next Masakari meeting time.
From the next meeting (5th June), the meeting will start at 0300 UTC.
Please find more details at [1].

[1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting
--- Regards,
Sampath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-04 Thread Omer Anson
Sure. No worries. The project is still active :)

I tagged and branched out Queens.

Thanks,
Omer.

On Mon, 4 Jun 2018 at 15:25, Sean McGinnis  wrote:

> On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote:
> > Hi,
> >
> > If the issue is just the tagging, I'll tag the releases today/tomorrow. I
> > figured that since Dragonflow has an independent release cycle, and we
> have
> > very little manpower, regular tagging makes less sense and would save us
> a
> > little time.
> >
> > Thanks,
> > Omer
> >
>
> Thanks Omer. I think part of the concern, and the thing that caught our
> attention,
> was that although the project is using the independent release model it
> still
> had stable branches created up until queens.
>
> So this was mostly a check to make sure nothing else has changed and that
> the
> project should still be considered "active".
>
> Since it has been quite a while since the last official release from the
> project, it would be good if you proposed a release to make sure nothing
> has
> broken with the release process for DragonFlow and to make all of the code
> changes since the last release available to potential consumers.
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Bug deputy report May 28 - June 3

2018-06-04 Thread Boden Russell
Last week we had a total of 14 bugs come in [1]; 2 of which are RFEs.
Only 1 defect is high priority [2] and is already in progress.

There are still a few bugs under discussion/investigation:

- 1774257 "neutron-openvswitch-agent RuntimeError: Switch connection
timeout" could use some input from folks skilled with OVS and affects
multiple people.
- 1773551 "Error loading interface driver
'neutron.agent.linux.interface.BridgeInterfaceDriver'" is still waiting
for input from the submitter.
- 1773282 "errors occured when create vpnservice with flavor_id:Flavors
plugin not Found" still under investigation and could use an eye from
the VPNaaS team.



[1] https://bugs.launchpad.net/neutron/+bugs?orderby=-datecreated=0
[2] https://bugs.launchpad.net/neutron/+bug/1774006


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][EM] Summary of forum session(s) on extended maintenance

2018-06-04 Thread Thierry Carrez

Hi!

We had a double session on extended maintenance at the Forum in 
Vancouver, here is a late summary of it. Feel free to add to it if you 
remember extra things.


The first part of the session was to present the Extended Maintenance 
process as implemented after the discussion at the PTG in Dublin, and 
answer questions around it.


The process was generally well received, with questions on how to sign up 
(no real sign up required, just start helping and join 
#openstack-stable). There were also a number of questions around the 
need to maintain all releases up to an old maintained release, with 
explanation of the FFU process and the need to avoid regressions from 
release to release.


The second part of the session was taking a step back and discuss 
extended maintenance in the context of release cycles and upgrade pain. 
A summary of the Dublin discussion was given. Some questions were raised 
on the need for fast-forward upgrades (vs. skip-level upgrades), as well 
as a bit of a brainstorm around how to encourage people to gather around 
popular EM releases (a wiki page was considered a good trade-off).


The EM process mandates that no releases would be tagged after the end 
of the 18-month official "maintenance" period. There was a standing 
question on the need to still release libraries (since tests of HEAD 
changes are by default run against released versions of libraries). The 
consensus in the room was that when extended maintenance starts, we 
should switch to testing stable/$foo HEAD changes against stable/$foo 
HEAD of libraries. This should be first done when Ocata switches to 
extended maintenance in August.


The discussion then switched to how to further ease upgrade pain, with 
reports of progress on the Upgrades SIG on better documenting the Fast 
Forward Upgrade process. We discussed how minimal cold upgrade 
capabilities should be the baseline for being considered an 
official OpenStack component, and whether we could use the Goals 
mechanism to push it. We also discussed testing database migrations with 
real production data (what turbo-hipster did) and the challenges to 
share deidentified data to that purpose.


Cheers,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad

2018-06-04 Thread Jay S Bryant



On 6/1/2018 7:28 PM, Chris Dent wrote:

On Wed, 9 May 2018, Chris Dent wrote:


I've started an etherpad for the forum session in Vancouver devoted
to discussing the possibility of tracking and allocation resources
in Cinder using the Placement service. This is not a done deal.
Instead the session is to discuss if it could work and how to make
it happen if it seems like a good idea.

The etherpad is at

   https://etherpad.openstack.org/p/YVR-cinder-placement


The session went well. Some of the members of the cinder team who
might have had more questions had not been able to be at summit so
we were unable to get their input.

We clarified some of the things that cinder wants to be able to
accomplish (run multiple schedulers in active-active and avoid race
conditions) and the fact that this is what placement is built for.
We also made it clear that placement itself can be highly available
(and scalable) because of its nature as a dead-simple web app over a
database.
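
As a rough illustration of that claim model (not cinder code; the endpoint
path follows the placement REST API and all credentials, UUIDs and resource
classes below are placeholders), a claim is a single PUT that placement
validates atomically against provider inventory on the server side:

    from keystoneauth1 import adapter, loading, session

    def placement_client():
        # Placeholder credentials for illustration only.
        auth = loading.get_plugin_loader('password').load_from_options(
            auth_url='http://controller:5000/v3',
            username='cinder', password='secret', project_name='service',
            user_domain_name='Default', project_domain_name='Default')
        return adapter.Adapter(session=session.Session(auth=auth),
                               service_type='placement')

    def claim(client, consumer_uuid, rp_uuid, project_id, user_id, size_gb):
        # One PUT either records the whole allocation or fails; because
        # placement checks it against inventory and provider generations,
        # two active-active schedulers cannot both oversubscribe a backend.
        payload = {
            'allocations': {rp_uuid: {'resources': {'DISK_GB': size_gb}}},
            'project_id': project_id,
            'user_id': user_id,
        }
        resp = client.put('/allocations/%s' % consumer_uuid, json=payload,
                          headers={'OpenStack-API-Version': 'placement 1.12'},
                          raise_exc=False)
        return resp.status_code == 204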

The next steps are for the cinder team to talk amongst themselves
and socialize the capabilities of placement (with the help of
placement people) and see if it will be suitable. It is unlikely
there will be much visible progress in this area before Stein.

Chris,

Thanks for this update.  I have it on the agenda for the Cinder team to 
discuss this further.  We ran out of time in last week's meeting but 
will hopefully get some time to discuss it this week.  We will keep you 
updated as to how things progress on our end and pull in the placement 
guys as necessary.


Jay


See the etherpad for a bit more detail.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-04 Thread Sean McGinnis
On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote:
> Hi,
> 
> If the issue is just the tagging, I'll tag the releases today/tomorrow. I
> figured that since Dragonflow has an independent release cycle, and we have
> very little manpower, regular tagging makes less sense and would save us a
> little time.
> 
> Thanks,
> Omer
> 

Thanks Omer. I think part of the concern, and the thing that caught our attention,
was that although the project is using the independent release model it still
had stable branches created up until queens.

So this was mostly a check to make sure nothing else has changed and that the
project should still be considered "active".

Since it has been quite a while since the last official release from the
project, it would be good if you proposed a release to make sure nothing has
broken with the release process for DragonFlow and to make all of the code
changes since the last release available to potential consumers.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core)

2018-06-04 Thread Doug Hellmann

> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil  wrote:
> 
> Hi,
> 
> Can someone from the core team request infra to create a feature branch for 
> the Proof of Concept we agreed to do during the API SIG forum session [1] at 
> Vancouver?
> 
> Thanks,
> Gilles
> 
> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum

You can do this through the releases repo now. See the README for instructions. 

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Jay S Bryant



On 6/2/2018 2:08 PM, Doug Hellmann wrote:

Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +:

On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote:
[...]

It feels like we would be saying that we don't trust 2 core reviewers
from the same company to put the project's goals or priorities over
their employer's.  And that doesn't feel like an assumption I would
want us to encourage through a tag meant to show the health of the
project.

[...]

That's one way of putting it. On the other hand, if we ostensibly
have that sort of guideline (say, two core reviewers shouldn't be
the only ones to review a change submitted by someone else from
their same organization if the team is large and diverse enough to
support such a pattern) then it gives our reviewers a better
argument to push back on their management _if_ they're being
strongly urged to review/approve certain patches. At least then they
can say, "this really isn't going to fly because we have to get a
reviewer from another organization to agree it's in the best
interests of the project" rather than "fire me if you want but I'm
not approving that change, no matter how much your product launch is
going to be delayed."

Do we have that problem? I honestly don't know how much pressure other
folks are feeling. My impression is that we've mostly become good at
finding the necessary compromises, but my experience doesn't cover all
of our teams.
In my experience this hasn't been a problem for quite some time.  In the 
past, at least for Cinder, there were some minor cases of this but as 
projects have matured this has been less of an issue.

While I'd like to think a lot of us have the ability to push back on
those sorts of adverse influences directly, I have a feeling not
everyone can comfortably do so. On the other hand, it might also
just be easy enough to give one of your fellow reviewers in another
org a heads up that maybe they should take a look at that patch over
there and provide some quick feedback...

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core)

2018-06-04 Thread Gilles Dubreuil

Hi,

Can someone from the core team request infra to create a feature branch 
for the Proof of Concept we agreed to do during the API SIG forum session 
[1] at Vancouver?


Thanks,
Gilles

[1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] Nominating Doug Szumski as Monasca core

2018-06-04 Thread Bedyk, Witold
Hello Monasca team,

I would like to nominate Doug Szumski (dougsz) for Monasca core team.

He actively contributes to the project and works on adding Monasca to 
kolla-ansible. He has a good overview of the project, which he shares in his reviews.

Best greetings
Witek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Thierry Carrez

amrith.ku...@gmail.com wrote:

-Original Message-
From: Doug Hellmann 
Sent: Saturday, June 2, 2018 4:26 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [tc] Organizational diversity tag

Excerpts from amrith.kumar's message of 2018-06-02 15:06:27 -0400:

Every project on the one-way-trip to inactivity starts with what some
people will wishfully call a 'transient period' of reduced activity.
Once the transient nature is no longer the case (either it becomes
active or the transient becomes permanent) the normal process of
eviction can begin. As the guy who came up with the maintenance-mode
tag, so as to apply it to Trove, I believe that both the diversity tag
and the maintenance mode tag have a good reason to exist, and should
both be retained independent of each other.

The logic always was, and should remain, that diversity is a measure
of wide multi-organizational support for a project; not measured in
the total volume of commits but the fraction of commits. There was
much discussion about the knobs in the diversity tag measurement when
Flavio made the changes some years back. I'm sorry I didn't attend the
session in Vancouver but I'll try and tune in to a TC office hours
session and maybe get a rundown of what precipitated this decision to

move away from the diversity tag.

We're talking about how to improve reporting on diversity, not stop doing it.


Why not just automate the thing that we have right now and have something kick 
off a review automatically if the diversity in a team changes (per the current formula)?


That is what we did: get the thing we have right now to propose changes. 
But we always had a quick human pass to check that what the script 
proposed corresponded to a reality. Lately (with lower activity in a 
number of teams), more and more automatically-proposed changes did not 
match a reality anymore, to the point where a majority of the proposed 
changes need to be dropped.


Example: a low-activity single-vendor project team suddenly loses the 
tag because one person pushes a patch to fix zuul jobs and another 
pushes a doc build fix.


Example 2: a team with 3 core reviewers flaps between diverse 
affiliation and single-vendor depending on who does the core reviewing 
on its 3 patches per month.


Hence the suggestion to either improve our metrics to better support 
low-activity teams, or switch to a more qualitative/prose report instead 
of quantitative/tags.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-04 Thread Thierry Carrez

Omer Anson wrote:
If the issue is just the tagging, I'll tag the releases today/tomorrow. 
I figured that since Dragonflow has an independent release cycle, and we 
have very little manpower, regular tagging makes less sense and would 
save us a little time.

Thanks Omer!

For tagging, I suggest you use a change proposed to the 
openstack/releases repository, so that we can test that the release will 
work. Don't hesitate to ping us on #openstack-releases or read the doc at:


http://git.openstack.org/cgit/openstack/releases/tree/README.rst

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [queens][ceilometer][gnocchi] no network resource, after installation openstack queens

2018-06-04 Thread KiYoun Sung
Hello.

I installed Openstack Queens version using Openstack-ansible.
and I set gnocchi, ceilometer for metering.

After installation,
I get metrics from instances, images, swift, etc.,
but there are no metrics for the network.

I checked with the gnocchi CLI, like this:
   $ gnocchi resource-type list
The network resource-type exists,
but
   "$ gnocchi resource list" is empty.
I created an external network and a floating IP,
but there is no network resource.

neutron.conf has the [oslo_messaging_notifications] section below:
[oslo_messaging_notifications]
notification_topics = notifications
driver = messagingv2
transport_url = rabbit://neutron:@172.29.238.44:5671//neutron

How can I get the network resources (especially floating IPs)?
What is the problem?

Thank you.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-04 Thread Ha Quang, Duong
Hi,

+1 from me, thanks for your work.

Duong

> -Original Message-
> From: Borne Mace [mailto:borne.m...@oracle.com]
> Sent: Friday, June 01, 2018 12:02 AM
> To: openstack-dev 
> Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli
> core reviewer
> 
> Greetings all,
> 
> I would like to propose the addition of Steve Noyes to the kolla-cli core
> reviewer team.  Consider this nomination as my personal +1.
> 
> Steve has a long history with the kolla-cli and should be considered its co-
> creator as probably half or more of the existing code was due to his efforts.
> He has now been working diligently since it was pushed upstream to improve
> the stability and testability of the cli and has the second most commits on 
> the
> project.
> 
> The kolla core team consists of 19 people, and the kolla-cli team of 2, for a
> total of 21.  Steve therefore requires a minimum of 11 votes (so just 10 more
> after my +1), with no veto -2 votes within a 7 day voting window to end on
> June 6th.  Voting will be closed immediately on a veto or in the case of a
> unanimous vote.
> 
> As I'm not sure how active all of the 19 kolla cores are, your attention and
> timely vote is much appreciated.
> 
> Thanks!
> 
> -- Borne
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containerized Undercloud by default

2018-06-04 Thread Emilien Macchi
On Thu, May 31, 2018 at 9:13 PM, Emilien Macchi  wrote:
>
> - all multinode scenarios - currently blocked by 1774297 as well but also
>> https://review.openstack.org/#/c/571566/
>>
>
This part is done and ready for review (CI team + others):
https://review.openstack.org/#/c/571529/

Thanks!
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-04 Thread zhubingbing


+1






At 2018-06-01 01:02:27, "Borne Mace"  wrote:
>Greetings all,
>
>I would like to propose the addition of Steve Noyes to the kolla-cli 
>core reviewer team.  Consider this nomination as my personal +1.
>
>Steve has a long history with the kolla-cli and should be considered its 
>co-creator as probably half or more of the existing code was due to his 
>efforts.  He has now been working diligently since it was pushed 
>upstream to improve the stability and testability of the cli and has the 
>second most commits on the project.
>
>The kolla core team consists of 19 people, and the kolla-cli team of 2, 
>for a total of 21.  Steve therefore requires a minimum of 11 votes (so 
>just 10 more after my +1), with no veto -2 votes within a 7 day voting 
>window to end on June 6th.  Voting will be closed immediately on a veto 
>or in the case of a unanimous vote.
>
>As I'm not sure how active all of the 19 kolla cores are, your attention 
>and timely vote is much appreciated.
>
>Thanks!
>
>-- Borne
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev