Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
Ok - yeah, I'm not sure what the history behind that is either...

I'm mainly curious if that's something we can/should keep or if we are
opposed to dropping 'os' and 'api' from the convention (e.g.
load-balancer:loadbalancer:post as opposed to
os_load-balancer_api:loadbalancer:post) and just sticking with the
service-type?
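
For a concrete comparison, here is a minimal policy-in-code sketch using
oslo.policy; the check string and API path below are illustrative examples,
not Octavia's actual values:

# Illustrative only: the same rule named with just the service type; the
# older "os_<service>_api" style name is shown in the comment.
from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='load-balancer:loadbalancer:post',
        # name='os_load-balancer_api:loadbalancer:post',  # older style
        check_str='rule:admin_or_owner',
        description='Create a load balancer.',
        operations=[{'path': '/v2/lbaas/loadbalancers', 'method': 'POST'}],
    ),
]


def list_rules():
    return rules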

On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson  wrote:

> I don't know for sure, but I assume it is short for "OpenStack" and was
> used to prefix OpenStack policies vs. third-party plugin policies for
> documentation purposes.
>
> I am guilty of borrowing this from existing code examples[0].
>
> [0]
> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>
> Michael
> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
> wrote:
> >
> >
> >
> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
> wrote:
> >>
> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> >> which maps to the "os-<service-type>-api:<resource>:<method>" format.
> >
> >
> > Thanks for explaining the justification, Michael.
> >
> > I'm curious if anyone has context on the "os-" part of the format? I've
> seen that pattern in a couple different projects. Does anyone know about
> its origin? Was it something we converted to our policy names because of
> API names/paths?
> >
> >>
> >>
> >> I selected it as it uses the service-type[1], references the API
> >> resource, and then the method. So it maps well to the API reference[2]
> >> for the service.
> >>
> >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> >> [1] https://service-types.openstack.org/
> >> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
> >>
> >> Michael
> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >> >
> >> > So +1
> >> >
> >> >
> >> >
> >> > Tim
> >> >
> >> >
> >> >
> >> > From: Lance Bragstad 
> >> > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> >> > Date: Wednesday, 12 September 2018 at 20:43
> >> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, OpenStack Operators <
> openstack-operat...@lists.openstack.org>
> >> > Subject: [openstack-dev] [all] Consistent policy names
> >> >
> >> >
> >> >
> >> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >> >
> >> >
> >> >
> >> > The idea is to look at what we do today and see what conventions we
> can come up with to move towards, which should also help us determine how
> much each convention is going to impact services (e.g. picking a convention
> that will cause 70% of services to rename policies).
> >> >
> >> >
> >> >
> >> > Please have a look and we can discuss conventions in this thread. If
> we come to agreement, I'll start working on some documentation in
> oslo.policy so that it's somewhat official before starting to rename
> policies.
> >> >
> >> >
> >> >
> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >> >


Re: [openstack-dev] [octavia] Optimize the query of the octavia database

2018-09-14 Thread Jeff Yang
OK, thank you very much for your work.

Adam Harwell  wrote on Sat, Sep 15, 2018 at 8:26 AM:

> It's high priority for me as well, so we should be able to get something
> done very soon, I think. Look for something early next week maybe?
>
> Thanks,
> --Adam
>
> On Thu, Sep 13, 2018, 21:18 Jeff Yang  wrote:
>
>> Thanks:
>> I found the corresponding patch in neutron-lbaas:
>> https://review.openstack.org/#/c/568361/
>>
>> The bug was marked high priority by our QA team. I need to fix it as
>> soon as possible.
>> Does Michael Johnson have any good suggestions? I am willing to
>> complete the repair work for this bug if your patch will still take a
>> while to prepare.
>>
>> Michael Johnson  wrote on Fri, Sep 14, 2018 at 7:56 AM:
>>
>>> This is a known regression in the Octavia API performance. It has an
>>> existing story[0] that is under development. You are correct, that
>>> star join is the root of the problem.
>>> Look for a patch soon.
>>>
>>> [0] https://storyboard.openstack.org/#!/story/2002933
>>>
>>> Michael
>>> On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson
>>>  wrote:
>>> >
>>> > This was solved in neutron-lbaas recently, maybe we could adopt the
>>> same method for Octavia?
>>> >
>>> > Sent from my iPhone
>>> >
>>> > On Sep 13, 2018, at 4:54 AM, Jeff Yang 
>>> wrote:
>>> >
>>> > Hi, All
>>> >
>>> > As octavia resources increase, I found that running the "openstack
>>> loadbalancer list" command takes longer and longer. Sometimes a 504 error
>>> is reported.
>>> >
>>> > By reading the code, I found that octavia performs complex left
>>> outer join queries when acquiring resources such as loadbalancer, listener,
>>> pool, etc. in order to only make one trip to the database.
>>> > Reference code: http://paste.openstack.org/show/730022 Line 133
>>> > Generated SQL statements: http://paste.openstack.org/show/730021
>>> >
>>> > So, I suggest adjusting the query strategy to provide different join
>>> queries for different resources.
>>> >
>>> > https://storyboard.openstack.org/#!/story/2003751
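
A rough illustration of the suggested change (not Octavia's actual repository
code; the model and relationship names below are hypothetical): load related
collections with a separate query per relationship instead of one statement
built from a long chain of left outer joins.

# Hypothetical sketch: one extra SELECT per relationship keeps the main
# query small as the number of listeners/pools per load balancer grows.
from sqlalchemy.orm import subqueryload


def list_load_balancers(session, LoadBalancer):
    # LoadBalancer is passed in to keep the snippet model-agnostic; the
    # 'listeners' and 'pools' relationships are example names only.
    query = session.query(LoadBalancer).options(
        subqueryload(LoadBalancer.listeners),
        subqueryload(LoadBalancer.pools),
    )
    return query.all()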


[openstack-dev] [tc][uc]Community Wide Long Term Goals

2018-09-14 Thread Zhipeng Huang
Hi,

Based upon the discussion we had at the TC session this afternoon, I'm
starting to draft a patch to add a long-term goal mechanism to governance.
It is by no means a complete solution at the moment (I still have not thought
through the execution method enough to ensure the outcome), but feel free
to provide your feedback at https://review.openstack.org/#/c/602799/ .

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [octavia] Optimize the query of the octavia database

2018-09-14 Thread Adam Harwell
It's high priority for me as well, so we should be able to get something
done very soon, I think. Look for something early next week maybe?

Thanks,
--Adam

On Thu, Sep 13, 2018, 21:18 Jeff Yang  wrote:

> Thanks:
> I found the corresponding patch in neutron-lbaas:
> https://review.openstack.org/#/c/568361/
>
> The bug was marked high priority by our QA team. I need to fix it as soon
> as possible.
> Does Michael Johnson have any good suggestions? I am willing to complete
> the repair work for this bug if your patch will still take a while to
> prepare.
>
> Michael Johnson  wrote on Fri, Sep 14, 2018 at 7:56 AM:
>
>> This is a known regression in the Octavia API performance. It has an
>> existing story[0] that is under development. You are correct, that
>> star join is the root of the problem.
>> Look for a patch soon.
>>
>> [0] https://storyboard.openstack.org/#!/story/2002933
>>
>> Michael
>> On Thu, Sep 13, 2018 at 10:32 AM Erik Olof Gunnar Andersson
>>  wrote:
>> >
>> > This was solved in neutron-lbaas recently, maybe we could adopt the
>> same method for Octavia?
>> >
>> > Sent from my iPhone
>> >
>> > On Sep 13, 2018, at 4:54 AM, Jeff Yang  wrote:
>> >
>> > Hi, All
>> >
>> > As octavia resources increase, I found that running the "openstack
>> loadbalancer list" command takes longer and longer. Sometimes a 504 error
>> is reported.
>> >
>> > By reading the code, I found that octavia performs complex left
>> outer join queries when acquiring resources such as loadbalancer, listener,
>> pool, etc. in order to only make one trip to the database.
>> > Reference code: http://paste.openstack.org/show/730022 Line 133
>> > Generated SQL statements: http://paste.openstack.org/show/730021
>> >
>> > So, I suggest adjusting the query strategy to provide different join
>> queries for different resources.
>> >
>> > https://storyboard.openstack.org/#!/story/2003751


Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-14 Thread Rico Lin
>
>
> For the candidates who are running for tc seats, please reply to this
> email to indicate if you are open to use certain social media app in
> certain region (like Wechat in China, Line in Japan, etc.), in order to
> reach out to the OpenStack developers in that region and help them to
> connect to the upstream community as well as answering questions or other
> activities that will help. (sorry for the long sentence ... )
>

We definitely need to reach out to developers in every region globally, and
we need a way to bring the technical community closer to those developers
without creating too much burden for everyone. For me, if we can have
channels for broadcasting our key information across the entire community
(like when the next TC/PTL election is, what missions have been proposed, who
people can talk to when a certain issue happens, who you can talk to when you
have a great idea, and most importantly where the right places to go are), we
can expose that to all and maybe encourage community leaders to join. A list
of channels is not hard to set up, but it will make a big difference IMO, and
we can always adjust which channels we have. What we should ensure here is
that we always help newcomers find the right place to engage.

Once we are connected to local developers and communities, it's easier for
the TC to provide guidance, IMO. Will this work? Not sure! So why not try and
find out! :)

>
>
Rico and I have already signed up for WeChat communication for sure :)
>
Good to have you! Let's do it!!

BTW, nice discussion today. Thanks to all who were there in the TC room to share.


[openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-14 Thread Matt Riedemann
tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.


Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.


The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?


I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:


* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out of 
this.
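
To make that concrete, a rough sketch of the lookup (the parameter values and
the config option name are placeholders from this proposal, not an existing
nova API or option):

# Hypothetical sketch of the proposed stop-behavior resolution only.
VALID_BEHAVIORS = ('auto', 'offload', 'retain')


def resolve_stop_behavior(requested=None, configured_auto='retain'):
    """Return 'offload' or 'retain' for a server stop request.

    :param requested: the new microversioned stop parameter, or None when
        the caller did not opt into the new microversion.
    :param configured_auto: what the operator configured 'auto' to mean,
        e.g. a hypothetical CONF.compute.stop_offload_behavior; defaults
        to 'retain' for backward compatibility.
    """
    if requested is not None and requested not in VALID_BEHAVIORS:
        raise ValueError('Unknown stop behavior: %s' % requested)
    if requested in (None, 'auto'):
        return configured_auto
    return requested


# e.g. resolve_stop_behavior() -> 'retain' (old behavior preserved), while
# resolve_stop_behavior('auto', 'offload') -> 'offload' on clouds that opt in.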


"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.


As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just stopped) 
instances. Trying to hide shelved servers as stopped in the API would be 
overly complex IMO so I don't want to try and mask that.


It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that really 
shouldn't happen in a cloud that's configuring nova to shelve by default 
because if they are doing this, their SLA needs to reflect they have the 
capacity to unshelve the server. If you can't honor that SLA, don't 
shelve by default.


So, what are the general feelings on this before I go off and start 
writing up a spec?


[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop

--

Thanks,

Matt



[openstack-dev] [nova][ptl] reduced PTL availability next week Sep 17

2018-09-14 Thread melanie witt

Hey all,

This is just a heads up that I'll be off-site in Boston for work next 
week, so I won't be available on IRC (but I will be replying 
asynchronously to IRC messages and emails when I can).


Gibi will be running the nova meeting on Thursday Sep 20 at 1400 UTC.

I'm going to work on the PTG session summaries for the ML and
documenting Stein cycle themes next week. I'm thinking of documenting the
themes as part of the cycle priorities doc [1].


We've updated the PTG etherpad [2] with action items and agreements for
all of the topics we covered. Please take a look at the etherpad to find
the actions and agreements relevant to your topics of interest.


We'll also kick off runways for Stein [3] next week. So, please feel 
free to start adding approved, ready-for-review items to the queue. And 
nova-core can start populating runways.


If you have any questions about PTG topics or runways, just ask us in 
#openstack-nova on IRC or send a mail to the dev mailing list.


Cheers,
-melanie

[1] 
https://specs.openstack.org/openstack/nova-specs/priorities/stein-priorities.html

[2] https://etherpad.openstack.org/p/nova-ptg-stein
[3] https://etherpad.openstack.org/p/nova-runways-stein




[openstack-dev] [Openstack-sigs][Openstack-operators][all]Expose SIGs/WGs as single window for Users/Ops scenario

2018-09-14 Thread Rico Lin
This idea has been raised in a few places (by me and in Matt's ML thread), so
I would like to give people more of an update on it (in terms of what I have
been raising, what feedback people have given, and what initial ideas I have
collected or have as actions).

*Why are we doing this?*
The basic concept is to give users/ops a single window for turning important
scenarios/use cases or issues (here's an example [1]) into traceable tasks in
a single story/place, and to ask developers to be responsible (by changing
the mission in the governance policy) for co-working on those tasks. SIGs/WGs
very much want to get feedback and use cases, as do project teams (I'm not
going to speak for all projects/SIGs/WGs, but we would certainly like to
collect more ideas). The project teams would also get a central place to
develop for specific user requirements (Edge, NFV, Self-healing, K8s). One
more idea is that we can also use SIGs and WGs as a place for cross-project
docs, and those documents can give more general information on how a user can
plan for that area (again Edge, NFV, Self-healing, K8s). Users/Ops also need
clear information about the dependencies across the projects involved. This
is also a potential way to expose more projects. From this step, we can plan
a cross-project gating implementation (in project gates or as periodic jobs).

*So, where has this been raised, and what was the feedback?*

   - This idea has been raised as a topic in the K8s SIG and Self-healing SIG
   sessions. Feedback from the K8s SIG and Self-healing SIG was generally
   positive; the SIGs appear eager to get use cases and user issues (I haven't
   taken this idea to the rest of the SIGs/WGs yet, so please leave feedback
   if you're in one of those groups). That is mostly because it adds value to
   the SIGs/WGs in the areas they are interested in.
   - This idea has been raised as a topic in the Ops-meetup session.
   Most ops think it would be super if anyone were actually willing to handle
   their issues. The concern is that we have to provide some structure or
   guidelines to avoid a crazy number of useless issues (for example, setting
   up templates for issues). Another piece of feedback from an operator was a
   concern that ops should just go through everything in detail by themselves
   and contact the teams themselves. IMO it depends on the teams to set a
   template and require specific information, or even to figure out which
   project should be in charge of which failure.
   - This idea has been raised as a topic in the TC session.
   The Public Cloud WG also had this idea (and they have done a good job!),
   and it appears to be a preferred way of working for them. What happened is
   that the Public Cloud WG collected a large number of use cases but would
   like to see immediate actions or a traceable way to keep tracking those
   tasks.
   Doug: It might be hard to push developers to SIGs/WGs, but SIGs/WGs can
   always propose a cross-project forum session. Also, it's important to let
   people know who they can talk to.
   Melvin: Make it easier for everyone, and give it visibility. How we can
   actually get one thing done is very important.
   Thierry: Have a way to expose the top priorities which are important for
   OpenStack.

   - I also raised this with some PTLs and UC members. Generally good; Amy (a
   super cute UC member) did raise the concern that there is manual work to
   bind tasks across bug-tracking platforms (for example, if you create a
   story in the Self-healing SIG and say it relates to Heat and Neutron, you
   create a task for Heat in that story, but you still need to create a
   Launchpad bug and link it to that story). That might still need to be done
   manually for now, but what we might be able to change is to consider
   migrating most of the related teams to a single channel in the long term. I
   didn't get the chance to reach most of the PTLs, but I do hope this is a
   place PTLs can also share their feedback.
   - There is an ML thread in the Self-healing SIG [2].
   Not a lot of feedback on that thread yet, but it generally looks good.


*What actions can we take right away?*

   - Please give us feedback.
   - Hold a forum session on this topic so everyone can discuss it (I already
   added a brainstorm item to the TC etherpad, but this spans projects, UC,
   TC, WGs, and SIGs).
   - Set up a cross-committee discussion about restructuring missions, to make
   sure teams are responsible for helping with development, SIGs/WGs are
   responsible for tracking tasks at the story level and helping to trigger
   cross-project discussion, and operators are responsible for following the
   structure when filing issues and providing valuable information.
   - We can also run an experiment with the SIGs/WGs and related projects that
   are willing to join this for a while, see what the outcomes are, and adjust
   accordingly.
   - Can we set cross-project work as a goal for a group of projects instead
   of only as a community-wide goal?
   - Also, if this is a good idea, we can have a guideline for SIGs/WGs that
   suggests how they can have a cross-project gate and a way to let users/ops
   file 

Re: [openstack-dev] [goals][python3] mixed versions?

2018-09-14 Thread Mathieu Gagné
On Fri, Sep 14, 2018 at 4:22 PM, Jim Rollenhagen  
wrote:
> On Thu, Sep 13, 2018 at 9:46 PM, Mathieu Gagné  wrote:
>>
>> On Thu, Sep 13, 2018 at 10:14 PM, Doug Hellmann 
>> wrote:
>> > Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400:
>> >> On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann 
>> >> wrote:
>> >> > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400:
>> >> >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann
>> >> >>  wrote:
>> >> >> >
>> >> >> > IIRC, we also talked about not supporting multiple versions of
>> >> >> > python on a given node, so all of the services on a node would
>> >> >> > need
>> >> >> > to be upgraded together.
>> >> >> >
>> >> >>
>> >> >> Will services support both versions at some point for the same
>> >> >> OpenStack release? Or is it already the case?
>> >> >>
>> >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer
>> >> >> at the same time since all end up running on a compute node and
>> >> >> sharing the same python version.
>> >> >
>> >> > We need to differentiate between what the upstream community supports
>> >> > and what distros support. In the meeting in Vancouver, we said that
>> >> > the community would support upgrading all of the services on a
>> >> > single node together. Distros may choose to support more complex
>> >> > configurations if they choose, and I'm sure patches related to any
>> >> > bugs would be welcome.
>> >>
>> >> We maintain and build our own packages with virtualenv. We aren't
>> >> bound to distribution packages.
>> >
>> > OK, I should rephrase then. I'm talking about the limits on the
>> > tests that I think are useful and reasonable to run upstream and
>> > for the community to support.
>> >
>> >> > But I don't think we can ask the community
>> >> > to support the infinite number of variations that would occur if
>> >> > we said we would test upgrading some services independently of
>> >> > others (unless I'm mistaken, we don't even do that for services
>> >> > all using the same version of python 2, today).
>> >>
>> >> This contradicts what I heard in fishbowl sessions from core reviewers
>> >> and read on IRC.
>> >> People were under the false impression that you need to upgrade
>> >> OpenStack in lock steps when in fact, it has never been the case.
>> >> You should be able to upgrade services individually.
>> >>
>> >> Has it changed since?
>> >
>> > I know that some deployments do upgrade components separately, and
>> > it works in some configurations.  All we talked about in Vancouver
>> > was how we would test upgrading python 2 to python 3, and given
>> > that the community has never, as far as I know, run upgrade tests
>> > in CI that staggered the upgrades of components on a given node,
>> > there seemed no reason to add those tests just for the python 2 to
>> > 3 case.
>> >
>> > Perhaps someone on the QA team can correct me if I'm wrong about the
>> > history there.
>> >
>>
>> Or maybe it's me that misinterpreted the actual impact of not
>> supported 2 versions of Python at the same time.
>>
>> Let's walk through an actual upgrade scenario.
>>
>> I suppose the migration to Python 3 will happen around Stein and
>> therefore affects people upgrading from Rocky to Stein. At this point,
>> an operator should already be running Ubuntu Bionic which supports
>> both Python 2.7 and 3.6.
>>
>> If that operator is using virtualenv (and not distribution packages),
>> it's only a matter of building a new virtualenv using Python 3.6 for
>> Stein instead. This means installing both Python 2.7/3.6 on the same
>> node should be enough to upgrade and switch to Python 3.6 on a per
>> project/service basis.
>>
>> My main use case is with the compute node which has multiple services
>> running. Come to think of it, it's a lot less impactful than I
>> thought.
>>
>> Let me know if I got some details wrong. But if the steps are similar
>> to what I described above, I no longer have concerns or objections.
>
>
>
> The plan is to maintain support for both Python 2 and 3 in the T release
> (and possibly S, if the projects get py3 work done quickly). See
> https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html#python2-deprecation-timeline,
> notably:
>
> 2. All projects must complete the work for Python 3 support by the end of
> the T cycle, unless they are blocked for technical reasons by dependencies
> they rely on.
> 4. Existing projects under TC governance at the time this resolution is
> accepted must not drop support for Python 2 before the beginning of the U
> development cycle (currently anticipated for late 2019).
>
> This gives operators an opportunity to keep the Python 2->3 upgrade
> completely separate from an OpenStack upgrade.
>
> So in short, yes, you'll be fine :)
>
> // jim


Thanks for the clarification. Much appreciated!

>
>>
>>
>> I think the only people who could be concerned are those doing rolling
>> upgrades, which could impact RPC message 

Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-14 Thread Jeremy Stanley
On 2018-09-14 13:52:50 -0600 (-0600), Zhipeng Huang wrote:
> This is a joint question from mnaser and me :)
> 
> For the candidates who are running for tc seats, please reply to
> this email to indicate if you are open to use certain social media
> app in certain region (like Wechat in China, Line in Japan, etc.),
> in order to reach out to the OpenStack developers in that region
> and help them to connect to the upstream community as well as
> answering questions or other activities that will help. (sorry for
> the long sentence ... )
[...]

I respect that tool choices can make a difference in enabling or
improving our outreach to specific cultures. I'll commit to
personally rejecting presence on proprietary social media services
so as to demonstrate that public work can be done within our
community while relying exclusively on free/libre open source
software. I recognize the existence of the free software movement as
a distinct culture with whom we could do a better job of connecting.
If as a community we promote and embrace non-free tools we will only
continue to alienate them, so I'm happy to serve as an example that
it is possible to be an engaged and effective contributor to our
community without compromising those ideals.
-- 
Jeremy Stanley




Re: [openstack-dev] [goals][python3] mixed versions?

2018-09-14 Thread Jim Rollenhagen
On Thu, Sep 13, 2018 at 9:46 PM, Mathieu Gagné  wrote:

> On Thu, Sep 13, 2018 at 10:14 PM, Doug Hellmann 
> wrote:
> > Excerpts from Mathieu Gagné's message of 2018-09-13 20:09:12 -0400:
> >> On Thu, Sep 13, 2018 at 7:41 PM, Doug Hellmann 
> wrote:
> >> > Excerpts from Mathieu Gagné's message of 2018-09-13 14:12:56 -0400:
> >> >> On Wed, Sep 12, 2018 at 2:04 PM, Doug Hellmann <
> d...@doughellmann.com> wrote:
> >> >> >
> >> >> > IIRC, we also talked about not supporting multiple versions of
> >> >> > python on a given node, so all of the services on a node would need
> >> >> > to be upgraded together.
> >> >> >
> >> >>
> >> >> Will services support both versions at some point for the same
> >> >> OpenStack release? Or is it already the case?
> >> >>
> >> >> I would like to avoid having to upgrade Nova, Neutron and Ceilometer
> >> >> at the same time since all end up running on a compute node and
> >> >> sharing the same python version.
> >> >
> >> > We need to differentiate between what the upstream community supports
> >> > and what distros support. In the meeting in Vancouver, we said that
> >> > the community would support upgrading all of the services on a
> >> > single node together. Distros may choose to support more complex
> >> > configurations if they choose, and I'm sure patches related to any
> >> > bugs would be welcome.
> >>
> >> We maintain and build our own packages with virtualenv. We aren't
> >> bound to distribution packages.
> >
> > OK, I should rephrase then. I'm talking about the limits on the
> > tests that I think are useful and reasonable to run upstream and
> > for the community to support.
> >
> >> > But I don't think we can ask the community
> >> > to support the infinite number of variations that would occur if
> >> > we said we would test upgrading some services independently of
> >> > others (unless I'm mistaken, we don't even do that for services
> >> > all using the same version of python 2, today).
> >>
> >> This contradicts what I heard in fishbowl sessions from core reviewers
> >> and read on IRC.
> >> People were under the false impression that you need to upgrade
> >> OpenStack in lock steps when in fact, it has never been the case.
> >> You should be able to upgrade services individually.
> >>
> >> Has it changed since?
> >
> > I know that some deployments do upgrade components separately, and
> > it works in some configurations.  All we talked about in Vancouver
> > was how we would test upgrading python 2 to python 3, and given
> > that the community has never, as far as I know, run upgrade tests
> > in CI that staggered the upgrades of components on a given node,
> > there seemed no reason to add those tests just for the python 2 to
> > 3 case.
> >
> > Perhaps someone on the QA team can correct me if I'm wrong about the
> > history there.
> >
>
> Or maybe it's me that misinterpreted the actual impact of not
> supported 2 versions of Python at the same time.
>
> Let's walk through an actual upgrade scenario.
>
> I suppose the migration to Python 3 will happen around Stein and
> therefore affects people upgrading from Rocky to Stein. At this point,
> an operator should already be running Ubuntu Bionic which supports
> both Python 2.7 and 3.6.
>
> If that operator is using virtualenv (and not distribution packages),
> it's only a matter of building a new virtualenv using Python 3.6 for
> Stein instead. This means installing both Python 2.7/3.6 on the same
> node should be enough to upgrade and switch to Python 3.6 on a per
> project/service basis.
>
> My main use case is with the compute node which has multiple services
> running. Come to think of it, it's a lot less impactful than I
> thought.
>
> Let me know if I got some details wrong. But if the steps are similar
> to what I described above, I no longer have concerns or objections.
>


The plan is to maintain support for both Python 2 and 3 in the T release
(and possibly S, if the projects get py3 work done quickly). See
https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html#python2-deprecation-timeline,
notably:

2. All projects must complete the work for Python 3 support by the end of
the T cycle, unless they are blocked for technical reasons by dependencies
they rely on.
4. Existing projects under TC governance at the time this resolution is
accepted must not drop support for Python 2 before the beginning of the U
development cycle (currently anticipated for late 2019).

This gives operators an opportunity to keep the Python 2->3 upgrade
completely separate from an OpenStack upgrade.

So in short, yes, you'll be fine :)

// jim


>
> I think the only people who could be concerned are those doing rolling
> upgrades, which could impact RPC message encoding as described by
> Thomas. But you are already addressing it so I will just read and see
> where this is going.
>
> Thanks
>
> --
> Mathieu
>

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-14 Thread Michael Johnson
I don't know for sure, but I assume it is short for "OpenStack" and was
used to prefix OpenStack policies vs. third-party plugin policies for
documentation purposes.

I am guilty of borrowing this from existing code examples[0].

[0] 
http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html

Michael
On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad  wrote:
>
>
>
> On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:
>>
>> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>> which maps to the "os-<service-type>-api:<resource>:<method>" format.
>
>
> Thanks for explaining the justification, Michael.
>
> I'm curious if anyone has context on the "os-" part of the format? I've seen 
> that pattern in a couple different projects. Does anyone know about its 
> origin? Was it something we converted to our policy names because of API 
> names/paths?
>
>>
>>
>> I selected it as it uses the service-type[1], references the API
>> resource, and then the method. So it maps well to the API reference[2]
>> for the service.
>>
>> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
>> [1] https://service-types.openstack.org/
>> [2] 
>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>>
>> Michael
>> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
>> >
>> > So +1
>> >
>> >
>> >
>> > Tim
>> >
>> >
>> >
>> > From: Lance Bragstad 
>> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> > 
>> > Date: Wednesday, 12 September 2018 at 20:43
>> > To: "OpenStack Development Mailing List (not for usage questions)" 
>> > , OpenStack Operators 
>> > 
>> > Subject: [openstack-dev] [all] Consistent policy names
>> >
>> >
>> >
>> > The topic of having consistent policy names has popped up a few times this 
>> > week. Ultimately, if we are to move forward with this, we'll need a 
>> > convention. To help with that a little bit I started an etherpad [0] that 
>> > includes links to policy references, basic conventions *within* that 
>> > service, and some examples of each. I got through quite a few projects 
>> > this morning, but there are still a couple left.
>> >
>> >
>> >
>> > The idea is to look at what we do today and see what conventions we can 
>> > come up with to move towards, which should also help us determine how much 
>> > each convention is going to impact services (e.g. picking a convention 
>> > that will cause 70% of services to rename policies).
>> >
>> >
>> >
>> > Please have a look and we can discuss conventions in this thread. If we 
>> > come to agreement, I'll start working on some documentation in oslo.policy 
>> > so that it's somewhat official before starting to rename policies.
>> >
>> >
>> >
>> > [0] https://etherpad.openstack.org/p/consistent-policy-names
>> >


[openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-14 Thread Zhipeng Huang
This is a joint question from mnaser and me :)

For the candidates who are running for TC seats, please reply to this email
to indicate whether you are open to using certain social media apps in
certain regions (like WeChat in China, Line in Japan, etc.), in order to
reach out to the OpenStack developers in those regions and help them connect
to the upstream community, as well as answering questions or other activities
that will help. (Sorry for the long sentence ...)

Rico and I have already signed up for WeChat communication for sure :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


[openstack-dev] [tc]Global Reachout Proposal

2018-09-14 Thread Zhipeng Huang
Hi all,

Following up on the diversity discussion we had in the TC session this
morning [0], I've proposed a resolution on helping the technical community at
large engage in global reach-out for OpenStack more efficiently.

Your feedback is welcome. Whether or not this ends up as a new resolution at
the end of the day, this is a conversation worth having.

[0] https://review.openstack.org/602697

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


[openstack-dev] [PTG] Stein Team Photo Files

2018-09-14 Thread Kendall Nelson
Hello!

Here are the photos we took this week of various teams :)

https://www.dropbox.com/sh/2pmvfkstudih2wf/AAAGg7c0bYZcWQwKDOKiSwR7a?dl=0

Enjoy!

-the Kendalls (diablo_rojo & wendallkaters)


Re: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?

2018-09-14 Thread Matt Riedemann

On 3/28/2018 4:35 PM, Jay Pipes wrote:

On 03/28/2018 03:35 PM, Matt Riedemann wrote:

On 3/27/2018 10:37 AM, Jay Pipes wrote:


If we want to actually fix the issue once and for all, we need to 
make availability zones a real thing that has a permanent identifier 
(UUID) and store that permanent identifier in the instance (not the 
instance metadata).


Or we can continue to paper over major architectural weaknesses like 
this.


Stepping back a second from the rest of this thread, what if we do the 
hard fail bug fix thing, which could be backported to stable branches, 
and then we have the option of completely re-doing this with aggregate 
UUIDs as the key rather than the aggregate name? Because I think the 
former could get done in Rocky, but the latter probably not.


I'm fine with that (and was fine with it before, just stating that 
solving the problem long-term requires different thinking)


Best,
-jay


Just FYI for anyone that cared about this thread, we agreed at the Stein 
PTG to resolve the immediate bug [1] by blocking AZ renames while the AZ 
has instances in it. There won't be a microversion for that change and 
we'll be able to backport it (with a release note I suppose).


[1] https://bugs.launchpad.net/nova/+bug/1782539
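
For illustration only (this is not nova's actual code, just the shape of the
check we agreed on), the fix amounts to something like:

# Hypothetical sketch: refuse to change an aggregate's availability zone
# while instances still live in the old zone.
class AZRenameNotAllowed(Exception):
    pass


def validate_az_rename(current_az, requested_az, instances_in_az):
    """Raise if renaming the AZ would strand existing instances.

    :param current_az: the aggregate's current availability_zone name
    :param requested_az: the availability_zone name being requested
    :param instances_in_az: number of instances currently in current_az
    """
    if current_az and requested_az != current_az and instances_in_az:
        raise AZRenameNotAllowed(
            'Cannot rename availability zone %s while %d instance(s) are '
            'in it.' % (current_az, instances_in_az))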

--

Thanks,

Matt



Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-09-14 Thread Jeremy Stanley
On 2018-09-11 16:53:05 + (+), Jeremy Stanley wrote:
> On 2018-08-01 08:40:51 -0700 (-0700), James E. Blair wrote:
> > Monty Taylor  writes:
> > > On 08/01/2018 12:45 AM, Ian Wienand wrote:
> > > > I'd suggest to start, people with an interest in a channel can
> > > > request +r from an IRC admin in #openstack-infra and we track
> > > > it at [2]
> > >
> > > To mitigate the pain caused by +r - we have created a channel
> > > called #openstack-unregistered and have configured the channels
> > > with the +r flag to forward people to it.
> [...]
> > It turns out this was a very popular option, so we've gone ahead
> > and performed this for all channels registered with accessbot.
> [...]
> 
> We rolled this back 5 days ago for all channels and haven't had any
> new reports of in-channel spamming yet. Hopefully this means the
> recent flood is behind us now but definitely let us know (replying
> on this thread or in #openstack-infra on Freenode) if you see any
> signs of resurgence.

And then it was turned back on again a few hours ago after a new
wave of spam cropped up. We'll try to continue to keep an eye on
things.
-- 
Jeremy Stanley




Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Alex Schultz
On Fri, Sep 14, 2018 at 10:20 AM, Előd Illés  wrote:
> Hi,
>
> just a comment: the Ocata release is not EOL [1][2] but rather in Extended
> Maintenance. Do you really want to EOL TripleO stable/ocata?
>

Yes unless there are any objections.  We've already been keeping this
branch alive on life support, but CI has started to fail and we've just
been turning off jobs as they fail.  We had not planned on extended
maintenance for Ocata (or Pike).  We'll likely consider that starting
with Queens.  We could switch it to extended maintenance but without
the promotion jobs we won't have packages to run CI so it would be
better to just EOL it.

Thanks,
-Alex

> [1] https://releases.openstack.org/
> [2]
> https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
>
> Cheers,
>
> Előd
>
>
>
> On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote:
>>
>>
>> On 09/14/2018 09:01 AM, Alex Schultz wrote:
>>>
>>> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar 
>>> wrote:

 Hello,

 The Ocata release went EOL on 27-08-2018 [1].
 In TripleO, we are running Ocata jobs in TripleO CI and in promotion
 pipelines.
 Can we drop all the jobs related to Ocata, or do we need to keep some jobs
 to support upgrades in CI?

>>> I think unless there are any objections around upgrades, we can drop
>>> the promotion pipelines. It's likely that we'll also want to
>>> officially EOL the tripleo ocata branches.
>>
>> sounds good to me.
>>>
>>> Thanks,
>>> -Alex
>>>
 Links:
 [1.] https://releases.openstack.org/

 Thanks,

 Chandan Kumar




Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Előd Illés

Hi,

just a comment: the Ocata release is not EOL [1][2] but rather in Extended 
Maintenance. Do you really want to EOL TripleO stable/ocata?


[1] https://releases.openstack.org/
[2] 
https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html


Cheers,

Előd


On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote:


On 09/14/2018 09:01 AM, Alex Schultz wrote:

On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar  wrote:

Hello,

The Ocata release went EOL on 27-08-2018 [1].
In TripleO, we are running Ocata jobs in TripleO CI and in promotion pipelines.
Can we drop all the jobs related to Ocata, or do we need to keep some jobs
to support upgrades in CI?


I think unless there are any objections around upgrades, we can drop
the promotion pipelines. It's likely that we'll also want to
officially EOL the tripleo ocata branches.

sounds good to me.

Thanks,
-Alex


Links:
[1.] https://releases.openstack.org/

Thanks,

Chandan Kumar



Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Juan Antonio Osorio Robles


On 09/14/2018 09:01 AM, Alex Schultz wrote:
> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar  wrote:
>> Hello,
>>
>> The Ocata release went EOL on 27-08-2018 [1].
>> In TripleO, we are running Ocata jobs in TripleO CI and in promotion
>> pipelines.
>> Can we drop all the jobs related to Ocata, or do we need to keep some jobs
>> to support upgrades in CI?
>>
> I think unless there are any objections around upgrades, we can drop
> the promotion pipelines. It's likely that we'll also want to
> officially EOL the tripleo ocata branches.
sounds good to me.
> Thanks,
> -Alex
>
>> Links:
>> [1.] https://releases.openstack.org/
>>
>> Thanks,
>>
>> Chandan Kumar
>>


Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Alex Schultz
On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar  wrote:
> Hello,
>
> The Ocata release went EOL on 27-08-2018 [1].
> In TripleO, we are running Ocata jobs in TripleO CI and in promotion
> pipelines.
> Can we drop all the jobs related to Ocata, or do we need to keep some jobs
> to support upgrades in CI?
>

I think unless there are any objections around upgrades, we can drop
the promotion pipelines. It's likely that we'll also want to
officially EOL the tripleo ocata branches.

Thanks,
-Alex

> Links:
> [1.] https://releases.openstack.org/
>
> Thanks,
>
> Chandan Kumar
>


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:

> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> which maps to the "os-<service-type>-api:<resource>:<method>" format.
>

Thanks for explaining the justification, Michael.

I'm curious if anyone has context on the "os-" part of the format? I've
seen that pattern in a couple different projects. Does anyone know about
its origin? Was it something we converted to our policy names because of
API names/paths?


>
> I selected it as it uses the service-type[1], references the API
> resource, and then the method. So it maps well to the API reference[2]
> for the service.
>
> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> [1] https://service-types.openstack.org/
> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>
> Michael
> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >
> > So +1
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Lance Bragstad 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Date: Wednesday, 12 September 2018 at 20:43
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, OpenStack Operators <
> openstack-operat...@lists.openstack.org>
> > Subject: [openstack-dev] [all] Consistent policy names
> >
> >
> >
> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >
> >
> >
> > The idea is to look at what we do today and see what conventions we can
> come up with to move towards, which should also help us determine how much
> each convention is going to impact services (e.g. picking a convention that
> will cause 70% of services to rename policies).
> >
> >
> >
> > Please have a look and we can discuss conventions in this thread. If we
> come to agreement, I'll start working on some documentation in oslo.policy
> so that it's somewhat official before starting to rename policies.
> >
> >
> >
> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >


Re: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials)

2018-09-14 Thread Davanum Srinivas
Folks,

Sorry for the top post - those of you who are still at the PTG, please feel
free to drop in to the Clear Creek room today.

Thanks,
Dims

On Thu, Sep 13, 2018 at 2:44 PM Jeremy Stanley  wrote:

> On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote:
> [...]
> > Again, I'm not saying TC members should be doing all of the work
> > themselves. That's not realistic, especially when critical parts
> > of any major effort are going to involve developers from projects
> > on which none of the TC members are active contributors (e.g.
> > nova). I want to see TC members herd cats, for lack of a better
> > analogy, and help out technically (with code) where possible.
>
> I can respect that. I think that OpenStack made a mistake in naming
> its community management governance body the "technical" committee.
> I do agree that having TC members engage in activities with tangible
> outcomes is preferable, and that the needs of the users of its
> software should weigh heavily in prioritization decisions, but those
> are not the only problems our community faces nor is it as if there
> are no other responsibilities associated with being a TC member.
>
> > Given the repeated mention of how the "help wanted" list continues
> > to not draw in contributors, I think the recruiting role of the TC
> > should take a back seat to actually stepping in and helping work
> > on those items directly. For example, Sean McGinnis is taking an
> > active role in the operators guide and other related docs that
> > continue to be discussed at every face to face event since those
> > docs were dropped from openstack-manuals (in Pike).
>
> I completely agree that the help wanted list hasn't worked out well
> in practice. It was based on requests from the board of directors to
> provide some means of communicating to their business-focused
> constituency where resources would be most useful to the project.
> We've had a subsequent request to reorient it to be more like a set
> of job descriptions along with clearer business use cases explaining
> the benefit to them of contributing to these efforts. In my opinion
> it's very much the responsibility of the TC to find ways to
> accomplish these sorts of things as well.
>
> > I think it's fair to say that the people generally elected to the
> > TC are those most visible in the community (it's a popularity
> > contest) and those people are generally the most visible because
> > they have the luxury of working upstream the majority of their
> > time. As such, it's their duty to oversee and spend time working
> > on the hard cross-project technical deliverables that operators
> > and users are asking for, rather than think of an infinite number
> > of ways to try and draw *others* to help work on those gaps.
>
> But not everyone who is funded for full-time involvement with the
> community is necessarily "visible" in ways that make them electable.
> Higher-profile involvement in such activities over time is what gets
> them the visibility to be more easily elected to governance
> positions via "popularity contest" mechanics.
>
> > As I think it's the role of a PTL within a given project to have a
> > finger on the pulse of the technical priorities of that project
> > and manage the developers involved (of which the PTL certainly may
> > be one), it's the role of the TC to do the same across openstack
> > as a whole. If a PTL doesn't have the time or willingness to do
> > that within their project, they shouldn't be the PTL. The same
> > goes for TC members IMO.
>
> Completely agree, I think we might just disagree on where to strike
> the balance of purely technical priorities for the TC (as I
> personally think the TC is somewhat incorrectly named).
> --
> Jeremy Stanley
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About microversion setting to enable nested resource provider

2018-09-14 Thread Sylvain Bauza
On Thu, Sep 13, 2018 at 19:29, Naichuan Sun  wrote:

> Hi, Sylvain,
>
>
>
> Thank you very much for the information. It is a pity that I can't attend
> the meeting.
>
> I have a concern about the reshaper in multi-type vGPU support.
>
> In the old vGPU support we only have one VGPU inventory on the root resource
> provider, which means we only support one vGPU type. When we do a reshape,
> placement will send the allocations (which include just one VGPU resource
> allocation) to the driver. If the host has more than one
> pGPU/pGPU group (each supporting a different vGPU type), how do we know which
> pGPU/pGPU group owns that allocation information? Do we need to communicate
> with the hypervisor to confirm that?
>

The reshape will actually move the existing allocations for the VGPU resource
class to the inventory for that class, which now lives on a child resource
provider.
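
To make that concrete, here is a rough, simplified sketch (illustrative only,
not the actual virt driver code; the provider UUIDs and amounts are
placeholders) of what moving the VGPU part of an allocation from the root
provider to a child provider looks like at the data-structure level:

  # Simplified sketch: move the VGPU piece of each allocation from the root
  # compute-node provider to the child provider that now owns the VGPU
  # inventory. The dict layout is an abbreviated version of the allocations
  # structure placement hands over during a reshape.
  ROOT_RP = 'compute-node-uuid'          # placeholder UUID
  CHILD_RP = 'pgpu-child-provider-uuid'  # placeholder UUID

  allocations = {
      'instance-uuid-1': {
          'allocations': {
              ROOT_RP: {'resources': {'VCPU': 4, 'MEMORY_MB': 8192, 'VGPU': 1}},
          },
      },
  }

  def reshape(allocations, root_rp, child_rp):
      """Move every VGPU allocation from the root provider to the child."""
      for consumer in allocations.values():
          root_resources = consumer['allocations'].get(root_rp, {}).get('resources', {})
          vgpus = root_resources.pop('VGPU', None)
          if vgpus:
              child = consumer['allocations'].setdefault(child_rp, {'resources': {}})
              child['resources']['VGPU'] = vgpus

  reshape(allocations, ROOT_RP, CHILD_RP)
  # Afterwards the VGPU piece of the allocation sits on the child provider,
  # while VCPU/MEMORY_MB remain on the root compute-node provider.

This only shows the data-structure side; as noted just below, consistent
provider naming is what ties each child provider back to a specific pGPU/pGPU
group.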

Since we agreed on keeping consistent naming, there is no need to guess
which is which. That said, you raise a point that was discussed during the
PTG and we all agreed there was an upgrade impact as multiple vGPUs
shouldn't be allowed until the reshape is done.

Accordingly, see the spec I reproposed for Stein, which describes the upgrade
impact: https://review.openstack.org/#/c/602474/

Since I'm at the PTG there is a huge time difference between us, but we can
discuss that point next week when I'm back (my mornings then match your
afternoons).

-Sylvain

>
>
> Thank you very much.
>
>
>
> BR.
>
> Naichuan Sun
>
>
>
> From: Sylvain Bauza [mailto:sba...@redhat.com]
> Sent: Thursday, September 13, 2018 11:47 PM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] About microversion setting to enable
> nested resource provider
>
>
>
> Hey Naichuan,
>
> FWIW, we discussed the missing pieces for nested resource providers.
> See the (currently-in-use) etherpad
> https://etherpad.openstack.org/p/nova-ptg-stein and look up "closing
> the gap on nested resource providers" (L144 as I write this).
>
>
>
> The fact that we are not able to schedule yet is a critical piece that we
> said we're going to work on as soon as we can.
>
>
>
> -Sylvain
>
>
>
> On Thu, Sep 13, 2018 at 9:14 AM, Eric Fried  wrote:
>
> There's a patch series in progress for this:
>
> https://review.openstack.org/#/q/topic:use-nested-allocation-candidates
>
> It needs some TLC. I'm sure gibi and tetsuro would welcome some help...
>
> efried
>
>
> On 09/13/2018 08:31 AM, Naichuan Sun wrote:
> > Thank you very much, Jay.
> > Is there somewhere I could set the microversion (some configuration file?),
> or do I just modify the source code to set it?
> >
> > BR.
> > Naichuan Sun
> >
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Thursday, September 13, 2018 9:19 PM
> > To: Naichuan Sun ; OpenStack Development
> Mailing List (not for usage questions) 
> > Cc: melanie witt ; efr...@us.ibm.com; Sylvain Bauza
> 
> > Subject: Re: About microversion setting to enable nested resource
> provider
> >
> > On 09/13/2018 06:39 AM, Naichuan Sun wrote:
> >> Hi, guys,
> >>
> >> Looks like n-rp is disabled by default because of the microversion check for 1.29:
> >> https://github.com/openstack/nova/blob/master/nova/api/openstack/place
> >> ment/handlers/allocation_candidate.py#L252
> >>
> >> Anyone know how to set the microversion to enable n-rp in placement?
> >
> > It is the client which must send the 1.29+ placement API microversion
> header to indicate to the placement API server that the client wants to
> receive nested provider information in the allocation candidates response.
> >
> > Currently, nova-scheduler calls the scheduler reportclient's
> > get_allocation_candidates() method:
> >
> >
> https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/manager.py#L138
> >
> > The scheduler reportclient's get_allocation_candidates() method
> currently passes the 1.25 placement API microversion header:
> >
> >
> https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L353
> >
> >
> https://github.com/openstack/nova/blob/0ba34a818414823eda5e693dc2127a534410b5df/nova/scheduler/client/report.py#L53
> >
> > In order to get the nested information returned in the allocation
> candidates response, that would need to be upped to 1.29.
> >
> > Best,
> > -jay
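
For anyone who wants to try this by hand, here is a minimal sketch (the
auth URL, credentials and resource amounts are placeholders, and this is not
the code path nova itself uses) of requesting allocation candidates from
placement with the 1.29 microversion header:

  # Ask placement for allocation candidates, opting in to microversion 1.29
  # so nested providers show up in the response.
  from keystoneauth1 import loading, session

  loader = loading.get_plugin_loader('password')
  auth = loader.load_from_options(
      auth_url='http://controller/identity/v3',   # placeholder values
      username='admin', password='secret',
      project_name='admin',
      user_domain_id='default', project_domain_id='default')
  sess = session.Session(auth=auth)

  resp = sess.get(
      '/allocation_candidates',
      endpoint_filter={'service_type': 'placement'},
      headers={'OpenStack-API-Version': 'placement 1.29'},
      params={'resources': 'VCPU:1,MEMORY_MB:2048,VGPU:1'})
  print(resp.json())

The report client change mentioned above amounts to the same thing: sending a
higher version in that header.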
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack

2018-09-14 Thread Steven Dake (stdake)
Shyam,


Our policy, decided long ago, is that we will work with third-party components
(such as plugins) for nova, cinder, neutron, horizon, etc. that are proprietary,
as long as the code that merges into Kolla itself is ASL2 (Apache License 2.0).


What is your plugin for? If it's for nova, cinder, neutron, or horizon, it is
covered by this policy pretty much wholesale. If it's a different type of
system, some debate by the core team may be warranted.


Cheers

-steve


From: Shyam Biradar 
Sent: Wednesday, September 12, 2018 5:01 AM
To: Andreas Jaeger
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Committing proprietary plugins to OpenStack

Yes Andreas, whatever deployment scripts we push will be under the Apache
license.

Shyam Biradar | Software Engineer | DevOps
M +91 8600266938 | shyam.bira...@trilio.io | trilio.io



On Wed, Sep 12, 2018 at 5:24 PM, Andreas Jaeger  wrote:
On 2018-09-12 13:21, Shyam Biradar wrote:
Hi,

We have a proprietary OpenStack plugin. We want to commit deployment scripts,
like containers and heat templates, upstream to the tripleo and kolla projects,
but not the actual product code.

Is it possible? How can we handle this case? Any thoughts are welcome.

It's first a legal question - is everything you are pushing under the Apache
license, like the rest of the project you are pushing to?

And then it's a policy question for the kolla project, so let me tag them.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Chandan kumar
Hello,

The Ocata release reached EOL on 27-08-2018 [1].
In TripleO, we are still running Ocata jobs in TripleO CI and in the promotion
pipelines. Can we drop all the jobs related to Ocata, or do we need to keep
some jobs to support upgrades in CI?

Links:
[1.] https://releases.openstack.org/

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker] vPTG meetup schedule

2018-09-14 Thread Dharmendra Kushwaha
Hi all,



We have planned our one-day virtual PTG meetup for Stein on the schedule
below.

Please find the meeting details:



Schedule: 21st September, 7:00 UTC to 11:00 UTC



Meeting Channel: https://bluejeans.com/553456496

Etherpad link:  https://etherpad.openstack.org/p/Tacker-PTG-Stein



Thanks & Regards

Dharmendra Kushwaha
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] (ex) PTL on vacation

2018-09-14 Thread Tony Breeds
Hi All,
As Stable is no longer a project, I'm no longer a PTL so I don't
really need to do this but ...

I'm going on vacation for 3'ish weeks.  I do plan on checking my email
from time to time, but if anything comes up that needs urgent
attention you'll need to ping stable-maint-core.

I'm not fussy about my open changes so if they need fixing and you'd
like them merged while I'm out feel free to upload your own revision.

Have fun, I know I will ;D

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev