Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Sorry for appending another email with something I missed saying.

Alex Xu wrote on Sat, Sep 29, 2018 at 10:01 AM:

>
>
> Jay Pipes wrote on Sat, Sep 29, 2018 at 5:51 AM:
>
>> On 09/28/2018 04:42 PM, Eric Fried wrote:
>> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
>> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried 
>> wrote:
>> >>> It's time somebody said this.
>> >>>
>> >>> Every time we turn a corner or look under a rug, we find another use
>> >>> case for provider traits in placement. But every time we have to have
>> >>> the argument about whether that use case satisfies the original
>> >>> "intended purpose" of traits.
>> >>>
>> >>> That's only reason I've ever been able to glean: that it (whatever
>> "it"
>> >>> is) wasn't what the architects had in mind when they came up with the
>> >>> idea of traits. We're not even talking about anything that would
>> require
>> >>> changes to the placement API. Just, "Oh, that's not a *capability* -
>> >>> shut it down."
>> >>>
>> >>> Bubble wrap was originally intended as a textured wallpaper and a
>> >>> greenhouse insulator. Can we accept the fact that traits have (many,
>> >>> many) uses beyond marking capabilities, and quit with the arbitrary
>> >>> restrictions?
>> >>
>> >> How far are we willing to go? Does an arbitrary (key: value) pair
>> >> encoded in a trait name like key_`str(value)` (e.g.
>> CURRENT_TEMPERATURE:
>> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
>> >> placement?
>> >
>> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
>> > TEMPERATURE_ is not.
>>
>> That's correct, because you're encoding >1 piece of information into the
>> single string (the fact that it's a temperature *and* the value of that
>> temperature are the two pieces of information encoded into the single
>> string).
>>
>> Now that there's multiple pieces of information encoded in the string
>> the reader of the trait string needs to know how to decode those bits of
>> information, which is exactly what we're trying to avoid doing (because
>> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and
>> the giant hairball that is the NUMA and CPU pinning "metadata requests"
>> how that turns out).
>>
>
> May I take it that one of Jay's complaints is that the metadata API is
> undiscoverable? That is, the extra_specs mess and the ComputeCapabilitiesFilter mess?
>

If yes, then we resolve the discoverability with the "/traits" API.


>
> Another complaint is about the information in the string. Agreed that
> TEMPERATURE_ is terrible.
> I now prefer the approach I used in the nvdimm proposal: I don't want to use
> traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want to put them
> into different resource providers, and use min_size and max_size to limit the
> allocation. The user will then request with a resource class like
> RC_NVDIMM_GB=512.
>

TEMPERATURE_ is wrong because of the way it is used. But I don't
think the version of the BIOS is wrong; I don't expect the end user to read
the information from the trait directly, and there should be documentation from
the admin to explain more. The version of the BIOS should be a thing understood
by the admin, and that is enough.


>
>>
>> > This thread isn't about setting these parameters; it's about getting
>> > us to a point where we can discuss a question just like this one
>> > without running up against: >
>> > "That's a hard no, because you shouldn't encode key/value pairs in
>> traits."
>> >
>> > "Oh, why's that?"
>> >
>> > "Because that's not what we intended when we created traits."
>> >
>> > "But it would work, and the alternatives are way harder."
>> >
>> > "-1"
>> >
>> > "But..."
>> >
>> > "-1"
>>
>> I believe I've articulated a number of times why traits should remain
>> unary pieces of information, and not just said "because that's what we
>> intended when we created traits".
>>
>> I'm tough on this because I've seen the garbage code and unmaintainable
>> mess that not having structurally sound data modeling concepts and
>> information interpretation rules leads to in Nova and I don't want to
>> encourage any more of it.
>>
>> -jay
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Jay Pipes wrote on Sat, Sep 29, 2018 at 5:51 AM:

> On 09/28/2018 04:42 PM, Eric Fried wrote:
> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:
> >>> It's time somebody said this.
> >>>
> >>> Every time we turn a corner or look under a rug, we find another use
> >>> case for provider traits in placement. But every time we have to have
> >>> the argument about whether that use case satisfies the original
> >>> "intended purpose" of traits.
> >>>
> >>> That's only reason I've ever been able to glean: that it (whatever "it"
> >>> is) wasn't what the architects had in mind when they came up with the
> >>> idea of traits. We're not even talking about anything that would
> require
> >>> changes to the placement API. Just, "Oh, that's not a *capability* -
> >>> shut it down."
> >>>
> >>> Bubble wrap was originally intended as a textured wallpaper and a
> >>> greenhouse insulator. Can we accept the fact that traits have (many,
> >>> many) uses beyond marking capabilities, and quit with the arbitrary
> >>> restrictions?
> >>
> >> How far are we willing to go? Does an arbitrary (key: value) pair
> >> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE:
> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
> >> placement?
> >
> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
> > TEMPERATURE_ is not.
>
> That's correct, because you're encoding >1 piece of information into the
> single string (the fact that it's a temperature *and* the value of that
> temperature are the two pieces of information encoded into the single
> string).
>
> Now that there's multiple pieces of information encoded in the string
> the reader of the trait string needs to know how to decode those bits of
> information, which is exactly what we're trying to avoid doing (because
> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and
> the giant hairball that is the NUMA and CPU pinning "metadata requests"
> how that turns out).
>

May I take it that one of Jay's complaints is that the metadata API is
undiscoverable? That is, the extra_specs mess and the ComputeCapabilitiesFilter mess?

Another complaint is about the information in the string. Agreed that
TEMPERATURE_ is terrible.
I now prefer the approach I used in the nvdimm proposal: I don't want to use
traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want to put them
into different resource providers, and use min_size and max_size to limit the
allocation. The user will then request with a resource class like
RC_NVDIMM_GB=512.
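The contrast can be sketched as two ways of forming a placement request. This is a hedged illustration, not the actual nvdimm spec: the names CUSTOM_NVDIMM_DEVICE_512GB and CUSTOM_NVDIMM_GB are assumptions standing in for whatever the proposal would settle on.

```python
from urllib.parse import urlencode

def trait_style_query(size_gb):
    # Anti-pattern: one trait per size means minting a brand-new trait
    # name for every possible value.
    return urlencode({"required": "CUSTOM_NVDIMM_DEVICE_%dGB" % size_gb})

def resource_class_style_query(size_gb):
    # Preferred: ask for a quantity of a resource class; min/max
    # constraints live on the provider's inventory, not in the name.
    return urlencode({"resources": "CUSTOM_NVDIMM_GB:%d" % size_gb})

print(trait_style_query(512))
print(resource_class_style_query(512))
```

The second form lets placement do arithmetic on the requested amount against inventory, which no amount of string matching on trait names can provide.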


>
> > This thread isn't about setting these parameters; it's about getting
> > us to a point where we can discuss a question just like this one
> > without running up against: >
> > "That's a hard no, because you shouldn't encode key/value pairs in
> traits."
> >
> > "Oh, why's that?"
> >
> > "Because that's not what we intended when we created traits."
> >
> > "But it would work, and the alternatives are way harder."
> >
> > "-1"
> >
> > "But..."
> >
> > "-1"
>
> I believe I've articulated a number of times why traits should remain
> unary pieces of information, and not just said "because that's what we
> intended when we created traits".
>
> I'm tough on this because I've seen the garbage code and unmaintainable
> mess that not having structurally sound data modeling concepts and
> information interpretation rules leads to in Nova and I don't want to
> encourage any more of it.
>
> -jay
>
>


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Chris Dent wrote on Sat, Sep 29, 2018 at 1:19 AM:

> On Fri, 28 Sep 2018, Jay Pipes wrote:
>
> > On 09/28/2018 09:25 AM, Eric Fried wrote:
> >> It's time somebody said this.
>
> Yes, a useful topic, I think.
>

++, I'm interested in this topic too, since it has confused me for a long time...


>
> >> Every time we turn a corner or look under a rug, we find another use
> >> case for provider traits in placement. But every time we have to have
> >> the argument about whether that use case satisfies the original
> >> "intended purpose" of traits.
> >>
> >> That's only reason I've ever been able to glean: that it (whatever "it"
> >> is) wasn't what the architects had in mind when they came up with the
> >> idea of traits.
> >
> > Don't pussyfoot around things. It's me you're talking about, Eric. You
> could
> > just ask me instead of passive-aggressively posting to the list like
> this.
>
> It's not just you. Ed and I have also expressed some fairly strong
> statement about how traits are "supposed" to be used and I would
> guess that from Eric's perspective all three of us (amongst others)
> have some form of architectural influence. Since it takes a village
> and all that.
>
> > They aren't arbitrary. They are there for a reason: a trait is a boolean
> > capability. It describes something that either a provider is capable of
> > supporting or it isn't.
>
> This is somewhat (maybe even only slightly) different from what I
> think the definition of a trait is, and that nuance may be relevant.
>
> I describe a trait as a "quality that a resource provider has" (the
> car is blue). This contrasts with a resource class which is a
> "quantity that a resource provider has" (the car has 4 doors).
>
>
Yes, this is what I was thinking when I proposed traits. Basically, I was
trying to address two points in the proposal: #1 we need a qualitative view of
resources; #2 we don't want another metadata API, since a metadata API isn't
discoverable and is a wild place where people put anything into it. Nobody
knows what metadata is available without digging deep into the code.

For #1, it is just as Chris said.
For #2, you have to create a trait before using it, and we have an API to query
traits, making them discoverable through the API. Standard traits have naming
rules, and, as Jay suggested, we have the os-traits library to store all the
standard traits. But we also have to have custom traits, since there are
use cases for managing resources outside of OpenStack.
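A rough sketch of that naming contract: standard traits come from the os-traits library, and anything else must be namespaced with CUSTOM_. The regex below only approximates placement's actual validation, and the sample standard-trait set is a stand-in for the real os-traits catalog.

```python
import re

# Custom traits must start with CUSTOM_ and use uppercase letters,
# digits, and underscores (approximation of placement's rule).
CUSTOM_TRAIT_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")

# Illustrative subset; the real list lives in the os-traits library.
STANDARD_TRAITS = {"HW_CPU_X86_AVX2", "STORAGE_DISK_SSD"}

def is_valid_trait(name):
    # A trait is either a known standard trait or a well-formed custom one.
    return name in STANDARD_TRAITS or bool(CUSTOM_TRAIT_RE.match(name))

print(is_valid_trait("STORAGE_DISK_SSD"))
print(is_valid_trait("CUSTOM_RAID_10"))
print(is_valid_trait("temperature=85"))
```

Because every trait must be created (or ship in os-traits) before use and can be listed back via the API, the set in play is always discoverable, unlike free-form metadata keys.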



> Our implementation is pretty much exactly that ^. We allow
> clients to ask "give me things that have qualities x, y, z, not
> qualities a, b, c, and quantities of G of 5 and H of 7".
>
> Add in aggregates and we have exactly what you say:
>
> > * Does the provider have *capacity* for the requested resources?
> > * Does the provider have the required (or forbidden) *capabilities*?
> > * Does the provider belong to some group?
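That request shape can be sketched as an allocation-candidates query string. The trait and resource-class names below are placeholders, and the "!" forbidden-trait prefix assumes a placement microversion that supports forbidden traits (1.22+).

```python
from urllib.parse import urlencode

# Quantities are resource-class amounts; qualities are required traits,
# with "!" marking forbidden ones. Names are illustrative only.
params = {
    "resources": "CUSTOM_G:5,CUSTOM_H:7",
    "required": "CUSTOM_X,CUSTOM_Y,CUSTOM_Z,!CUSTOM_A,!CUSTOM_B,!CUSTOM_C",
}
query = "/allocation_candidates?" + urlencode(params)
print(query)
```

Note that the grammar is strictly presence/absence plus amounts; there is no comparison operator on trait names, which is exactly the strict contract described above.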
>
> The nuance of difference is that your description of *capabilities*
> seems more narrow than my description of *qualities* (aka
> characteristics). You've got something fairly specific in mind, as a
> way of constraining the profusion of noise that has happened with
> how various kinds of information about resources of all sorts is
> managed in OpenStack, as you describe in your message.
>
> I do not think it should be placement's job to control that noise.
> It should be placement's job to provide a very strict contract about
> what you can do with a trait:
>
> * create it, if necessary
> * assign it to one or more resource providers
> * ask for providers that either have it
> * ... or do not have it
>
> That's all. Placement _code_ should _never_ be aware of the value of
> a trait (except for the magical MISC_SHARES...). It should never
> become possible to regex on traits or do comparisons
> (required=

++


>
> > If we want to add further constraints to the placement allocation
> candidates
> > request that ask things like:
> >
> > * Does the provider have version 1.22.61821 of BIOS firmware from
> Marvell
> > installed on it?
>
> That's a quality of the provider in a moment.
>
> > * Does the provider support an FPGA that has had an OVS program flashed
> to it
> > in the last 20 days?
>
> If you squint, so is this.
>
> > * Does the provider belong to physical network "corpnet" and also
> support
> > creation of virtual NICs of type either "DIRECT" or "NORMAL"?
>
> And these.
>
> But at least some of them are dynamic rather than some kind of
> platonic ideal associated with the resource provider.
>
> I don't think placement should be concerned about temporal aspects
> of traits. If we can't write a web service that can handle setting
> lots of traits every second of every day, we should go home. If
> clients of placement want to set weird traits, more power to them.
>
> However, if clients of placement (such as nova) which are being the
> orchestrator of resource providers manipulated by multiple systems
> (neutron, cinder, ironic, cyborg, etc) wish to set some constraints
> on how and what traits 

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Mohammed Naser
On Fri, Sep 28, 2018 at 7:17 PM Chris Dent  wrote:
>
> On Fri, 28 Sep 2018, melanie witt wrote:
>
> > I'm concerned about a lot of repetition here and maintenance headache for
> > operators. That's where the thoughts about whether we should provide
> > something like a key-value construct to API callers where they can instead
> > say:
> >
> > * OWNER=CINDER
> > * RAID=10
> > * NUMA_CELL=0
> >
> > for each resource provider.
> >
> > If I'm off base with my example, please let me know. I'm not a placement
> > expert.
> >
> > Anyway, I hope that gives an idea of what I'm thinking about in this
> > discussion. I agree we need to pick a direction and go with it. I'm just
> > trying to look out for the experience operators are going to be using this
> > and maintaining it in their deployments.
>
> Despite saying "let's never do this" with regard to having formal
> support for key/values in placement, if we did choose to do it (if
> that's what we chose, I'd live with it), when would we do it? We
> have a very long backlog of features that are not yet done. I
> believe (I hope obviously) that we will be able to accelerate
> placement's velocity with it being extracted, but that won't be
> enough to suddenly be able to quickly do all the things we have
> on the plate.
>
> Are we going to make people wait for some unknown amount of time,
> in the meantime? While there is a grammar that could do some of
> these things?
>
> Unless additional resources come on the scene I don't think it is
> either feasible or reasonable for us to consider doing any model
> extending at this time (irrespective of the merit of the idea).
>
> In some kind of weird belief way I'd really prefer we keep the
> grammar placement exposes simple, because my experience with HTTP
> APIs strongly suggests that's very important, and that experience is
> effectively why I am here, but I have no interest in being a
> fundamentalist about it. We should argue about it strongly to make
> sure we get the right result, but it's not a huge deal either way.

Is there a spec up for this should anyone want to implement it?

> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Chris Dent

On Fri, 28 Sep 2018, melanie witt wrote:

I'm concerned about a lot of repetition here and maintenance headache for 
operators. That's where the thoughts about whether we should provide 
something like a key-value construct to API callers where they can instead 
say:


* OWNER=CINDER
* RAID=10
* NUMA_CELL=0

for each resource provider.
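Without a key/value construct, pairs like those would have to be flattened into unary trait names today. The encoding below is hypothetical, but it shows the maintenance concern: every distinct value needs its own trait.

```python
# Hypothetical flattening of (key, value) pairs into custom trait names.
def kv_to_trait(key, value):
    return "CUSTOM_%s_%s" % (key.upper(), str(value).upper())

pairs = {"OWNER": "CINDER", "RAID": 10, "NUMA_CELL": 0}
traits = sorted(kv_to_trait(k, v) for k, v in pairs.items())
print(traits)

# The combinatorics are the problem: e.g. 4 RAID levels across 8 NUMA
# cells already means 32 distinct traits standing in for just 2 keys.
print(4 * 8)
```

An operator then has to curate and audit that growing trait list by hand, which is the "manageable experience" question raised above.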

If I'm off base with my example, please let me know. I'm not a placement 
expert.


Anyway, I hope that gives an idea of what I'm thinking about in this 
discussion. I agree we need to pick a direction and go with it. I'm just 
trying to look out for the experience operators are going to be using this 
and maintaining it in their deployments.


Despite saying "let's never do this" with regard to having formal
support for key/values in placement, if we did choose to do it (if
that's what we chose, I'd live with it), when would we do it? We
have a very long backlog of features that are not yet done. I
believe (I hope obviously) that we will be able to accelerate
placement's velocity with it being extracted, but that won't be
enough to suddenly be able to quickly do all the things we have
on the plate.

Are we going to make people wait for some unknown amount of time,
in the meantime? While there is a grammar that could do some of
these things?

Unless additional resources come on the scene I don't think it is
either feasible or reasonable for us to consider doing any model
extending at this time (irrespective of the merit of the idea).

In some kind of weird belief way I'd really prefer we keep the
grammar placement exposes simple, because my experience with HTTP
APIs strongly suggests that's very important, and that experience is
effectively why I am here, but I have no interest in being a
fundamentalist about it. We should argue about it strongly to make
sure we get the right result, but it's not a huge deal either way.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Alright - I've worked up the majority of what we have in this thread and
proposed a documentation patch for oslo.policy [0].

I think we're at the point where we can finish the rest of this discussion
in gerrit if folks are ok with that.

[0] https://review.openstack.org/#/c/606214/

On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:

> On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
> > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki 
> wrote:
> >
> > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
> > >  wrote:
> > > >
> > > > Ideally I would like to see it in the form of least specific to most
> > > specific. But more importantly in a way that there is no additional
> > > delimiters between the service type and the resource. Finally, I do not
> > > like the change of plurality depending on action type.
> > > >
> > > > I propose we consider
> > > >
> > > > <service_type>:<resource>:<action>[:<subaction>]
> > > >
> > > > Example for keystone (note, action names below are strictly examples
> I
> > > am fine with whatever form those actions take):
> > > > identity:projects:create
> > > > identity:projects:delete
> > > > identity:projects:list
> > > > identity:projects:get
> > > >
> > > > It keeps things simple and consistent when you're looking through
> > > overrides / defaults.
> > > > --Morgan
> > > +1 -- I think the ordering will be cleaner if `resource` comes before
> > > `action|subaction`.
> > >
> >
>
> Great idea. This is looking better and better.
>
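The proposed convention can be sketched as a tiny name builder. This is illustrative only; in practice the names would be registered as oslo.policy rule defaults rather than built ad hoc.

```python
# Build policy names per the proposed scheme: least specific to most
# specific, resource before action, single ":" delimiters throughout.
def policy_name(service, resource, action, subaction=None):
    parts = [service, resource, action]
    if subaction:
        parts.append(subaction)
    return ":".join(parts)

print(policy_name("identity", "projects", "create"))
print(policy_name("identity", "projects", "tags", "update"))
```

Keeping resource before action means all operations on one resource sort together in an override file, which is the readability win argued for above.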


[openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Update on the previously announced deprecation of neutron-lbaas and neutron-lbaas-dashboard

2018-09-28 Thread Michael Johnson
During the Queens release cycle we announced the deprecation of
neutron-lbaas and neutron-lbaas-dashboard[1].

Today we are announcing the expected end date for the neutron-lbaas
and neutron-lbaas-dashboard deprecation cycles.  During September 2019
or the start of the “U” OpenStack release cycle, whichever comes
first, neutron-lbaas and neutron-lbaas-dashboard will be retired. This
means the code will be removed and will not be released as part of
the "U" OpenStack release per the infrastructure team’s “retiring a
project” process[2].

We continue to maintain a Frequently Asked Questions (FAQ) wiki page
to help answer additional questions you may have about this process:
https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation

For more information or if you have additional questions, please see
the following resources:

The FAQ: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation

The Octavia documentation: https://docs.openstack.org/octavia/latest/

Reach out to us via IRC on the Freenode IRC network, channel #openstack-lbaas

Weekly Meeting: 20:00 UTC on Wednesdays in #openstack-lbaas on the
Freenode IRC network.

Sending email to the OpenStack developer mailing list: openstack-dev
[at] lists [dot] openstack [dot] org. Please prefix the subject with
'[openstack-dev][Octavia]'

Thank you for your support and patience during this transition,

Michael Johnson
Octavia PTL





[1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html

[2] https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project



[openstack-dev] [oslo] PTG wrapup

2018-09-28 Thread Ben Nemec

A bit belated, but here goes:

Monday:
Had a good discussion in the Keystone room about oslo.limit with some 
Nova developers. There was quite a bit of discussion around how the 
callbacks should work for resource usage and cleanup, and the Nova 
developers took an action to do some prototyping. Also, there was 
general consensus in the room that user quotas were probably a thing 
that should go away and we didn't want to spend a lot of time trying to 
accommodate them. If you have a different viewpoint on that please let 
someone involved with this know ASAP.


In addition, for the first time the topic of how to migrate from 
project-specific quota code to oslo.limit got some serious discussion. 
The current proposal is to have projects support both methods for a 
cycle to allow migration of the data. A Nova spec is planned to detail 
how that will work.


https://etherpad.openstack.org/p/keystone-stein-unified-limits

In the afternoon there was also a productive discussion in the API sig 
room about the healthcheck middleware. Initially it was a lot of "we 
want this, but no one has time to work on it", but after some more 
digging into the existing oslo.middleware code it was determined that we 
might be able to reuse parts of that to reduce the amount of work needed 
to implement it. This also makes it an easier sell to projects since 
many already include the old healthcheck middleware and this would be an 
extension of it. Graham was going to hack on the implementation in his 
PTG downtime.


https://etherpad.openstack.org/p/api-sig-stein-ptg

Tuesday:
Our scheduled session day. The main points of the discussion were 
(hopefully) captured in 
https://etherpad.openstack.org/p/oslo-stein-ptg-planning


Highlights:
-The oslo.config driver work is continuing. One outcome of the 
discussion was that we decided to continue to defer the question of how 
to handle mutable config with drivers. If somebody asks for it then we 
can revisit.


-There was general agreement to proceed with the simple config 
validator: https://review.openstack.org/#/c/567950/ There is a 
significantly more complex version of that review out there as well, but 
it's so large that nobody has had time to review it. The plan is that 
the added features from that can be added to this simple version in 
easier-to-digest pieces once the base functionality is there.


-The config migration tool is awaiting reviews (I see Doug reviewed it 
today, thanks!), and then will proceed with phase 2 in which it will try 
to handle more complex migrations.


-oslo.upgradecheck is now a thing. If you don't know what that is, see 
Matt Riedemann's email updates on the upgrade checkers goal.


-There was some extensive discussion around how to add parallel 
processing to oslo.privsep. The main outcomes were that we needed to get 
rid of the eventlet dependency from the initial implementation, but we 
think the rest of the code should already deal with concurrent execution 
as expected. However, as we are still lacking deep expertise in 
oslo.privsep since Gus left (help wanted!), it is TBD whether we are 
right. :-)


-A pluggable policy spec and some initial patches are proposed and need 
reviews. One of these days I will have time to do that.


Wednesday:
Had a good discussion about migrating Oslo to Storyboard. As you may 
have noticed, that discussion has continued on the mailing list so check 
out the [storyboard] tagged threads for details on where that stands. If 
you want to kick the tires of the test import you can do so here: 
https://storyboard-dev.openstack.org/#!/story/list?project_group_id=74


Thursday:
Discussion in the TripleO room about integrating the config drivers 
work. It sounded like they had a plan to implement support for them when 
they are available, so \o/.


Friday:
Mostly continued work on oslo.upgradecheck in between some non-Oslo 
discussions.


I think that's it. If I missed anything or you have questions/comments 
feel free to reply.


Thanks.

-Ben



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Jay Pipes

On 09/28/2018 04:42 PM, Eric Fried wrote:

On 09/28/2018 09:41 AM, Balázs Gibizer wrote:

On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would require
changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."

Bubble wrap was originally intended as a textured wallpaper and a
greenhouse insulator. Can we accept the fact that traits have (many,
many) uses beyond marking capabilities, and quit with the arbitrary
restrictions?


How far are we willing to go? Does an arbitrary (key: value) pair
encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE:
85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
placement?


Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
TEMPERATURE_ is not.


That's correct, because you're encoding >1 piece of information into the 
single string (the fact that it's a temperature *and* the value of that 
temperature are the two pieces of information encoded into the single 
string).


Now that there's multiple pieces of information encoded in the string 
the reader of the trait string needs to know how to decode those bits of 
information, which is exactly what we're trying to avoid doing (because 
we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and 
the giant hairball that is the NUMA and CPU pinning "metadata requests" 
how that turns out).
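A hedged illustration (not placement code) of the decoding burden described above: once a value is baked into the trait name, every reader must parse it back out, and the numeric semantics are lost.

```python
# Each consumer of the trait has to reimplement this fragile parsing.
def decode_temperature(trait):
    prefix = "CUSTOM_TEMPERATURE_"
    if not trait.startswith(prefix):
        return None
    try:
        return int(trait[len(prefix):])
    except ValueError:
        # e.g. CUSTOM_TEMPERATURE_HIGH silently fails to parse
        return None

print(decode_temperature("CUSTOM_TEMPERATURE_85"))
print(decode_temperature("CUSTOM_TEMPERATURE_HIGH"))

# And placement itself can never answer "temperature < 90": matching on
# trait names is pure string presence/absence, with no ordering.
```

This is the concrete difference between a unary capability (the string either is or is not on the provider) and a key/value pair smuggled into one string.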



This thread isn't about setting these parameters; it's about getting
us to a point where we can discuss a question just like this one
without running up against:
"That's a hard no, because you shouldn't encode key/value pairs in traits."

"Oh, why's that?"

"Because that's not what we intended when we created traits."

"But it would work, and the alternatives are way harder."

"-1"

"But..."

"-1"


I believe I've articulated a number of times why traits should remain 
unary pieces of information, and not just said "because that's what we 
intended when we created traits".


I'm tough on this because I've seen the garbage code and unmaintainable 
mess that not having structurally sound data modeling concepts and 
information interpretation rules leads to in Nova and I don't want to 
encourage any more of it.


-jay




[openstack-dev] Airship linux distro support

2018-09-28 Thread James Gu
Hello,

I submitted a spec to enable multiple Linux distro capability in Airship and 
bring in OpenSUSE support in addition to Ubuntu. The spec is at 
https://review.openstack.org/#/c/601187 and has received positive feedback from 
the Airship core team on the direction. We wanted to make the effort known to a
broader audience through the mailing list, and we sincerely welcome more developers
to join us, review the spec/code, implement the feature, and/or expand support to
other Linux distros such as CentOS.

Thanks,


James





[openstack-dev] [goals][upgrade-checkers] Week R-28 Update

2018-09-28 Thread Matt Riedemann
There isn't really anything to report this week. There are no new 
changes up for review that I'm aware of. If your team has posted changes 
for your project, please update the related task in the story [1].


I'm also waiting for some feedback from glance-minded people about [2].

[1] https://storyboard.openstack.org/#!/story/2003657
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/135025.html


--

Thanks,

Matt



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread melanie witt

On Fri, 28 Sep 2018 15:42:23 -0500, Eric Fried wrote:

On 09/28/2018 09:41 AM, Balázs Gibizer wrote:



On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would require
changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."

Bubble wrap was originally intended as a textured wallpaper and a
greenhouse insulator. Can we accept the fact that traits have (many,
many) uses beyond marking capabilities, and quit with the arbitrary
restrictions?


How far are we willing to go? Is an arbitrary (key: value) pair
encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE:
85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
placement?


Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
TEMPERATURE_<value> is not. This thread isn't about setting
these parameters; it's about getting us to a point where we can discuss
a question just like this one without running up against:

"That's a hard no, because you shouldn't encode key/value pairs in traits."

"Oh, why's that?"

"Because that's not what we intended when we created traits."

"But it would work, and the alternatives are way harder."

"-1"

"But..."

"-1"

I think it's not so much about the intention when traits were created 
and more about what UX callers of the API are left with, if we were to 
recommend representing everything with traits and not providing another 
API for key-value use cases. We need to think about what the maintenance 
of their deployments will look like if traits are the only tool we provide.


I get that we don't want to put artificial restrictions on how API 
callers can and can't use the traits API, but will they be left with a 
manageable experience if that's all that's available?


I don't have time right now to come up with a really great example, but 
I'm thinking along the lines of, can this get out of hand (a la "flavor 
explosion") for an operator using traits to model what their compute 
hosts can do?


Please forgive the oversimplified example I'm going to try to use to 
illustrate my concern:


We all agree we can have traits for resource providers like:

* HAS_SSD
* HAS_GPU
* HAS_WINDOWS

But things get less straightforward when we think of traits like:

* HAS_OWNER_CINDER
* HAS_OWNER_NEUTRON
* HAS_OWNER_CYBORG
* HAS_RAID_0
* HAS_RAID_1
* HAS_RAID_5
* HAS_RAID_6
* HAS_RAID_10
* HAS_NUMA_CELL_0
* HAS_NUMA_CELL_1
* HAS_NUMA_CELL_2
* HAS_NUMA_CELL_3

I'm concerned about a lot of repetition here and the maintenance headache 
for operators. That's what prompts the thought of whether we should provide 
something like a key-value construct to API callers, where they can 
instead say:


* OWNER=CINDER
* RAID=10
* NUMA_CELL=0

for each resource provider.
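To make the contrast concrete, here is a small Python sketch (hypothetical
names, not placement's actual API or data model) of how the flat-trait
approach multiplies distinct strings while a key-value construct keeps one
entry per fact:

```python
# Hypothetical sketch: modeling the example above with flat traits
# versus key-value pairs. Names are illustrative, not placement API.

def traits_for(owner, raid_level, numa_cells):
    """Flat-trait modeling: one distinct trait string per value."""
    traits = {"CUSTOM_OWNER_%s" % owner.upper(),
              "CUSTOM_RAID_%s" % raid_level}
    traits |= {"CUSTOM_NUMA_CELL_%d" % cell for cell in range(numa_cells)}
    return traits

def kv_for(owner, raid_level, numa_cells):
    """Key-value modeling: three keys, whatever the values happen to be."""
    return {"owner": owner, "raid": raid_level,
            "numa_cells": list(range(numa_cells))}

flat = traits_for("cinder", 10, 4)
kv = kv_for("cinder", 10, 4)
print(len(flat))  # 6 distinct trait strings for 3 logical facts
print(kv)
```

Each new RAID level or NUMA cell adds another trait string to maintain,
whereas the key-value form only changes a value.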

If I'm off base with my example, please let me know. I'm not a placement 
expert.


Anyway, I hope that gives an idea of what I'm thinking about in this 
discussion. I agree we need to pick a direction and go with it. I'm just 
trying to look out for the experience of the operators who are going to be 
using this and maintaining it in their deployments.


Cheers,
-melanie



Re: [openstack-dev] [all] Zuul job backlog

2018-09-28 Thread Matt Riedemann

On 9/28/2018 3:12 PM, Clark Boylan wrote:

I was asked to write a followup to this as the long Zuul queues have persisted 
through this week, largely because the situation from last week hasn't changed 
much. We were without the upgraded cloud region while we worked around a network 
configuration bug, then once that was addressed we ran into neutron port 
assignment and deletion issues. We think these are both fixed and we are 
running in this region again as of today.

Other good news is our classification rate is up significantly. We can use that 
information to go through the top identified gate bugs:

Network Connectivity issues to test nodes [2]. This is the current top of the 
list, but I think its impact is relatively small. What is happening here is 
jobs fail to connect to their test nodes early in the pre-run playbook and then 
fail. Zuul will rerun these jobs for us because they failed in the pre-run 
step. Prior to zuulv3 we had nodepool run a ready script before marking test 
nodes as ready; this script would've caught and filtered out these broken 
network nodes early. We now notice them late, during the pre-run of a job.

Pip fails to find distribution for package [3]. Earlier in the week we had the 
in-region mirror fail in two different regions for unrelated errors. These mirrors 
were fixed and the only other hits for this bug come from Ara which tried to 
install the 'black' package on python3.5 but this package requires python>=3.6.

yum, no more mirrors to try [4]. At first glance this appears to be an 
infrastructure issue because the mirror isn't serving content to yum. On 
further investigation it turned out to be a DNS resolution issue caused by the 
installation of designate in the tripleo jobs. Tripleo is aware of this issue 
and working to correct it.

Stackviz failing on py3 [5]. This is a real bug in stackviz caused by subunit 
data being binary rather than utf8-encoded strings. I've written a fix for this problem 
at https://review.openstack.org/606184, but in doing so found that this was a 
known issue back in March and there was already a proposed 
fix, https://review.openstack.org/#/c/555388/3. It would be helpful if the QA 
team could care for this project and get a fix in. Otherwise, we should 
consider disabling stackviz on our tempest jobs (though the output from 
stackviz is often useful).
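For context on the class of bug involved, this is a hedged sketch (not the
actual stackviz patch) of the usual py2-to-py3 fix when a library such as
subunit starts handing back bytes:

```python
def to_text(value, encoding="utf-8"):
    """Normalize a subunit-style payload: accept bytes or str, return str.

    Under python2 such payloads arrived as (byte) strings and code used
    them interchangeably; under python3 bytes and str are distinct types,
    so string operations on the payload break unless it is decoded first.
    """
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(to_text(b"tempest.api.compute.test_servers"))
print(to_text("plain text passes through"))
```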

There are other bugs being tracked by e-r. Some are bugs in the openstack 
software and I'm sure some are also bugs in the infrastructure. I have not yet 
had the time to work through the others, though. It would be helpful if project 
teams could prioritize the debugging and fixing of these issues.

[2] http://status.openstack.org/elastic-recheck/gate.html#1793370
[3] http://status.openstack.org/elastic-recheck/gate.html#1449136
[4] http://status.openstack.org/elastic-recheck/gate.html#1708704
[5] http://status.openstack.org/elastic-recheck/gate.html#1758054


Thanks for the update Clark.

Another thing this week is that the logstash indexing is behind by at least 
half a day. That's because workers were hitting OOM errors on giant 
screen log files that aren't formatted properly for our filter that only 
indexes INFO+ level logs; the workers were instead trying to index the 
entire file, some of which are 33MB *compressed*. So indexing of those 
identified problematic screen logs has been disabled:


https://review.openstack.org/#/c/606197/

I've reported bugs against each related project.
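As an illustration of the indexing guard described above (invented logic, not
the actual logstash worker configuration), filtering on a recognizable level
field only works when the log lines actually carry one:

```python
import re

# Hypothetical sketch of level-based filtering: index a line only if it
# carries a recognizable log level at INFO or above. Lines from
# misformatted screen logs match nothing, so the safe choice is to skip
# them rather than fall back to indexing the entire file.
LEVELS = {"INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}
LINE_RE = re.compile(r"\b(DEBUG|INFO|WARNING|ERROR|CRITICAL)\b")

def should_index(line, min_level="INFO"):
    match = LINE_RE.search(line)
    if not match:
        return False  # unparseable line: do not index the whole file
    return LEVELS.get(match.group(1), 0) >= LEVELS[min_level]

print(should_index("2018-09-28 INFO nova.compute starting"))   # True
print(should_index("2018-09-28 DEBUG nova.compute chatter"))   # False
```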

--

Thanks,

Matt



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried


On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
> 
> 
> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:
>> It's time somebody said this.
>>
>> Every time we turn a corner or look under a rug, we find another use
>> case for provider traits in placement. But every time we have to have
>> the argument about whether that use case satisfies the original
>> "intended purpose" of traits.
>>
>> That's the only reason I've ever been able to glean: that it (whatever "it"
>> is) wasn't what the architects had in mind when they came up with the
>> idea of traits. We're not even talking about anything that would require
>> changes to the placement API. Just, "Oh, that's not a *capability* -
>> shut it down."
>>
>> Bubble wrap was originally intended as a textured wallpaper and a
>> greenhouse insulator. Can we accept the fact that traits have (many,
>> many) uses beyond marking capabilities, and quit with the arbitrary
>> restrictions?
> 
> How far are we willing to go? Is an arbitrary (key: value) pair
> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE:
> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
> placement?

Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
TEMPERATURE_<value> is not. This thread isn't about setting
these parameters; it's about getting us to a point where we can discuss
a question just like this one without running up against:

"That's a hard no, because you shouldn't encode key/value pairs in traits."

"Oh, why's that?"

"Because that's not what we intended when we created traits."

"But it would work, and the alternatives are way harder."

"-1"

"But..."

"-1"

> 
> Cheers,
> gibi
> 


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried


On 09/28/2018 12:19 PM, Chris Dent wrote:
> On Fri, 28 Sep 2018, Jay Pipes wrote:
> 
>> On 09/28/2018 09:25 AM, Eric Fried wrote:
>>> It's time somebody said this.
> 
> Yes, a useful topic, I think.
> 
>>> Every time we turn a corner or look under a rug, we find another use
>>> case for provider traits in placement. But every time we have to have
>>> the argument about whether that use case satisfies the original
>>> "intended purpose" of traits.
>>>
>>> That's the only reason I've ever been able to glean: that it (whatever "it"
>>> is) wasn't what the architects had in mind when they came up with the
>>> idea of traits.
>>
>> Don't pussyfoot around things. It's me you're talking about, Eric. You
>> could just ask me instead of passive-aggressively posting to the list
>> like this.
> 
> It's not just you. Ed and I have also expressed some fairly strong
> statement about how traits are "supposed" to be used and I would
> guess that from Eric's perspective all three of us (amongst others)
> have some form of architectural influence. Since it takes a village
> and all that.

Correct. I certainly wasn't talking about Jay specifically. I also
wanted people other than placement cores/architects to participate in
the discussion (thanks Julia and Zane).

>> They aren't arbitrary. They are there for a reason: a trait is a
>> boolean capability. It describes something that either a provider is
>> capable of supporting or it isn't.
> 
> This is somewhat (maybe even only slightly) different from what I
> think the definition of a trait is, and that nuance may be relevant.
> 
> I describe a trait as a "quality that a resource provider has" (the
> car is blue). This contrasts with a resource class which is a
> "quantity that a resource provider has" (the car has 4 doors).

Yes, this. I don't want us to go off in the weeds about the reason or
relevance of the choice of name, but "trait" is a superset of
"capability" and easily encompasses "BLUE" or "PHYSNET_PUBLIC" or
"OWNED_BY_NEUTRON" or "XYZ_BITSTREAM" or "PCI_ADDRESS_01_AB_23_CD" or
"RAID5".

> Our implementation is pretty much exactly that ^. We allow
> clients to ask "give me things that have qualities x, y, z, not
> qualities a, b, c, and quantities of G of 5 and H of 7".
> 
> Add in aggregates and we have exactly what you say:
> 
>> * Does the provider have *capacity* for the requested resources?
>> * Does the provider have the required (or forbidden) *capabilities*?
>> * Does the provider belong to some group?
> 
> The nuance of difference is that your description of *capabilities*
> seems more narrow than my description of *qualities* (aka
> characteristics). You've got something fairly specific in mind, as a
> way of constraining the profusion of noise that has happened with
> how various kinds of information about resources of all sorts is
> managed in OpenStack, as you describe in your message.
> 
> I do not think it should be placement's job to control that noise.
> It should be placement's job to provide a very strict contract about
> what you can do with a trait:
> 
> * create it, if necessary
> * assign it to one or more resource providers
> * ask for providers that either have it
> * ... or do not have it
> 
> That's all. Placement _code_ should _never_ be aware of the value of
> a trait (except for the magical MISC_SHARES...). It should never
> become possible to regex on traits or do comparisons
> (required= 
>> If we want to add further constraints to the placement allocation
>> candidates request that ask things like:
>>
>> * Does the provider have version 1.22.61821 of BIOS firmware from
>> Marvell installed on it?
> 
> That's a quality of the provider in a moment.
> 
>> * Does the provider support an FPGA that has had an OVS program
>> flashed to it in the last 20 days?
> 
> If you squint, so is this.
> 
>> * Does the provider belong to physical network "corpnet" and also
>> support creation of virtual NICs of type either "DIRECT" or "NORMAL"?
> 
> And these.
> 
> But at least some of them are dynamic rather than some kind of
> platonic ideal associated with the resource provider.
> 
> I don't think placement should be concerned about temporal aspects
> of traits. If we can't write a web service that can handle setting
> lots of traits every second of every day, we should go home. If
> clients of placement want to set weird traits, more power to them.
> 
> However, if clients of placement (such as nova) which are being the
> orchestrator of resource providers manipulated by multiple systems
> (neutron, cinder, ironic, cyborg, etc) wish to set some constraints
> on how and what traits can do and mean, then that is up to them.
> 
> nova-scheduler is the thing that is doing `GET
> /allocation_candidates` for those multiple systems. It presumably
> should have some say in what traits it is willing to express and
> use.
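As a client-side sketch of that strict contract (trait names here are
illustrative, and the `!` prefix for forbidden traits is the syntax placement
added in a later microversion):

```python
from urllib.parse import urlencode

# Sketch of the contract described above: a client asks placement for
# providers that have the required qualities, lack the forbidden ones,
# and have capacity for the requested quantities.

def allocation_candidates_url(base, resources, required=(), forbidden=()):
    traits = list(required) + ["!%s" % t for t in forbidden]
    params = {"resources": ",".join("%s:%d" % rc for rc in resources.items())}
    if traits:
        params["required"] = ",".join(traits)
    return "%s/allocation_candidates?%s" % (base, urlencode(params))

url = allocation_candidates_url(
    "http://placement", {"VCPU": 2, "MEMORY_MB": 2048},
    required=["HW_CPU_X86_AVX2"], forbidden=["CUSTOM_RAID5"])
print(url)
```

Note the client never asks placement to interpret or compare trait values;
it only ever matches them whole.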

Right, this is where it's getting sticky. I feel like the push-back
comes from people wearing their placement hats saying "you can't 

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Sean McGinnis
On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
> On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
> 
> > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
> >  wrote:
> > >
> > > Ideally I would like to see it in the form of least specific to most
> > specific. But more importantly in a way that there is no additional
> > delimiters between the service type and the resource. Finally, I do not
> > like the change of plurality depending on action type.
> > >
> > > I propose we consider
> > >
> > > <service-type>:<resource>:<action>[:<subaction>]
> > >
> > > Example for keystone (note, action names below are strictly examples I
> > am fine with whatever form those actions take):
> > > identity:projects:create
> > > identity:projects:delete
> > > identity:projects:list
> > > identity:projects:get
> > >
> > > It keeps things simple and consistent when you're looking through
> > overrides / defaults.
> > > --Morgan
> > +1 -- I think the ordering, if `resource` comes before
> > `action|subaction`, will be cleaner.
> >
> 

Great idea. This is looking better and better.



Re: [openstack-dev] [all] Zuul job backlog

2018-09-28 Thread Clark Boylan
On Wed, Sep 19, 2018, at 12:11 PM, Clark Boylan wrote:
> Hello everyone,
> 
> You may have noticed there is a large Zuul job backlog and changes are 
> not getting CI reports as quickly as you might expect. There are several 
> factors interacting with each other to make this the case. The short 
> version is that one of our clouds is performing upgrades and has been 
> removed from service, and we have a large number of gate failures which 
> cause things to reset and start over. We have fewer resources than 
> normal and are using them inefficiently. Zuul is operating as expected.
> 
> Continue reading if you'd like to understand the technical details and 
> find out how you can help make this better.
> 
> Zuul gates related projects in shared queues. Changes enter these queues 
> and are ordered in a speculative future state that Zuul assumes will 
> pass because multiple humans have reviewed the changes and said they are 
> good (also they had to pass check testing first). Problems arise when 
> tests fail, forcing Zuul to evict changes from the speculative future 
> state, build a new state, then start jobs over again for this new 
> future.
> 
> Typically this doesn't happen often and we merge many changes at a time, 
> quickly pushing code into our repos. Unfortunately, the results are 
> painful when we fail often as we end up rebuilding future states and 
> restarting jobs often. Currently we have the gate and release jobs set 
> to the highest priority as well so they run jobs before other queues. 
> This means the gate can starve other work if it is flaky. We've 
> configured things this way because the gate is not supposed to be flaky 
> since we've reviewed things and already passed check testing. One of the 
> tools we have in place to make this less painful is each gate queue 
> operates on a window that grows and shrinks similar to how TCP 
> slow-start works. As changes merge we increase the size of the window and when 
> they fail to merge we decrease it. This reduces the size of the future 
> state that must be rebuilt and retested on failure when things are 
> persistently flaky.
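A toy model of that window behavior (constants invented for illustration, not
Zuul's actual implementation):

```python
# Grow the gate window additively on a merge, shrink it multiplicatively
# on a failure, like TCP congestion control. Floor and ceiling are made up.

def step_window(window, merged, floor=3, ceiling=20):
    if merged:
        return min(window + 1, ceiling)
    return max(window // 2, floor)

window = 10
history = []
for outcome in [True, True, False, False, True]:
    window = step_window(window, outcome)
    history.append(window)
print(history)  # [11, 12, 6, 3, 4]
```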
> 
> The best way to make this better is to fix the bugs in our software, 
> whether that is in the CI system itself or the software being tested. 
> The first step in doing that is to identify and track the bugs that we 
> are dealing with. We have a tool called elastic-recheck that does this 
> using indexed logs from the jobs. The idea there is to go through the 
> list of unclassified failures [0] and fingerprint them so that we can 
> track them [1]. With that data available we can then prioritize fixing 
> the bugs that have the biggest impact.
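The fingerprinting step can be pictured like this (regexes stand in for the
real elasticsearch queries, and the signatures and bug numbers below are
invented for illustration):

```python
import re

# Sketch of what a fingerprint does in the elastic-recheck workflow:
# a recognizable failure signature mapped to a tracked bug.
FINGERPRINTS = {
    "1793370": re.compile(r"Timeout waiting for connection to \S+"),
    "1758054": re.compile(r"UnicodeDecodeError.*subunit"),
}

def classify(log_line):
    """Return the tracked bug id for a failure line, or None if unclassified."""
    for bug, pattern in FINGERPRINTS.items():
        if pattern.search(log_line):
            return bug
    return None

print(classify("Timeout waiting for connection to node-42"))  # 1793370
print(classify("some brand new failure"))                     # None
```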
> 
> Unfortunately, right now our classification rate is very poor (only 
> 15%), which makes it difficult to know what exactly is causing these 
> failures. Mriedem and I have quickly scanned the unclassified list, and 
> it appears there is a db migration testing issue causing these tests to 
> timeout across several projects. Mriedem is working to get this 
> classified and tracked which should help, but we will also need to fix 
> the bug. On top of that it appears that Glance has flaky functional 
> tests (both python2 and python3) which are causing resets and should be 
> looked into.
> 
> If you'd like to help, let mriedem or myself know and we'll gladly work 
> with you to get elasticsearch queries added to elastic-recheck. We are 
> likely less help when it comes to fixing functional tests in Glance, but 
> I'm happy to point people in the right direction for that as much as I 
> can. If you can take a few minutes to do this before/after you issue a 
> recheck it does help quite a bit.
> 
> One general thing I've found would be helpful is if projects can clean 
> up the deprecation warnings in their log outputs. The persistent 
> "WARNING you used the old name for a thing" messages make the logs large 
> and much harder to read to find the actual failures.
> 
> As a final note this is largely targeted at the OpenStack Integrated 
> gate (Nova, Glance, Cinder, Keystone, Swift, Neutron) since that appears 
> to be particularly flaky at the moment. The Zuul behavior applies to 
> other gate pipelines (OSA, Tripleo, Airship, etc) as does elastic-
> recheck and related tooling. If you find your particular pipeline is 
> flaky I'm more than happy to help in that context as well.
> 
> [0] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
> [1] http://status.openstack.org/elastic-recheck/gate.html

I was asked to write a followup to this as the long Zuul queues have persisted 
through this week, largely because the situation from last week hasn't changed 
much. We were without the upgraded cloud region while we worked around a network 
configuration bug, then once that was addressed we ran into neutron port 
assignment and deletion issues. We think these are both fixed and we are 
running in this region again as of today.

Other good news is our 

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Harry Rybacki
On Fri, Sep 28, 2018 at 2:54 PM Lance Bragstad  wrote:
>
>
> On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
>>
>> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>>  wrote:
>> >
>> > Ideally I would like to see it in the form of least specific to most 
>> > specific. But more importantly in a way that there is no additional 
>> > delimiters between the service type and the resource. Finally, I do not 
>> > like the change of plurality depending on action type.
>> >
>> > I propose we consider
>> >
>> > <service-type>:<resource>:<action>[:<subaction>]
>> >
>> > Example for keystone (note, action names below are strictly examples I am 
>> > fine with whatever form those actions take):
>> > identity:projects:create
>> > identity:projects:delete
>> > identity:projects:list
>> > identity:projects:get
>> >
>> > It keeps things simple and consistent when you're looking through 
>> > overrides / defaults.
>> > --Morgan
>> +1 -- I think the ordering, if `resource` comes before
>> `action|subaction`, will be cleaner.
>
>
> ++
>
> These are excellent points. I especially like being able to omit the 
> convention about plurality. Furthermore, I'd like to add that I think we 
> should make the resource singular (e.g., project instead of projects). For 
> example:
>
> compute:server:list
> compute:server:update
> compute:server:create
> compute:server:delete
> compute:server:action:reboot
> compute:server:action:confirm_resize (or confirm-resize)
>
> Otherwise, someone might mistake compute:servers:get, as "list". This is 
> ultra-nit-picky, but something I thought of when seeing the usage of 
> "get_all" in policy names in favor of "list."
>
> In summary, the new convention based on the most recent feedback should be:
>
> <service-type>:<resource>:<action>[:<subaction>]
>
> Rules:
>
> service-type is always defined in the service types authority
> resources are always singular
>
++ plurality can be determined by related action. +++ for removing
possible ambiguity.

> Thanks to all for sticking through this tedious discussion. I appreciate it.
>
Thanks for pushing the conversation, Lance!
>>
>>
>> /R
>>
>> Harry
>> >
>> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  wrote:
>> >>
>> >> Bumping this thread again and proposing two conventions based on the 
>> >> discussion here. I propose we decide on one of the two following 
>> >> conventions:
>> >>
>> >> ::
>> >>
>> >> or
>> >>
>> >> :_
>> >>
>> >> Where <service-type> is the corresponding service type of the project 
>> >> [0], and <action> is either create, get, list, update, or delete. I think 
>> >> decoupling the method from the policy name should aid in consistency, 
>> >> regardless of the underlying implementation. The HTTP method specifics 
>> >> can still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>> >>
>> >> I think the plurality of the resource should default to what makes sense 
>> >> for the operation being carried out (e.g., list:foobars, create:foobar).
>> >>
>> >> I don't mind the first one because it's clear about what the delimiter is 
>> >> and it doesn't look weird when projects have something like:
>> >>
>> >> :::
>> >>
>> >> If folks are ok with this, I can start working on some documentation that 
>> >> explains the motivation for this. Afterward, we can figure out how we 
>> >> want to track this work.
>> >>
>> >> What color do you want the shed to be?
>> >>
>> >> [0] https://service-types.openstack.org/service-types.json
>> >> [1] 
>> >> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>> >>
>> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  
>> >> wrote:
>> >>>
>> >>>
>> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann  
>> >>> wrote:
>> 
>>    On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt 
>>   wrote 
>>   > tl;dr: +1 consistent names
>>   > I would make the names mirror the API... because the Operator 
>>  setting them knows the API, not the code. Ignore the crazy names in Nova, 
>>  I certainly hate them
>> 
>>  Big +1 on consistent naming  which will help operator as well as 
>>  developer to maintain those.
>> 
>>   >
>>   > Lance Bragstad  wrote:
>>   > > I'm curious if anyone has context on the "os-" part of the format?
>>   >
>>   > My memory of the Nova policy mess...
>>   > * Nova's policy rules traditionally followed the patterns of the code
>>   >   ** Yes, horrible, but it happened.
>>   > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>   > * API used to expand with extensions, so the policy name is often based
>>   >   on extensions
>>   >   ** note most of the extension code has now gone, including lots of
>>   >   related policies
>>   > * Policy in code was focused on getting us to a place where we could
>>   >   rename policy
>>   >   ** Whoop whoop by the way, it feels like we are really close to
>>   >   something sensible now!
>>   > Lance Bragstad  wrote:
>>   > Thoughts on using create, list, update, and delete as opposed to 
>>  post, get, put, patch, 

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:

> On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
>  wrote:
> >
> > Ideally I would like to see it in the form of least specific to most
> specific. But more importantly in a way that there is no additional
> delimiters between the service type and the resource. Finally, I do not
> like the change of plurality depending on action type.
> >
> > I propose we consider
> >
> > <service-type>:<resource>:<action>[:<subaction>]
> >
> > Example for keystone (note, action names below are strictly examples I
> am fine with whatever form those actions take):
> > identity:projects:create
> > identity:projects:delete
> > identity:projects:list
> > identity:projects:get
> >
> > It keeps things simple and consistent when you're looking through
> overrides / defaults.
> > --Morgan
> +1 -- I think the ordering, if `resource` comes before
> `action|subaction`, will be cleaner.
>

++

These are excellent points. I especially like being able to omit the
convention about plurality. Furthermore, I'd like to add that I think we
should make the resource singular (e.g., project instead of projects). For
example:

compute:server:list
compute:server:update
compute:server:create
compute:server:delete
compute:server:action:reboot
compute:server:action:confirm_resize (or confirm-resize)

Otherwise, someone might mistake compute:servers:get, as "list". This is
ultra-nit-picky, but something I thought of when seeing the usage of
"get_all" in policy names in favor of "list."

In summary, the new convention based on the most recent feedback should be:

*<service-type>:<resource>:<action>[:<subaction>]*

Rules:

   - service-type is always defined in the service types authority
   - resources are always singular

Thanks to all for sticking through this tedious discussion. I appreciate it.
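For illustration, a small self-contained sketch validating names against this
convention (the regex and the sample names are mine, not an agreed
implementation; in practice the service types would come from the
service-types authority and the defaults would live in oslo.policy
DocumentedRuleDefault objects):

```python
import re

# <service-type>:<resource>:<action>[:<subaction>], singular resources.
CONVENTION = re.compile(
    r"^(?P<service>[a-z-]+):(?P<resource>[a-z_]+)"
    r":(?P<action>[a-z_-]+)(:(?P<subaction>[a-z_-]+))?$")

names = [
    "compute:server:list",
    "compute:server:create",
    "compute:server:action:reboot",
    "identity:project:delete",
]
assert all(CONVENTION.match(n) for n in names)
print([CONVENTION.match(n).group("resource") for n in names])
```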


>
> /R
>
> Harry
> >
> > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad 
> wrote:
> >>
> >> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
> >>
> >> ::
> >>
> >> or
> >>
> >> :_
> >>
> >> Where <service-type> is the corresponding service type of the project
> [0], and <action> is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >>
> >> I think the plurality of the resource should default to what makes
> sense for the operation being carried out (e.g., list:foobars,
> create:foobar).
> >>
> >> I don't mind the first one because it's clear about what the delimiter
> is and it doesn't look weird when projects have something like:
> >>
> >> :::
> >>
> >> If folks are ok with this, I can start working on some documentation
> that explains the motivation for this. Afterward, we can figure out how we
> want to track this work.
> >>
> >> What color do you want the shed to be?
> >>
> >> [0] https://service-types.openstack.org/service-types.json
> >> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
> >>
> >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
> >>>
> >>>
> >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann <
> gm...@ghanshyammann.com> wrote:
> 
>    On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
> j...@johngarbutt.com> wrote 
>   > tl;dr: +1 consistent names
>   > I would make the names mirror the API... because the Operator
> setting them knows the API, not the code. Ignore the crazy names in Nova, I
> certainly hate them
> 
>  Big +1 on consistent naming  which will help operator as well as
> developer to maintain those.
> 
>   >
>   > Lance Bragstad  wrote:
>   > > I'm curious if anyone has context on the "os-" part of the
> format?
>   >
>   > My memory of the Nova policy mess...
>   > * Nova's policy rules traditionally followed the patterns of the code
>   >   ** Yes, horrible, but it happened.
>   > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>   > * API used to expand with extensions, so the policy name is often based
>   >   on extensions
>   >   ** note most of the extension code has now gone, including lots of
>   >   related policies
>   > * Policy in code was focused on getting us to a place where we could
>   >   rename policy
>   >   ** Whoop whoop by the way, it feels like we are really close to
>   >   something sensible now!
>   > Lance Bragstad  wrote:
>   > Thoughts on using create, list, update, and delete as opposed to
> post, get, put, patch, and delete in the naming convention?
>   > I could go either way as I think about "list servers" in the
> API. But my preference is for the URL stub and POST, GET, etc.
>   >  On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad <
> lbrags...@gmail.com> wrote: If we consider dropping "os", should we
> entertain dropping "api", too? Do we have a good reason to keep "api"? I
> wouldn't be opposed to simple service types (e.g 

Re: [openstack-dev] [goal][python3] week 7 update

2018-09-28 Thread Doug Hellmann
Jeremy Stanley  writes:

> On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote:
>> Doug Hellmann  wrote on 09/26/2018 06:29:11 PM:
>> 
>> > * We do not want to set the override once in testenv, because that
>> >   breaks the more specific versions used in default environments like
>> >   py35 and py36 (at least under older versions of tox).
>> 
>> 
>> I assume that something like
>> https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695
>>  should be a perfectly acceptable alternative in at least some cases.
>> Agreed?
>
> I believe the confusion is that ignore_basepython_conflict didn't
> appear in a release of tox until after we started patching projects
> for this effort (in fact it was added to tox in part because we
> discovered the issue in originally attempting to use basepython
> globally).

Right. The scripted patches work with older versions of tox as
well. They also have the benefit of only changing the environments into
which the new setting is injected, which means if you have a
py27-do-something-random environment it isn't going to suddenly start
using python 3 instead of python 2.7.

The thing we care about for the goal is ensuring that the required jobs
run under python 3.  Teams are, as always, completely free to choose
alternative implementations if they are willing to update the patches
(or write alternative ones).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 7 update

2018-09-28 Thread Ben Nemec



On 9/28/18 1:38 PM, Jeremy Stanley wrote:

On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote:

Doug Hellmann  wrote on 09/26/2018 06:29:11 PM:


* We do not want to set the override once in testenv, because that
   breaks the more specific versions used in default environments like
   py35 and py36 (at least under older versions of tox).



I assume that something like
https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695
  should be a perfectly acceptable alternative in at least some cases.
Agreed?


I believe the confusion is that ignore_basepython_conflict didn't
appear in a release of tox until after we started patching projects
for this effort (in fact it was added to tox in part because we
discovered the issue in originally attempting to use basepython
globally).


Yeah, if you're okay with requiring tox 3.1+ then you can use that 
instead. We've been avoiding it for now in other projects because some 
of the distros aren't shipping tox 3.1 yet and some people prefer not to 
mix distro Python packages and pip ones. At some point I expect we'll 
migrate everything to the new behavior though.
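
For reference, the two approaches being contrasted look roughly like this
in tox.ini (a sketch with illustrative env names; `ignore_basepython_conflict`
needs tox >= 3.1):

```ini
[tox]
envlist = py27,py35,py36,pep8
# tox >= 3.1 only: let py27/py35/py36 keep the interpreter implied by
# their names even though testenv sets a global basepython below.
ignore_basepython_conflict = true

[testenv]
# Without ignore_basepython_conflict, setting this globally would force
# python3 into py27 (and any py27-* environment) -- which is why the
# scripted patches instead inject basepython only into specific
# environments on older tox.
basepython = python3
```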




Re: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs

2018-09-28 Thread Doug Hellmann
Doug Hellmann  writes:

> I think we are ready to go ahead and switch all of the python packaging
> jobs to the new set defined in the publish-to-pypi-python3 template
> [1]. We still have some cleanup patches for projects that have not
> completed their zuul migration, but there are only a few and rebasing
> those will be easy enough.
>
> The template adds a new check job that runs when any files related to
> packaging are changed (readme, setup, etc.). Otherwise it switches from
> the python2-based PyPI job to use python3.
>
> I have the patch to switch all official projects ready in [2].
>
> Doug
>
> [1] 
> http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218
> [2] https://review.openstack.org/#/c/598323/

This change is now in place. The Ironic team discovered one issue, and
the fix is proposed as https://review.openstack.org/606152

This change has also reopened the question of how to publish some of the
projects for which we do not own names on PyPI.

I registered manila, qinling, and zaqar-ui by uploading Rocky series
releases of those projects and then added openstackci as an owner so we
can upload new packages this cycle.

I asked the owners of the name "heat" to allow us to use it, and they
rejected the request. So, I proposed a change to heat to update the
sdist name to "openstack-heat".

* https://review.openstack.org/606160

We don't own "magnum" but there is already an "openstack-magnum" set up
with old releases, so I have proposed a change to the magnum repo to
change the dist name there, so we can resume using it.

* https://review.openstack.org/606162

I have filed requests with the maintainers of PyPI to claim the names
"keystone" and "congress". That may take some time. Please let me know
if you're willing to simply use "openstack-keystone" and
"openstack-congress" instead. I will take care of configuring PyPI and
proposing the patch to update your setup.cfg (that way you can approve
the change).

* https://github.com/pypa/warehouse/issues/4770
* https://github.com/pypa/warehouse/issues/4771

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 7 update

2018-09-28 Thread Jeremy Stanley
On 2018-09-28 13:58:52 -0400 (-0400), William M Edmonds wrote:
> Doug Hellmann  wrote on 09/26/2018 06:29:11 PM:
> 
> > * We do not want to set the override once in testenv, because that
> >   breaks the more specific versions used in default environments like
> >   py35 and py36 (at least under older versions of tox).
> 
> 
> I assume that something like
> https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695
>  should be a perfectly acceptable alternative in at least some cases.
> Agreed?

I believe the confusion is that ignore_basepython_conflict didn't
appear in a release of tox until after we started patching projects
for this effort (in fact it was added to tox in part because we
discovered the issue in originally attempting to use basepython
globally).
-- 
Jeremy Stanley




Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Harry Rybacki
On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 wrote:
>
> Ideally I would like to see it in the form of least specific to most 
> specific. But more importantly in a way that there is no additional 
> delimiters between the service type and the resource. Finally, I do not like 
> the change of plurality depending on action type.
>
> I propose we consider
>
> ::[:]
>
> Example for keystone (note, action names below are strictly examples I am 
> fine with whatever form those actions take):
> identity:projects:create
> identity:projects:delete
> identity:projects:list
> identity:projects:get
>
> It keeps things simple and consistent when you're looking through overrides / 
> defaults.
> --Morgan
+1 -- I think the ordering, with `resource` coming before
`action|subaction`, will be cleaner.

/R

Harry
>
> On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  wrote:
>>
>> Bumping this thread again and proposing two conventions based on the 
>> discussion here. I propose we decide on one of the two following conventions:
>>
>> ::
>>
>> or
>>
>> :_
>>
>> Where <service_type> is the corresponding service type of the project [0],
>> and <action> is either create, get, list, update, or delete. I think
>> decoupling the method from the policy name should aid in consistency, 
>> regardless of the underlying implementation. The HTTP method specifics can 
>> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>>
>> I think the plurality of the resource should default to what makes sense for 
>> the operation being carried out (e.g., list:foobars, create:foobar).
>>
>> I don't mind the first one because it's clear about what the delimiter is 
>> and it doesn't look weird when projects have something like:
>>
>> :::
>>
>> If folks are ok with this, I can start working on some documentation that 
>> explains the motivation for this. Afterward, we can figure out how we want 
>> to track this work.
>>
>> What color do you want the shed to be?
>>
>> [0] https://service-types.openstack.org/service-types.json
>> [1] 
>> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>>
>> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  wrote:
>>>
>>>
>>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann  
>>> wrote:

   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt 
  wrote 
  > tl;dr: +1 consistent names
  > I would make the names mirror the API... because the Operator setting
  > them knows the API, not the code. Ignore the crazy names in Nova, I
  > certainly hate them.

 Big +1 on consistent naming, which will help operators as well as
 developers to maintain those.

  >
  > Lance Bragstad  wrote:
  > > I'm curious if anyone has context on the "os-" part of the format?
  >
  > My memory of the Nova policy mess...
  > * Nova's policy rules traditionally followed the patterns of the code
  > ** Yes, horrible, but it happened.
  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
  > * API used to expand with extensions, so the policy name is often based
  >   on extensions
  > ** note most of the extension code has now gone, including lots of
  >   related policies
  > * Policy in code was focused on getting us to a place where we could
  >   rename policy
  > ** Whoop whoop by the way, it feels like we are really close to
  >   something sensible now!
  > Lance Bragstad  wrote:
  > Thoughts on using create, list, update, and delete as opposed to post,
  > get, put, patch, and delete in the naming convention?
  > I could go either way as I think about "list servers" in the API. But my
  > preference is for the URL stub and POST, GET, etc.
  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:
  > If we consider dropping "os", should we entertain dropping "api", too?
  > Do we have a good reason to keep "api"? I wouldn't be opposed to simple
  > service types (e.g "compute" or "loadbalancer").
  > +1 The API is known as "compute" in api-ref, so the policy should be for
  > "compute", etc.

 Agree on mapping the policy name to api-ref as much as possible. Besides
 the policy name having 'os-', we also have 'os-' in resource names in the
 nova API URLs, like /os-agents, /os-aggregates, etc. (almost every
 resource except servers and flavors). Since we cannot get rid of those in
 the API URLs, do we need to keep the same in policy naming too? Or we
 could have a policy name like compute:agents:create/post, but that
 mismatches api-ref, where the agents resource URL is os-agents.
>>>
>>>
>>> Good question. I think this depends on how the service does policy 
>>> enforcement.
>>>
>>> I know we did something like this in keystone, which required policy names 
>>> and method names to be the same:
>>>
>>>   "identity:list_users": "..."
>>>
>>> Because the initial implementation of policy enforcement used a decorator 
>>> like this:
>>>
>>>   from keystone import controller
>>>
>>>   

Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-28 Thread Jay S Bryant

++ To what Jeremy said and congratulations.


On 9/27/2018 7:19 PM, Jeremy Stanley wrote:

On 2018-09-27 20:00:42 -0400 (-0400), Mohammed Naser wrote:
[...]

A big thank you to our election team who oversees all of this as
well :)

[...]

I wholeheartedly concur!

And an even bigger thank you to the 5 candidates who were not
elected this term; please run again in the next election if you're
able, I think every one of you would have made a great choice for a
seat on the OpenStack TC. Our community is really lucky to have so
many qualified people eager to take on governance tasks.




Re: [openstack-dev] [goal][python3] week 7 update

2018-09-28 Thread William M Edmonds

Doug Hellmann  wrote on 09/26/2018 06:29:11 PM:

> * We do not want to set the override once in testenv, because that
>   breaks the more specific versions used in default environments like
>   py35 and py36 (at least under older versions of tox).


I assume that something like
https://git.openstack.org/cgit/openstack/nova-powervm/commit/?id=fa64a93c965e6a6692711962ad6584534da81695
 should be a perfectly acceptable alternative in at least some cases.
Agreed?


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Morgan Fainberg
Ideally I would like to see it in the form of least specific to most
specific. But more importantly in a way that there is no additional
delimiters between the service type and the resource. Finally, I do not
like the change of plurality depending on action type.

I propose we consider

*::[:]*

Example for keystone (note, action names below are strictly examples I am
fine with whatever form those actions take):
*identity:projects:create*
*identity:projects:delete*
*identity:projects:list*
*identity:projects:get*

It keeps things simple and consistent when you're looking through overrides
/ defaults.
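
That form composes mechanically; a quick sketch (plain Python, names
illustrative -- the real defaults would be registered via oslo.policy's
DocumentedRuleDefault, as Lance notes above):

```python
# Sketch: composing policy names in the proposed
# <service_type>:<resource>:<action> form, decoupled from HTTP methods.
SERVICE_TYPE = "identity"  # from https://service-types.openstack.org


def policy_name(resource, action):
    # Least specific to most specific, ':' as the only delimiter.
    return ":".join((SERVICE_TYPE, resource, action))


for action in ("create", "delete", "list", "get"):
    print(policy_name("projects", action))
```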
--Morgan

On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  wrote:

> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
>
> *::*
>
> or
>
> *:_*
>
> Where <service_type> is the corresponding service type of the project [0],
> and <action> is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>
> I think the plurality of the resource should default to what makes sense
> for the operation being carried out (e.g., list:foobars, create:foobar).
>
> I don't mind the first one because it's clear about what the delimiter is
> and it doesn't look weird when projects have something like:
>
> :::
>
> If folks are ok with this, I can start working on some documentation that
> explains the motivation for this. Afterward, we can figure out how we want
> to track this work.
>
> What color do you want the shed to be?
>
> [0] https://service-types.openstack.org/service-types.json
> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>
> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>
>>
>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
>> wrote:
>>
>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>>> j...@johngarbutt.com> wrote 
>>>  > tl;dr: +1 consistent names
>>>  > I would make the names mirror the API... because the Operator setting
>>>  > them knows the API, not the code. Ignore the crazy names in Nova, I
>>>  > certainly hate them.
>>>
>>> Big +1 on consistent naming, which will help operators as well as
>>> developers to maintain those.
>>>
>>>  >
>>>  > Lance Bragstad  wrote:
>>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>>  >
>>>  > My memory of the Nova policy mess...
>>>  > * Nova's policy rules traditionally followed the patterns of the code
>>>  > ** Yes, horrible, but it happened.
>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>>  > * API used to expand with extensions, so the policy name is often
>>>  >   based on extensions
>>>  > ** note most of the extension code has now gone, including lots of
>>>  >   related policies
>>>  > * Policy in code was focused on getting us to a place where we could
>>>  >   rename policy
>>>  > ** Whoop whoop by the way, it feels like we are really close to
>>>  >   something sensible now!
>>>  > Lance Bragstad  wrote:
>>>  > Thoughts on using create, list, update, and delete as opposed to
>>>  > post, get, put, patch, and delete in the naming convention?
>>>  > I could go either way as I think about "list servers" in the API. But
>>>  > my preference is for the URL stub and POST, GET, etc.
>>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:
>>>  > If we consider dropping "os", should we entertain dropping "api",
>>>  > too? Do we have a good reason to keep "api"? I wouldn't be opposed to
>>>  > simple service types (e.g "compute" or "loadbalancer").
>>>  > +1 The API is known as "compute" in api-ref, so the policy should be
>>>  > for "compute", etc.
>>>
>>> Agree on mapping the policy name to api-ref as much as possible. Besides
>>> the policy name having 'os-', we also have 'os-' in resource names in the
>>> nova API URLs, like /os-agents, /os-aggregates, etc. (almost every
>>> resource except servers and flavors). Since we cannot get rid of those in
>>> the API URLs, do we need to keep the same in policy naming too? Or we
>>> could have a policy name like compute:agents:create/post, but that
>>> mismatches api-ref, where the agents resource URL is os-agents.
>>>
>>
>> Good question. I think this depends on how the service does policy
>> enforcement.
>>
>> I know we did something like this in keystone, which required policy
>> names and method names to be the same:
>>
>>   "identity:list_users": "..."
>>
>> Because the initial implementation of policy enforcement used a decorator
>> like this:
>>
>>   from keystone import controller
>>
>>   @controller.protected
>>   def list_users(self):
>>   ...
>>
>> Having the policy name the same as the method name made it easier for the
>> decorator implementation to resolve the policy needed to protect the API
>> because it just looked at the name of the wrapped method. The advantage was
>> that 
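
A minimal sketch of the decorator pattern Lance describes (hypothetical
names and a stub policy table; the real keystone implementation differed
in detail):

```python
import functools

# Stub registry: policy name is "identity:" + the wrapped method's name.
POLICIES = {"identity:list_users": "role:admin"}


def protected(func):
    """Resolve the policy to enforce from the decorated method's name."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        rule = "identity:%s" % func.__name__
        if rule not in POLICIES:
            raise RuntimeError("no policy registered for %s" % rule)
        # A real enforcer would evaluate POLICIES[rule] against the
        # request context here before calling through.
        return func(self, *args, **kwargs)
    return wrapper


class UserController:
    @protected
    def list_users(self):
        return ["alice", "bob"]
```

The convenience -- and the coupling -- is visible: rename the method and
you have silently renamed the policy.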

Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-28 Thread Matt Riedemann

On 9/21/2018 9:08 AM, Előd Illés wrote:

Hi,

Here is an etherpad with the teams that have stable:follow-policy tag on 
their repos:


https://etherpad.openstack.org/p/ocata-final-release-before-em

On the links you can find reports about the open and unreleased changes, 
that could be a useful input for the before-EM/final release.
Please have a look at the report (and review the open patches if there 
are) so that a release can be made if necessary.


Thanks,

Előd


I've added nova's ocata-em tracking etherpad to the list.

https://etherpad.openstack.org/p/nova-ocata-em

--

Thanks,

Matt



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Chris Dent

On Fri, 28 Sep 2018, Jay Pipes wrote:


On 09/28/2018 09:25 AM, Eric Fried wrote:

It's time somebody said this.


Yes, a useful topic, I think.


Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits.


Don't pussyfoot around things. It's me you're talking about, Eric. You could 
just ask me instead of passive-aggressively posting to the list like this.


It's not just you. Ed and I have also expressed some fairly strong
statement about how traits are "supposed" to be used and I would
guess that from Eric's perspective all three of us (amongst others)
have some form of architectural influence. Since it takes a village
and all that.

They aren't arbitrary. They are there for a reason: a trait is a boolean 
capability. It describes something that either a provider is capable of 
supporting or it isn't.


This is somewhat (maybe even only slightly) different from what I
think the definition of a trait is, and that nuance may be relevant.

I describe a trait as a "quality that a resource provider has" (the
car is blue). This contrasts with a resource class which is a
"quantity that a resource provider has" (the car has 4 doors).

Our implementation is pretty much exactly that ^. We allow
clients to ask "give me things that have qualities x, y, z, not
qualities a, b, c, and quantities of G of 5 and H of 7".

Add in aggregates and we have exactly what you say:


* Does the provider have *capacity* for the requested resources?
* Does the provider have the required (or forbidden) *capabilities*?
* Does the provider belong to some group?
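
The capacity and capability parts of those constraints surface directly in
the allocation candidates query; a sketch of building one (trait and
resource-class names are illustrative; the `!` forbidden-trait prefix
arrived in placement microversion 1.22, and group membership would go in
a `member_of` parameter):

```python
from urllib.parse import urlencode

# Quantities (resources) and qualities (required/forbidden traits),
# i.e. the "x, y, z / not a, b, c / G of 5, H of 7" request above.
params = {
    "resources": "VCPU:5,CUSTOM_H:7",
    "required": "CUSTOM_X,CUSTOM_Y,!CUSTOM_A",
}
# Keep ':', ',' and '!' readable rather than percent-encoded.
url = "/allocation_candidates?" + urlencode(params, safe=",:!")
print(url)
```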


The nuance of difference is that your description of *capabilities*
seems more narrow than my description of *qualities* (aka
characteristics). You've got something fairly specific in mind, as a
way of constraining the profusion of noise that has happened with
how various kinds of information about resources of all sorts is
managed in OpenStack, as you describe in your message.

I do not think it should be placement's job to control that noise.
It should be placement's job to provide a very strict contract about
what you can do with a trait:

* create it, if necessary
* assign it to one or more resource providers
* ask for providers that either have it
* ... or do not have it

That's all. Placement _code_ should _never_ be aware of the value of
a trait (except for the magical MISC_SHARES...). It should never
become possible to regex on traits or do comparisons (required=

If we want to add further constraints to the placement allocation candidates
request that ask things like:


* Does the provider have version 1.22.61821 of BIOS firmware from Marvell 
installed on it?


That's a quality of the provider in a moment.

* Does the provider support an FPGA that has had an OVS program flashed to it 
in the last 20 days?


If you squint, so is this.

* Does the provider belong to physical network "corpnet" and also support 
creation of virtual NICs of type either "DIRECT" or "NORMAL"?


And these.

But at least some of them are dynamic rather than some kind of
platonic ideal associated with the resource provider.

I don't think placement should be concerned about temporal aspects
of traits. If we can't write a web service that can handle setting
lots of traits every second of every day, we should go home. If
clients of placement want to set weird traits, more power to them.

However, if clients of placement (such as nova) which are being the
orchestrator of resource providers manipulated by multiple systems
(neutron, cinder, ironic, cyborg, etc) wish to set some constraints
on how and what traits can do and mean, then that is up to them.

nova-scheduler is the thing that is doing `GET
/allocation_candidates` for those multiple systems. It presumably
should have some say in what traits it is willing to express and
use.

But the placement service doesn't and shouldn't care.

Then we should add a data model that allow providers to be decorated with 
key/value (or more complex than key/value) information where we can query for 
those kinds of constraints without needing to encode all sorts of non-binary 
bits of information into a capability string.


Let's never do this, please. The three capabilities (ha!) of
placement that you listed above ("Does the...") are very powerful as
is and have a conceptual integrity that's really quite awesome. I
think keeping it contained and constrained in very "simple" concepts
like that was stroke of genius you (Jay) made and I'd hope we can
keep it clean like that.

If we weren't a multiple-service oriented system, and instead had
some kind of k8s-like etcd-like
keeper-of-all-the-info-about-everything, then sure, having 

Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-28 Thread Arkady.Kanevsky
Congrats to newly elected TCs and all people who run.

-Original Message-
From: Doug Hellmann  
Sent: Friday, September 28, 2018 10:29 AM
To: Emmet Hikory; OpenStack Developers
Subject: Re: [openstack-dev] [all][tc][elections] Stein TC Election Results



Emmet Hikory  writes:

> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC):
>
>   - Doug Hellmann (dhellmann)
>   - Julia Kreger (TheJulia)
>   - Jeremy Stanley (fungi)
>   - Jean-Philippe Evrard (evrardjp)
>   - Lance Bragstad (lbragstad)
>   - Ghanshyam Mann (gmann)

Congratulations, everyone! I'm looking forward to serving with all of
you for another term.

> Full Results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates helps
> engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote.  Voter turnout
> was significantly up from recent cycles.  We need to ensure your voices are
> heard.

It's particularly good to hear that turnout is up, not just in
percentage but in raw numbers, too. Thank you all for voting!

Doug



[openstack-dev] [nova][stable] Preparing for ocata-em (extended maintenance)

2018-09-28 Thread Matt Riedemann
Per the other thread on this [1] I've created an etherpad [2] to track 
what needs to happen to get nova's stable/ocata branch ready for 
Extended Maintenance [3] which means we need to flush our existing Ocata 
backports that we want in the final Ocata release before tagging the 
branch as ocata-em, after which point we won't do releases from that 
branch anymore.


The etherpad lists each open ocata backport along with any of its 
related backports on newer branches like pike/queens/etc. Since we need 
the backports to go in order, we need to review and merge the changes on 
the newer branches first. With the state of the gate lately, we really 
can't sit on our hands here because it will probably take up to a week 
just to merge all of the changes for each branch.


Once the Ocata backports are flushed through, we'll cut the final 
release and tag the branch as being in extended maintenance.


Do we want to coordinate a review day next week for the 
nova-stable-maint core team, like Tuesday, or just trust that you all 
know who you are and will help out as necessary in getting these reviews 
done? Non-stable cores are also welcome to help review here to make sure 
we're not missing something, which is also a good way to get noticed as 
caring about stable branches and eventually get you on the stable maint 
core team.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/thread.html#134810

[2] https://etherpad.openstack.org/p/nova-ocata-em
[3] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance


--

Thanks,

Matt



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Jay Pipes

On 09/28/2018 09:25 AM, Eric Fried wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits.


Don't pussyfoot around things. It's me you're talking about, Eric. You 
could just ask me instead of passive-aggressively posting to the list 
like this.



We're not even talking about anything that would require changes to
the placement API. Just, "Oh, that's not a *capability* - shut it
down."

That's precisely the attitude that got the Nova scheduler into the 
unmaintainable and convoluted mess that it is now: "well, who cares if a 
concept was originally intended to describe X, it's just *easier* for us 
to re-use this random piece of data in ways it wasn't intended because 
that way we don't have to change anything about our docs or our API".


And *this* is the kind of stuff you end up with:

https://github.com/openstack/nova/blob/99bf62e42701397690fe2b4987ce4fd7879355b8/nova/scheduler/filters/compute_capabilities_filter.py#L35-L107

Which is a pile of unreadable, unintelligible garbage; nobody knows how 
it works, how it originally was intended to work, or how to really clean 
it up.



Bubble wrap was originally intended as a textured wallpaper and a
greenhouse insulator. Can we accept the fact that traits have (many,
many) uses beyond marking capabilities, and quit with the arbitrary
restrictions?


They aren't arbitrary. They are there for a reason: a trait is a boolean 
capability. It describes something that either a provider is capable of 
supporting or it isn't.


Conceptually, having boolean traits/capabilities is important because it 
allows the user to reason simply about how a provider meets the 
requested constraints for scheduling.


Currently, those constraints include the following:

* Does the provider have *capacity* for the requested resources?
* Does the provider have the required (or forbidden) *capabilities*?
* Does the provider belong to some group?

If we want to add further constraints to the placement allocation 
candidates request that ask things like:


* Does the provider have version 1.22.61821 of BIOS firmware from 
Marvell installed on it?
* Does the provider support an FPGA that has had an OVS program flashed 
to it in the last 20 days?
* Does the provider belong to physical network "corpnet" and also 
support creation of virtual NICs of type either "DIRECT" or "NORMAL"?


Then we should add a data model that allow providers to be decorated 
with key/value (or more complex than key/value) information where we can 
query for those kinds of constraints without needing to encode all sorts 
of non-binary bits of information into a capability string.


Propose such a thing and I'll gladly support it. But I won't support 
bastardizing the simple concept of a boolean capability just because we 
don't want to change the API or database schema.


-jay



[openstack-dev] [placement] update 18-39

2018-09-28 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-39.html

Welcome to a placement update. This week is mostly focused on specs
and illuminating some of the pressing issues with extraction.

# Most Important

Last week's important tasks remain important:

* Work on specs and setting
  [priorities](https://etherpad.openstack.org/p/nova-ptg-stein-priorities).
* Working towards upgrade tests (see more on that in the extraction
  section below).

# What's Changed

Tetsuro is a core reviewer in placement now. Yay! Welcome.

Mel produced a [summary of the
PTG](http://lists.openstack.org/pipermail/openstack-dev/2018-September/135122.html)
with some good links and plans.

# Questions and Links

No answer to last week's question that I can recall, so here it is
again:

* [Last week], belmoreira showed up in 
[#openstack-placement](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-09-20.log.html#t2018-09-20T14:11:59)
  with some issues with expected resource providers not showing up
  in allocation candidates. This was traced back to `max_unit` for
  `VCPU` being locked at == `total` and hardware which had had SMT
  turned off now reporting fewer CPUs, thus being unable to accept
  existing large flavors. Discussion ensued about ways to
  potentially make `max_unit` more manageable by operators. The
  existing constraint is there for a reason (discussed in IRC) but
  that reason is not universally agreed.

There are two issues with this: the "reason" is not universally
agreed, and we didn't resolve that. Also, management of
`max_unit` of any inventory gets more complicated in a world of
complex NUMA topologies.
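The failure mode belmoreira hit can be sketched in a few lines. This is an illustration of the constraint being discussed, not placement's actual implementation; the numbers and the 16.0 allocation ratio are made up for the example:

```python
# Illustrative sketch of the constraint discussed above (not placement's
# real code). max_unit caps the size of a single allocation; with it
# locked to total, turning SMT off halves both, and an existing large
# flavor can no longer produce allocation candidates even though the
# allocation_ratio would leave plenty of overcommit capacity.
def can_allocate(requested, total, used, max_unit, allocation_ratio=16.0):
    if requested > max_unit:        # per-allocation cap
        return False
    return used + requested <= total * allocation_ratio


# Before: 32 hardware threads, max_unit == total == 32
assert can_allocate(requested=24, total=32, used=0, max_unit=32)

# After SMT is turned off: 16 cores, so max_unit is forced down to 16.
# The same 24-VCPU flavor now yields no allocation candidates, despite
# 16 * 16.0 = 256 VCPU of nominal overcommit capacity remaining.
assert not can_allocate(requested=24, total=16, used=0, max_unit=16)
```

This is why making `max_unit` operator-adjustable was floated: the capacity is there, but the per-allocation cap rejects the request.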

Eric has raised a question about the [intended purpose of
traits](http://lists.openstack.org/pipermail/openstack-dev/2018-September/135209.html).

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 18.
  +1.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 9. -1.

# Specs

* 
  Account for host agg allocation ratio in placement
  (Still in rocky/)

* 
  Add subtree filter for GET /resource_providers

* 
  Resource provider - request group mapping in allocation candidate

* 
  VMware: place instances on resource pool
  (still in rocky/)

* 
  Standardize CPU resource tracking

* 
  Allow overcommit of dedicated CPU
  (Has an alternative which changes allocations to a float)

* 
  List resource providers having inventory

* 
  Bi-directional enforcement of traits

* 
  allow transferring ownership of instance

* 
  Modelling passthrough devices for report to placement

* 
  Propose counting quota usage from placement and API database
  (A bit out of date but may be worth resurrecting)

* 
  Spec: allocation candidates in tree

* 
  [WIP] generic device discovery policy

* 
  Nova Cyborg interaction specification.

* 
  supporting virtual NVDIMM devices

* 
  Spec: Support filtering by forbidden aggregate

* 
  Proposes NUMA topology with RPs

* 
  Support initial allocation ratios

* 
  Count quota based on resource class

# Main Themes

## Making Nested Useful

Work on getting nova's use of nested resource providers happy and
fixing bugs discovered in placement in the process.

* 
* 

## Consumer Generations

gibi is still working hard to drive home support for consumer
generations on the nova side. Because of some dependency management
that stuff is currently in the following topic:

* 

## Extraction

There are few large-ish things in progress with the extraction
process which need some broader attention:

* Matt is working on a [patch to grenade](https://review.openstack.org/604454)
  to deal with upgrading, with a migration of data.

* We have work in progress to tune up the documentation but we are
  not yet publishing documentation. We need to work out a plan for
  this. Presumably we don't want to be publishing docs until we are
  publishing code, but the interdependencies need to be teased out.

* We need to decide how we are going 

Re: [openstack-dev] [python3-first] support in stable branches

2018-09-28 Thread Doug Hellmann
Dariusz Krol  writes:

> Hello,
>
>
> I'm specifically referring to branches mentioned in: 
> https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54
>  

I'm still not entirely sure what you're saying is happening that you do
not expect to have happening, but I'll take a guess.

The zuul migration portion of the goal work needs to move *all* of the
Zuul settings for a repo into the correct branch because after the
migration the job settings will no longer be in project-config at all
and so zuul won't know which jobs to run on the stable branches if we
haven't imported the settings.

The migration script tries to figure out which jobs apply to which
branches of each repo by looking at the branch specifier settings in
project-config, and then it creates an import patch for each branch with
the relevant jobs. Subsequent steps in the script change the
documentation and release notes jobs and then add new python 3.6 testing
jobs. Those steps only apply to the master branch.

So, if you have a patch importing a python 3 job setting to a stable
branch of a repo where you aren't expecting it (and it isn't supported),
that's most likely because project-config has no branch specifiers for
the job (meaning it should run on all branches). We did find several
cases where that was true because projects added jobs without branch
specifiers after the branches were created, and then never backported
the job settings to the stable branch. See
http://lists.openstack.org/pipermail/openstack-dev/2018-August/133594.html
for details.
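To make the branch-specifier point concrete, here is roughly what the migration script looks for in project-config. The repo and job names are illustrative, not taken from any real project:

```yaml
# Sketch of a project-config entry (repo/job names are hypothetical).
# A job listed without a "branches:" matcher is assumed to run on every
# branch, so the migration imports it into each stable branch too.
- project:
    name: openstack/example
    check:
      jobs:
        - openstack-tox-py35           # no branch matcher -> all branches
        - openstack-tox-py36:
            branches: ^master$         # variant restricted to master only
```

If a python 3 job shows up in a stable-branch import patch unexpectedly, the first thing to check is whether its project-config entry looked like the first form above.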

Doug

> I hope this helps.
>
>
> Best,
>
> Dariusz Krol
>
>
> On 09/27/2018 06:04 PM, Ben Nemec wrote:
>>
>>
>> On 9/27/18 10:36 AM, Doug Hellmann wrote:
>>> Dariusz Krol  writes:
>>>
 Hello Champions :)


 I work on the Trove project and we are wondering if python3 should be
 supported in previous releases as well?

 Actually this question was asked by Alan Pevec from the stable branch
 maintainers list.

 I saw you added releases up to ocata to support python3 and there are
 already changes on gerrit waiting to be merged but after reading [1] I
 have my doubts about this.
>>>
>>> I'm not sure what you're referring to when you say "added releases up to
>>> ocata" here. Can you link to the patches that you have questions about?
>>
>> Possibly the zuul migration patches for all the stable branches? If 
>> so, those don't change the status of python 3 support on the stable 
>> branches, they just split the zuul configuration to make it easier to 
>> add new python 3 jobs on master without affecting the stable branches.
>>
>>>
 Could you elaborate why it is necessary to support previous releases ?


 Best,

 Dariusz Krol


 [1] https://docs.openstack.org/project-team-guide/stable-branches.html
 __ 

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __ 
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>



Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-09-28 Thread Doug Hellmann
z...@openstack.org writes:

> Build failed.
>
> - release-openstack-python 
> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/
>  : FAILURE in 3m 57s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

The error here is

  ERROR: unknown environment 'venv'

It looks like os-log-merger is not set up for the
release-openstack-python job, which expects a specific tox setup.

http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/
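For context, the release job drives the build through a generic tox environment (roughly `tox -e venv -- ...`), so the usual fix is a pass-through environment in the project's tox.ini. A minimal sketch, assuming the project otherwise has a standard tox setup:

```ini
# Minimal pass-through environment expected by release tooling that
# invokes "tox -e venv -- <command>": it just runs whatever command
# is passed after the "--" separator.
[testenv:venv]
commands = {posargs}
```

With that present, `tox -e venv` no longer fails with "unknown environment 'venv'".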



Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-28 Thread Doug Hellmann
Emmet Hikory  writes:

> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC):
>
>   - Doug Hellmann (dhellmann)
>   - Julia Kreger (TheJulia)
>   - Jeremy Stanley (fungi)
>   - Jean-Philippe Evrard (evrardjp)
>   - Lance Bragstad (lbragstad)
>   - Ghanshyam Mann (gmann)

Congratulations, everyone! I'm looking forward to serving with all of
you for another term.

> Full Results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates helps
> engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote.  Voter turnout
> was significantly up from recent cycles.  We need to ensure your voices are
> heard.

It's particularly good to hear that turnout is up, not just in
percentage but in raw numbers, too. Thank you all for voting!

Doug



Re: [openstack-dev] [oslo][castellan] Time for a 1.0 release?

2018-09-28 Thread Ben Nemec



On 9/28/18 3:59 AM, Thierry Carrez wrote:

Ade Lee wrote:

On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote:

Doug pointed out on a recent Oslo release review that castellan is
still
not officially 1.0. Given the age of the project and the fact that
we're
asking people to deploy a Castellan-compatible keystore as one of
the
base services, it's probably time to address that.

To that end, I'm sending this to see if anyone is aware of any
reasons
we shouldn't go ahead and tag a 1.0 of Castellan.



+ 1


+1
Propose it and we can continue the discussion on the review :)



Done: https://review.openstack.org/606108



[openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI

2018-09-28 Thread Alex Schultz
Hey Folks,

Currently the tripleo gate is at 21 hours and we continue to have
timeouts, and now scenario001/004 (in queens/pike) appear to be broken.
Additionally we've got some patches in puppet-openstack that we need
to land in order to resolve broken puppet unit tests which is
affecting both projects.

Currently we need to wait for the following to land in puppet:
https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
https://review.openstack.org/#/c/605350/

In tripleo we have not yet identified the root cause of any of the
timeout failures, so I'd like us to work on that before trying to
land anything else, because the gate resets are killing us and not
helping anything.  We have landed a few patches that have improved the
situation but we're still hitting issues.

https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the
scenario001/004 issues.  It appears that we're ending up with a newer
version of ansible on the system than what the packages provide. We're
still working on figuring out where it's coming from.

Please do not approve anything or recheck unless it's to address CI
issues at this time.

Thanks,
-Alex



Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Matthew Treinish
On Fri, Sep 28, 2018 at 03:31:10PM +0100, Chris Dent wrote:
> On Fri, 28 Sep 2018, Matthew Treinish wrote:
> 
> > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683
> > 
> > Right above this line it shows that the gabbi-tempest plugin is installed in
> > the venv:
> > 
> > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661
> 
> Ah, so it is, thanks. My grepping and visual-grepping failed
> because of the weird linebreaks. Le sigh.
> 
> For curiosity: What's the processing that is making it be installed
> twice? I ask because I'm hoping to (eventually) trim this to as
> small and light as possible. And then even more eventually I hope to
> make it so that if a project chooses the right job and has a gabbits
> directory, they'll get run.

The plugin should only be installed once. From the logs here is the only
place the plugin is being installed in the venv:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_01_027151

The rest of the references are just tox printing out the packages installed in
the venv before running a command.

> 
> The part that was confusing for me was that the virtual env that
> lib/tempest (from devstack) uses is not even mentioned in tempest's
> tox.ini, so is using its own directory as far as I could tell.

It should be, devstack should be using the venv-tempest tox job to do venv
prep (like installling the plugins) and run commands (like running
tempest list-plugins for the log). This tox env is defined here:

https://github.com/openstack/tempest/blob/master/tox.ini#L157-L162

It's sort of a hack: devstack is just using tox as a venv manager for
setting up tempest. But then we use tox in the runner (what used to be
devstack-gate), so this made sense.

-Matt Treinish

> 
> > My guess is that the plugin isn't returning any tests that match the regex.
> 
> I'm going to run it without a regex and see what it produces.
> 
> It might be that pre job I'm using to try to get the gabbits in the
> right place is not working as desired.
> 
> A few patchsets ago when I was using the oogly way of doing things
> it was all working.




Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Zane Bitter

On 28/09/18 9:25 AM, Eric Fried wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's the only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would require
changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."


So I have no idea what traits or capabilities are (in this context), but 
I have a bit of experience with running a busy project where everyone 
wants to get their pet feature in, so I'd like to offer a couple of 
observations if I may:


* Conceptual integrity *is* important.

* 'Everything we could think of before we had a chance to try it' is not 
an especially compelling concept, and using it in place of one will tend 
to result in a lot of repeated arguments.


Both extremes ('that's how we've always done it' vs. 'free-for-all') are 
probably undesirable. I'd recommend trying to document traits in 
conceptual, rather than historical, terms. What are they good at? What 
are they not good at? Is there a limit to how many there can be while 
still remaining manageable? Are there other potential concepts that 
would map better to certain borderline use cases? That won't make the 
arguments go away, but it should help make them easier to resolve.


cheers,
Zane.



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Balázs Gibizer



On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's the only reason I've ever been able to glean: that it (whatever 
"it"

is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would 
require

changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."

Bubble wrap was originally intended as a textured wallpaper and a
greenhouse insulator. Can we accept the fact that traits have (many,
many) uses beyond marking capabilities, and quit with the arbitrary
restrictions?


How far are we willing to go? Is an arbitrary (key: value) pair 
encoded in a trait name as key_`str(value)` (e.g. 
CURRENT_TEMPERATURE: 85 encoded as CUSTOM_TEMPERATURE_85) something we 
would be OK to see in placement?
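gibi's hypothetical encoding is easy to sketch, and doing so shows exactly why it is fragile: two pieces of information get packed into one string that only a reader who knows the naming scheme can unpack. This is an illustration of the anti-pattern under discussion, not a proposal:

```python
# Sketch of the (key, value)-in-a-trait-name encoding under discussion.
# The "key" is the trait-name prefix, the "value" is the suffix; the
# consumer must know the scheme to recover either piece.
PREFIX = "CUSTOM_TEMPERATURE_"


def encode(temperature):
    """Pack a temperature value into a single trait string."""
    return f"{PREFIX}{temperature}"


def decode(trait):
    """Unpack the temperature; fails for any other trait string."""
    if not trait.startswith(PREFIX):
        raise ValueError(f"not a temperature trait: {trait}")
    return int(trait[len(PREFIX):])


trait = encode(85)
assert trait == "CUSTOM_TEMPERATURE_85"
assert decode(trait) == 85

# A range query like "temperature < 90" cannot be expressed against
# boolean traits without enumerating every possible encoded value:
# required=in:CUSTOM_TEMPERATURE_0,CUSTOM_TEMPERATURE_1,...,_89
```

The last comment is the crux of the thread: once a value is flattened into a string, comparison queries stop being possible.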


Cheers,
gibi





Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Sean McGinnis
> On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad  wrote:
> 
> > Bumping this thread again and proposing two conventions based on the
> > discussion here. I propose we decide on one of the two following
> > conventions:
> >
> > *<service-type>:<resource>:<action>*
> >
> > or
> >
> > *<service-type>:<resource>_<action>*
> >
> > Where <service-type> is the corresponding service type of the project [0],
> > and <action> is either create, get, list, update, or delete. I think
> > decoupling the method from the policy name should aid in consistency,
> > regardless of the underlying implementation. The HTTP method specifics can
> > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
> >
> > I think the plurality of the resource should default to what makes sense
> > for the operation being carried out (e.g., list:foobars, create:foobar).
> >
> > I don't mind the first one because it's clear about what the delimiter is
> > and it doesn't look weird when projects have something like:
> >
> > <service-type>:<resource>:<subresource>:<action>
> >

My initial preference was the second format, but you make a good point here
about potential subactions. Either is fine with me - the main thing I would
love to see is consistency in format. But based on this part, I vote for option
2.
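The two candidate formats are mechanical enough to sketch. The service/resource names below (compute, servers) are illustrative only, and the rule dict merely mirrors the fields oslo.policy's DocumentedRuleDefault carries; it is not the real object:

```python
# Sketch of the two proposed policy-name conventions. Everything here is
# illustrative: "compute"/"servers" are example names, and the dict only
# mirrors the shape of oslo.policy's DocumentedRuleDefault fields.
def name_colon(service_type, resource, action):
    """Convention 1: <service-type>:<resource>:<action>"""
    return f"{service_type}:{resource}:{action}"


def name_underscore(service_type, resource, action):
    """Convention 2: <service-type>:<resource>_<action>"""
    return f"{service_type}:{resource}_{action}"


assert name_colon("compute", "servers", "list") == "compute:servers:list"
assert name_underscore("compute", "servers", "list") == "compute:servers_list"

# The HTTP specifics stay out of the name and ride along separately,
# the way DocumentedRuleDefault's operations list carries them:
rule = {
    "name": name_underscore("compute", "servers", "create"),
    "check_str": "role:member and project_id:%(project_id)s",
    "description": "Create a server.",
    "operations": [{"method": "POST", "path": "/servers"}],
}
```

Convention 1's advantage shows up with subresources, where the colon keeps each segment unambiguous (e.g. `compute:servers:interfaces:create`).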

> > If folks are ok with this, I can start working on some documentation that
> > explains the motivation for this. Afterward, we can figure out how we want
> > to track this work.
> >

+1 thanks for working on this!




Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Chris Dent

On Fri, 28 Sep 2018, Matthew Treinish wrote:


http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683


Right above this line it shows that the gabbi-tempest plugin is installed in
the venv:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661


Ah, so it is, thanks. My grepping and visual-grepping failed
because of the weird linebreaks. Le sigh.

For curiosity: What's the processing that is making it be installed
twice? I ask because I'm hoping to (eventually) trim this to as
small and light as possible. And then even more eventually I hope to
make it so that if a project chooses the right job and has a gabbits
directory, they'll get run.

The part that was confusing for me was that the virtual env that
lib/tempest (from devstack) uses is not even mentioned in tempest's
tox.ini, so is using its own directory as far as I could tell.


My guess is that the plugin isn't returning any tests that match the regex.


I'm going to run it without a regex and see what it produces.

It might be that pre job I'm using to try to get the gabbits in the
right place is not working as desired.

A few patchsets ago when I was using the oogly way of doing things
it was all working.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [keystone] Keystone Team Update - Week of 24 September 2018

2018-09-28 Thread Colleen Murphy
# Keystone Team Update - Week of 24 September 2018

## News

A theme this week was enhancing keystone's federation implementation to better 
support Edge use cases. We talked about it some on IRC[1] and the mailing 
list[2].

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-25.log.html#t2018-09-25T16:37:42
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/135072.html

## Open Specs

Search query: https://bit.ly/2Pi6dGj

In addition to the Stein specs mentioned last week, Adam has been working on an 
untargeted spec for federation enhancements[3].

[3] https://review.openstack.org/313604

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT

We merged 15 changes this week, including lots of bugfixes and improvements to 
our Zuul config.

## Changes that need Attention

Search query: https://bit.ly/2PUk84S

There are 54 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Bugs

This week we opened 7 new bugs and closed 7.

Bugs opened (7) 
Bug #1794376 (keystone:High) opened by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1794376
Bug #1794552 (keystone:High) opened by Adam Young 
https://bugs.launchpad.net/keystone/+bug/1794552
Bug #1794864 (keystone:Medium) opened by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1794864 
Bug #1794527 (keystone:Wishlist) opened by Adam Young 
https://bugs.launchpad.net/keystone/+bug/1794527 
Bug #1794112 (keystone:Undecided) opened by fuckubuntu1 
https://bugs.launchpad.net/keystone/+bug/1794112 
Bug #1794726 (keystone:Undecided) opened by Colleen Murphy 
https://bugs.launchpad.net/keystone/+bug/1794726 
Bug #1794179 (keystonemiddleware:Undecided) opened by Tim Burke 
https://bugs.launchpad.net/keystonemiddleware/+bug/1794179 

Bugs closed (3) 
Bug #1794112 (keystone:Undecided) 
https://bugs.launchpad.net/keystone/+bug/1794112 
Bug #973681 (keystonemiddleware:Medium) 
https://bugs.launchpad.net/keystonemiddleware/+bug/973681 
Bug #1473042 (keystonemiddleware:Wishlist) 
https://bugs.launchpad.net/keystonemiddleware/+bug/1473042 

Bugs fixed (4) 
Bug #1750843 (keystone:Low) fixed by Matthew Thode 
https://bugs.launchpad.net/keystone/+bug/1750843 
Bug #1768980 (keystone:Low) fixed by Colleen Murphy 
https://bugs.launchpad.net/keystone/+bug/1768980 
Bug #1473292 (keystone:Wishlist) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/keystone/+bug/1473292 
Bug #1275962 (keystonemiddleware:Wishlist) fixed by no one 
https://bugs.launchpad.net/keystonemiddleware/+bug/127596

## Milestone Outlook

https://releases.openstack.org/stein/schedule.html

The spec proposal freeze deadline is a month away. If you would like to see a 
feature in keystone in Stein, please propose it now so it can get feedback 
before the spec freeze deadline.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and 
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67



Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Julia Kreger
Eric,

Very well said, I completely agree with you. We should not hold
ourselves back based upon perceptions of original intended purpose.
Things do change. We have to accept that. We must normalize this fact
in our actions moving forward.

That being said, I'm not entirely sure I'm personally fully aware of
the arbitrary restrictions you speak of. Is there thread or a
discussion out there that I can gain further context with?

Thanks!

-Julia
On Fri, Sep 28, 2018 at 6:25 AM Eric Fried  wrote:
>
> It's time somebody said this.
>
> Every time we turn a corner or look under a rug, we find another use
> case for provider traits in placement. But every time we have to have
> the argument about whether that use case satisfies the original
> "intended purpose" of traits.
>
> That's the only reason I've ever been able to glean: that it (whatever "it"
> is) wasn't what the architects had in mind when they came up with the
> idea of traits. We're not even talking about anything that would require
> changes to the placement API. Just, "Oh, that's not a *capability* -
> shut it down."
>
> Bubble wrap was originally intended as a textured wallpaper and a
> greenhouse insulator. Can we accept the fact that traits have (many,
> many) uses beyond marking capabilities, and quit with the arbitrary
> restrictions?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Matthew Treinish
On Fri, Sep 28, 2018 at 02:39:24PM +0100, Chris Dent wrote:
> 
> I'm still trying to figure out how to properly create a "modern" (as
> in zuul v3 oriented) integration test for placement using gabbi and
> tempest. That work is happening at https://review.openstack.org/#/c/601614/
> 
> There was lots of progress made after the last message on this
> topic 
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
> but I've reached another interesting impasse.
> 
> From devstack's standpoint, the way to say "I want to use a tempest
> plugin" is to set TEMPEST_PLUGINS to alist of where the plugins are.
> devstack:lib/tempest then does a:
> 
> tox -evenv-tempest -- pip install -c 
> $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS
> 
> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163
> 
> I have this part working as expected.
> 
> However,
> 
> The advice is then to create a new job that has a parent of
> devstack-tempest. That zuul job runs a variety of tox environments,
> depending on the setting of the `tox_envlist` var. If you wish to
> use a `tempest_test_regex` (I do) the preferred tox environment is
> 'all'.
> 
> That venv doesn't have the plugin installed, thus no gabbi tests are
> found:
> 
> http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683

Right above this line it shows that the gabbi-tempest plugin is installed in
the venv:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661

at version 0.1.1. It's a bit weird because it's line wrapped in my browser.
The devstack logs also shows the plugin:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/controller/logs/devstacklog.txt.gz#_2018-09-28_11_13_13_076

All the tempest tox jobs that run tempest (and the tempest-venv command used by
devstack) run inside the same tox venv:

https://github.com/openstack/tempest/blob/master/tox.ini#L52

My guess is that the plugin isn't returning any tests that match the regex.

I'm also a bit alarmed that tempest run is returning 0 there when no tests are
being run. That's definitely a bug because things should fail with no tests
being successfully run.

-Matt Treinish

> 
> How do I get my plugin installed into the right venv while still
> following the guidelines for good zuul behavior?
> 




Re: [openstack-dev] [python3-first] support in stable branches

2018-09-28 Thread Dariusz Krol
Hello,


I'm specifically referring to branches mentioned in: 
https://github.com/openstack/goal-tools/blob/4125c31e74776a7dc6a15d2276ab51ff3e73cd16/goal_tools/python3_first/jobs.py#L54
 



I hope this helps.


Best,

Dariusz Krol


On 09/27/2018 06:04 PM, Ben Nemec wrote:
>
>
> On 9/27/18 10:36 AM, Doug Hellmann wrote:
>> Dariusz Krol  writes:
>>
>>> Hello Champions :)
>>>
>>>
>>> I work on the Trove project and we are wondering if python3 should be
>>> supported in previous releases as well?
>>>
>>> Actually this question was asked by Alan Pevec from the stable branch
>>> maintainers list.
>>>
>>> I saw you added releases up to ocata to support python3 and there are
>>> already changes on gerrit waiting to be merged but after reading [1] I
>>> have my doubts about this.
>>
>> I'm not sure what you're referring to when you say "added releases up to
>> ocata" here. Can you link to the patches that you have questions about?
>
> Possibly the zuul migration patches for all the stable branches? If 
> so, those don't change the status of python 3 support on the stable 
> branches, they just split the zuul configuration to make it easier to 
> add new python 3 jobs on master without affecting the stable branches.
>
>>
>>> Could you elaborate why it is necessary to support previous releases ?
>>>
>>>
>>> Best,
>>>
>>> Dariusz Krol
>>>
>>>
>>> [1] https://docs.openstack.org/project-team-guide/stable-branches.html
>>> __ 
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __ 
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>




Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-28 Thread Julia Kreger
On Fri, Sep 28, 2018 at 5:00 AM Jeremy Stanley  wrote:

>
> If memory serves, the biggest challenge around that solution was
> determining who approves such proposals since they still need
> per-project specs for the project-specific details anyway. Perhaps
> someone who has recently worked on a feature which required
> coordination between several teams (but not a majority of teams like
> our cycle goals process addresses) can comment on what worked for
> them and what improvements they would make on the process they
> followed.
> --

This is definitely the biggest challenge, and I think it is one of
those things that is going to be on case by case basis.

In the case of neutron smartnic support with ironic, the spec is
largely living in ironic-specs, but we are collaborating with neutron
folks. They may have other specs that tie in, but that we don't
necessarily need to be aware of. I also think the prior ironic/neutron
integration work executed that way.  My perception with nova has also
largely been similar with ironic's specs driving some changes in the
nova ironic virt driver because we were evolving ironic, as long as
there is a blueprint or something tracking that piece of work so they
have visibility.  At some point, some spec has to get a green light or
be pushed forward first. Beyond that, it is largely a tracking issue
as long as there is consensus.



Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Adding the operator list back in.

On Fri, Sep 28, 2018 at 8:48 AM Lance Bragstad  wrote:

> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
>
> *<service-type>:<resource>:<action>*
>
> or
>
> *<service-type>:<resource>_<action>*
>
> Where <service-type> is the corresponding service type of the project [0],
> and <action> is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>
> I think the plurality of the resource should default to what makes sense
> for the operation being carried out (e.g., list:foobars, create:foobar).
>
> I don't mind the first one because it's clear about what the delimiter is
> and it doesn't look weird when projects have something like:
>
> <service-type>:<resource>:<subresource>:<action>
>
> If folks are ok with this, I can start working on some documentation that
> explains the motivation for this. Afterward, we can figure out how we want
> to track this work.
>
> What color do you want the shed to be?
>
> [0] https://service-types.openstack.org/service-types.json
> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>
> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>
>>
>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
>> wrote:
>>
>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>>> j...@johngarbutt.com> wrote 
>>>  > tl;dr: +1 consistent names
>>>  > I would make the names mirror the API... because the Operator setting
>>> them knows the API, not the code. Ignore the crazy names in Nova, I
>>> certainly hate them.
>>>
>>> Big +1 on consistent naming  which will help operator as well as
>>> developer to maintain those.
>>>
>>>  >
>>>  > Lance Bragstad  wrote:
>>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>>  >
>>>  > My memory of the Nova policy mess...
>>>  > * Nova's policy rules traditionally followed the patterns of the code
>>>  > ** Yes, horrible, but it happened.
>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>>  > * API used to expand with extensions, so the policy name is often
>>>  >   based on extensions
>>>  > ** note most of the extension code has now gone, including lots of
>>>  >    related policies
>>>  > * Policy in code was focused on getting us to a place where we could
>>>  >   rename policy
>>>  > ** Whoop whoop by the way, it feels like we are really close to
>>>  >    something sensible now!
>>>  > Lance Bragstad  wrote:
>>>  > Thoughts on using create, list, update, and delete as opposed to
>>> post, get, put, patch, and delete in the naming convention?
>>>  > I could go either way as I think about "list servers" in the API. But
>>> my preference is for the URL stub and POST, GET, etc.
>>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:
>>>  > If we consider dropping "os", should we entertain dropping "api",
>>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to
>>> simple service types (e.g. "compute" or "loadbalancer").
>>>  > +1 The API is known as "compute" in api-ref, so the policy should be
>>> for "compute", etc.
>>>
>>> Agree on mapping the policy name with api-ref as much as possible. Other
>>> than policy name having 'os-', we have 'os-' in resource name also in nova
>>> API url like /os-agents, /os-aggregates etc (almost every resource except
>>> servers , flavors).  As we cannot get rid of those from API url, we need to
>>> keep the same in policy naming too? or we can have policy name like
>>> compute:agents:create/post but that mismatch from api-ref where agents
>>> resource url is os-agents.
>>>
>>
>> Good question. I think this depends on how the service does policy
>> enforcement.
>>
>> I know we did something like this in keystone, which required policy
>> names and method names to be the same:
>>
>>   "identity:list_users": "..."
>>
>> Because the initial implementation of policy enforcement used a decorator
>> like this:
>>
>>   from keystone import controller
>>
>>   @controller.protected
>>   def list_users(self):
>>       ...
>>
>> Having the policy name the same as the method name made it easier for the
>> decorator implementation to resolve the policy needed to protect the API
>> because it just looked at the name of the wrapped method. The advantage was
>> that it was easy to implement new APIs because you only needed to add a
>> policy, implement the method, and make sure you decorate the implementation.
>>
>> While this worked, we are moving away from it entirely. The decorator
>> implementation was ridiculously complicated. Only a handful of keystone
>> developers understood it. With the addition of system-scope, it would have
>> only become more convoluted. It also enables a much more copy-paste pattern
>> (e.g., so long as I wrap my method with this decorator implementation,
>> things should work right?). Instead, we're calling 

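The name-based enforcement pattern described in that message can be sketched in plain Python. This is a hypothetical toy for illustration only, not keystone's actual implementation; `POLICIES`, `protected`, and the role check are all invented here:

```python
import functools

# Toy policy registry keyed by policy name; a stand-in for oslo.policy
# rules. Invented for illustration.
POLICIES = {"identity:list_users": {"admin"}}

class PolicyNotAuthorized(Exception):
    pass

def protected(func):
    # Resolve the policy name from the wrapped method's name, as the
    # old keystone decorator pattern described above did.
    policy_name = "identity:%s" % func.__name__

    @functools.wraps(func)
    def wrapper(self, context, *args, **kwargs):
        allowed = POLICIES.get(policy_name, set())
        if not allowed & set(context.get("roles", [])):
            raise PolicyNotAuthorized(policy_name)
        return func(self, context, *args, **kwargs)
    return wrapper

class UserController:
    @protected
    def list_users(self, context):
        return ["alice", "bob"]

print(UserController().list_users({"roles": ["admin"]}))  # ['alice', 'bob']
```

The fragility discussed in the message falls out directly: rename the method and you silently change which policy is enforced, and the decorator hides the coupling between token context and resource.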
Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Lance Bragstad
Bumping this thread again and proposing two conventions based on the
discussion here. I propose we decide on one of the two following
conventions:

*<service-type>:<resource>:<action>*

or

*<service-type>:<resource>_<action>*

Where <service-type> is the corresponding service type of the project [0],
and <action> is either create, get, list, update, or delete. I think
decoupling the method from the policy name should aid in consistency,
regardless of the underlying implementation. The HTTP method specifics can
still be relayed using oslo.policy's DocumentedRuleDefault object [1].

I think the plurality of the resource should default to what makes sense
for the operation being carried out (e.g., list:foobars, create:foobar).

I don't mind the first one because it's clear about what the delimiter is
and it doesn't look weird when projects have something like:

<service-type>:<resource>:<subresource>:<action>

If folks are ok with this, I can start working on some documentation that
explains the motivation for this. Afterward, we can figure out how we want
to track this work.

What color do you want the shed to be?

[0] https://service-types.openstack.org/service-types.json
[1]
https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule

On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  wrote:

>
> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
> wrote:
>
>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>> j...@johngarbutt.com> wrote 
>>  > tl;dr: +1 consistent names
>>  > I would make the names mirror the API... because the Operator setting
>> them knows the API, not the code. Ignore the crazy names in Nova, I
>> certainly hate them.
>>
>> Big +1 on consistent naming  which will help operator as well as
>> developer to maintain those.
>>
>>  >
>>  > Lance Bragstad  wrote:
>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>  >
>>  > My memory of the Nova policy mess...
>>  > * Nova's policy rules traditionally followed the patterns of the code
>>  > ** Yes, horrible, but it happened.
>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>  > * API used to expand with extensions, so the policy name is often
>>  >   based on extensions
>>  > ** note most of the extension code has now gone, including lots of
>>  >    related policies
>>  > * Policy in code was focused on getting us to a place where we could
>>  >   rename policy
>>  > ** Whoop whoop by the way, it feels like we are really close to
>>  >    something sensible now!
>>  > Lance Bragstad  wrote:
>>  > Thoughts on using create, list, update, and delete as opposed to post,
>> get, put, patch, and delete in the naming convention?
>>  > I could go either way as I think about "list servers" in the API. But
>> my preference is for the URL stub and POST, GET, etc.
>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad wrote:
>>  > If we consider dropping "os", should we entertain dropping "api",
>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to
>> simple service types (e.g. "compute" or "loadbalancer").
>>  > +1 The API is known as "compute" in api-ref, so the policy should be
>> for "compute", etc.
>>
>> Agree on mapping the policy name with api-ref as much as possible. Other
>> than policy name having 'os-', we have 'os-' in resource name also in nova
>> API url like /os-agents, /os-aggregates etc (almost every resource except
>> servers , flavors).  As we cannot get rid of those from API url, we need to
>> keep the same in policy naming too? or we can have policy name like
>> compute:agents:create/post but that mismatch from api-ref where agents
>> resource url is os-agents.
>>
>
> Good question. I think this depends on how the service does policy
> enforcement.
>
> I know we did something like this in keystone, which required policy names
> and method names to be the same:
>
>   "identity:list_users": "..."
>
> Because the initial implementation of policy enforcement used a decorator
> like this:
>
>   from keystone import controller
>
>   @controller.protected
>   def list_users(self):
>       ...
>
> Having the policy name the same as the method name made it easier for the
> decorator implementation to resolve the policy needed to protect the API
> because it just looked at the name of the wrapped method. The advantage was
> that it was easy to implement new APIs because you only needed to add a
> policy, implement the method, and make sure you decorate the implementation.
>
> While this worked, we are moving away from it entirely. The decorator
> implementation was ridiculously complicated. Only a handful of keystone
> developers understood it. With the addition of system-scope, it would have
> only become more convoluted. It also enables a much more copy-paste pattern
> (e.g., so long as I wrap my method with this decorator implementation,
> things should work right?). Instead, we're calling enforcement within the
> controller implementation to ensure things are easier to understand. It
> requires developers to be cognizant of how different token types affect the
> resources within an API. That said, coupling the 

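To make the first proposed convention concrete, here is a hypothetical helper; the function, example names, and validation are invented for illustration, and in practice the HTTP specifics would be carried by oslo.policy's DocumentedRuleDefault as the message notes:

```python
# Hypothetical helper applying the proposed
# <service-type>:<resource>:<action> convention. Real service types
# would come from service-types.openstack.org; these examples are
# invented.
VALID_ACTIONS = {"create", "get", "list", "update", "delete"}

def policy_name(service_type, resource, action):
    """Compose a policy name, keeping HTTP methods out of the name."""
    if action not in VALID_ACTIONS:
        raise ValueError("action must be one of %s" % sorted(VALID_ACTIONS))
    return "%s:%s:%s" % (service_type, resource, action)

# Plurality follows the operation being carried out.
print(policy_name("compute", "servers", "list"))  # compute:servers:list
print(policy_name("identity", "user", "create"))  # identity:user:create
```

The point of the helper shape is only that the name is mechanical: given a service type and resource, every project would derive the same string, regardless of how enforcement is implemented.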
[openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-09-28 Thread Chris Dent


I'm still trying to figure out how to properly create a "modern" (as
in zuul v3 oriented) integration test for placement using gabbi and
tempest. That work is happening at https://review.openstack.org/#/c/601614/

There was lots of progress made after the last message on this
topic 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
but I've reached another interesting impasse.


From devstack's standpoint, the way to say "I want to use a tempest
plugin" is to set TEMPEST_PLUGINS to a list of where the plugins are.
devstack:lib/tempest then does a:

tox -evenv-tempest -- pip install -c 
$REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163

I have this part working as expected.

However,

The advice is then to create a new job that has a parent of
devstack-tempest. That zuul job runs a variety of tox environments,
depending on the setting of the `tox_envlist` var. If you wish to
use a `tempest_test_regex` (I do), the preferred tox environment is
'all'.

That venv doesn't have the plugin installed, thus no gabbi tests are
found:

http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683

How do I get my plugin installed into the right venv while still
following the guidelines for good zuul behavior?
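For reference, the job being described would look roughly like the sketch below. This is a hypothetical job definition pieced together from the details in this message (the names, paths, and variable values are assumptions), not a working fix for the venv mismatch:

```yaml
# Hypothetical zuul v3 job sketch: parent devstack-tempest, 'all' tox
# env, gabbi tests selected by regex, plugin handed to devstack via
# TEMPEST_PLUGINS. As described above, the plugin install currently
# lands in the venv-tempest env rather than the 'all' env.
- job:
    name: placement-tempest-gabbi
    parent: devstack-tempest
    required-projects:
      - openstack/placement
    vars:
      tox_envlist: all
      tempest_test_regex: gabbi
      devstack_localrc:
        TEMPEST_PLUGINS: /opt/stack/placement
```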

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] OpenStack Summit Forum Submission Process Extended

2018-09-28 Thread Jimmy McArthur

Hello Everyone

We are extending the Forum Submission process through September 30, 
11:59pm Pacific (6:59am GMT).  We've already gotten a ton of great 
submissions, but we want to leave the door open through the weekend in 
case we have any stragglers.


Please submit your topics here: 
https://www.openstack.org/summit/berlin-2018/call-for-presentations


If you'd like to review the submissions to date, you can go to 
https://www.openstack.org/summit/berlin-2018/vote-for-speakers.  There 
is no voting period; this is just so Forum attendees can review the 
submissions to date.


Thank you!
Jimmy





Re: [openstack-dev] [release] Release model for feature-complete OpenStack libraries

2018-09-28 Thread Doug Hellmann
 writes:

> How will we handle which versions of libraries work together?
> And which combinations will be run thru CI?

Dependency management will work the same way it does today.

Each component (server or library) lists the versions of the
dependencies it is compatible with. That information goes into the
packages built for the component, and is used to ensure that a
compatible version of each dependency is installed when the package is
installed.

We control what is actually tested by using the upper constraints list
managed in the requirements repository. There's more detail about how
that list is managed in the project team guide at
https://docs.openstack.org/project-team-guide/dependency-management.html

Doug
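As a concrete illustration of the mechanism (the package name and version numbers below are invented): the component declares the range it is compatible with, while the constraints list pins the exact version CI installs and tests.

```text
# requirements.txt in a component: the compatible range it declares
oslo.config>=5.2.0

# upper-constraints.txt in the requirements repository: the tested pin
oslo.config===6.4.0

# Installing with both yields the combination CI actually exercised:
#   pip install -c upper-constraints.txt -r requirements.txt
```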



[openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried
It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would require
changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."

Bubble wrap was originally intended as a textured wallpaper and a
greenhouse insulator. Can we accept the fact that traits have (many,
many) uses beyond marking capabilities, and quit with the arbitrary
restrictions?



Re: [openstack-dev] [release] Release model for feature-complete OpenStack libraries

2018-09-28 Thread Arkady.Kanevsky
How will we handle which versions of libraries work together?
And which combinations will be run thru CI?

-Original Message-
From: Thierry Carrez  
Sent: Friday, September 28, 2018 7:17 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [release] Release model for feature-complete OpenStack 
libraries




Hi everyone,

In OpenStack, libraries have to be released with a 
cycle-with-intermediary model, so that (1) they can be released early 
and often, (2) services consuming those libraries can take advantage of 
their new features, and (3) we detect integration bugs early rather than 
late. This works well while libraries see lots of changes; however, it is 
a bit heavy-handed for feature-complete, stable libraries: it forces 
those to release multiple times per year even if they have not seen any 
change.

For those, we discussed[1] a number of mechanisms in the past, but at 
the last PTG we came to the conclusion that those were a bit complex 
and did not really address the issue. Here is a simpler proposal.

Once libraries are deemed feature-complete and stable, they should 
be switched to an "independent" release model (like all our third-party 
libraries). Those would see releases purely as needed for the occasional 
corner case bugfix. They won't be released early and often, there is no 
new feature to take advantage of, and new integration bugs should be 
very rare.

This transition should be definitive in most cases. In rare cases where 
a library were to need large feature development work again, we'd have 
two options: develop the new feature in a new library depending on the 
stable one, or grant an exception and switch it back to 
cycle-with-intermediary.

If one of your libraries should already be considered feature-complete 
and stable, please contact the release team to transition them to the 
new release model.

Thanks for reading!

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html

-- 
The Release Team



[openstack-dev] [release] Release model for feature-complete OpenStack libraries

2018-09-28 Thread Thierry Carrez

Hi everyone,

In OpenStack, libraries have to be released with a 
cycle-with-intermediary model, so that (1) they can be released early 
and often, (2) services consuming those libraries can take advantage of 
their new features, and (3) we detect integration bugs early rather than 
late. This works well while libraries see lots of changes; however, it is 
a bit heavy-handed for feature-complete, stable libraries: it forces 
those to release multiple times per year even if they have not seen any 
change.


For those, we discussed[1] a number of mechanisms in the past, but at 
the last PTG we came to the conclusion that those were a bit complex 
and did not really address the issue. Here is a simpler proposal.


Once libraries are deemed feature-complete and stable, they should 
be switched to an "independent" release model (like all our third-party 
libraries). Those would see releases purely as needed for the occasional 
corner case bugfix. They won't be released early and often, there is no 
new feature to take advantage of, and new integration bugs should be 
very rare.


This transition should be definitive in most cases. In rare cases where 
a library were to need large feature development work again, we'd have 
two options: develop the new feature in a new library depending on the 
stable one, or grant an exception and switch it back to 
cycle-with-intermediary.


If one of your libraries should already be considered feature-complete 
and stable, please contact the release team to transition them to the 
new release model.


Thanks for reading!

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html

--
The Release Team



Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-28 Thread Josephine Seifert
Hi,

On 28.09.2018 at 13:51, Erlon Cruz wrote:
> I don't know if our workflow supports this, but it would be nice to
> have a place for cross-project changes like that (something like
> openstack-cross-projects-specs), and use that as an initial point for
> high-level discussions. But for now, you can start creating specs for
> the projects involved.
There was a repository for cross-project-specs, but it is deprecated:
https://github.com/openstack/openstack-specs

So we are currently writing specs for each involved project, as suggested.
You are right, it would be nice to discuss this topic with people from
all involved projects together.
> When you do so, please bring the topic to the project weekly
> meetings[1][2][3]
We actually started with bringing this up in the Glance meeting yesterday.
And of course, we would like to discuss our specs in the project
meetings. :)

Best regards,
Josephine (Luzi)




Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-28 Thread Markus Hentsch
Hello Julia,

we will begin formulating an individual spec for each project accordingly.

Regarding your question: as you already assumed correctly, the code
necessary to handle image decryption is driver specific in our current
design as it is very close to the point where the ephemeral storage disk
is initialized.

Our proposed goal of direct decryption streaming makes it hard to design
this in a generic fashion since we can't simply place the decrypted
image somewhere temporarily in a generic place and then take it as a
base for a driver specific next step, since that'd expose the image data.

Best regards,
Markus

Julia Kreger wrote:
> Greetings!
> 
> I suspect the avenue of at least three different specs is likely going
> to be the best path forward and likely what will be required for each
> project to fully understand how/what/why. From my point of view, I'm
> quite interested in this from a Nova point of view because that is the
> initial user interaction point for majority of activities. I'm also
> wondering if this is virt driver specific, or if it can be applied to
> multiple virt drivers in the nova tree, since each virt driver has
> varying constraints. So maybe the best path forward is something nova
> centric to start?
> 
> -Julia
> 
> On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch
>  wrote:
>>
>> Dear OpenStack developers,
>>
>> we would like to propose the introduction of an encrypted image format
>> in OpenStack. We already created a basic implementation involving Nova,
>> Cinder, OSC and Glance, which we'd like to contribute.
>>
>> We originally created a full spec document but since the official
>> cross-project contribution workflow in OpenStack is a thing of the past,
>> we have no single repository to upload it to. Thus, the Glance team
>> advised us to post this on the mailing list [1].
>>
>> Ironically, Glance is the least affected project since the image
>> transformation processes affected are taking place elsewhere (Nova and
>> Cinder mostly).
>>
>> Below you'll find the most important parts of our spec that describe our
>> proposal - which our current implementation is based on. We'd love to
>> hear your feedback on the topic and would like to encourage all affected
>> projects to join the discussion.
>>
>> Subsequently, we'd like to receive further instructions on how we may
>> contribute to all of the affected projects in the most effective and
>> collaborative way possible. The Glance team suggested starting with a
>> complete spec in the glance-specs repository, followed by individual
>> specs/blueprints for the remaining projects [1]. Would that be alright
>> for the other teams?
>>
>> [1]
>> http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html
>>
>> Best regards,
>> Markus Hentsch
>>
> [trim]
> 

-- 
**
*Markus Hentsch*
Head of Cloud Innovation

CLOUD

*CLOUD & HEAT Technologies GmbH*
Königsbrücker Str. 96 (Halle 15) | 01099 Dresden
Tel: +49 351 479 3670 - 100
Fax: +49 351 479 3670 - 110
E-Mail: markus.hent...@cloudandheat.com

Web: https://www.cloudandheat.com


Handelsregister: Amtsgericht Dresden
Registernummer: HRB 30549
USt.-Ident.-Nr.: DE281093504
Geschäftsführer: Nicolas Röhrs




Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-28 Thread Jeremy Stanley
On 2018-09-28 08:51:46 -0300 (-0300), Erlon Cruz wrote:
> I don't know if our workflow supports this, but it would be nice
> to have a place for cross-project changes like that (something
> like openstack-cross-projects-specs), and use that as an initial
> point for high-level discussions. But for now, you can start
> creating specs for the projects involved.
[...]

If memory serves, the biggest challenge around that solution was
determining who approves such proposals since they still need
per-project specs for the project-specific details anyway. Perhaps
someone who has recently worked on a feature which required
coordination between several teams (but not a majority of teams like
our cycle goals process addresses) can comment on what worked for
them and what improvements they would make on the process they
followed.
-- 
Jeremy Stanley




Re: [openstack-dev] [nova][cinder][glance][osc][sdk] Image Encryption for OpenStack (proposal)

2018-09-28 Thread Erlon Cruz
I don't know if our workflow supports this, but it would be nice to have
a place for cross-project changes like that (something like
openstack-cross-projects-specs), and use that as an initial point for
high-level discussions. But for now, you can start creating specs for
the projects involved.

When you do so, please bring the topic to the project weekly
meetings[1][2][3] so you can get
some attention and feedback.

Erlon
___
[1] https://wiki.openstack.org/wiki/Meetings/Glance
[2] https://wiki.openstack.org/wiki/Meetings/Nova
[3] https://wiki.openstack.org/wiki/CinderMeetings



On Thu, Sep 27, 2018 at 10:51 PM, hao wang wrote:

> +1 to Julia's suggestion, Cinder should also have a spec to discuss
> the detail about how to implement the creation of volume from an
> encrypted image.
> Julia Kreger wrote on Fri, Sep 28, 2018 at 9:39 AM:
> >
> > Greetings!
> >
> > I suspect the avenue of at least three different specs is likely going
> > to be the best path forward and likely what will be required for each
> > project to fully understand how/what/why. From my point of view, I'm
> > quite interested in this from a Nova point of view because that is the
> > initial user interaction point for majority of activities. I'm also
> > wondering if this is virt driver specific, or if it can be applied to
> > multiple virt drivers in the nova tree, since each virt driver has
> > varying constraints. So maybe the best path forward is something nova
> > centric to start?
> >
> > -Julia
> >
> > On Thu, Sep 27, 2018 at 10:36 AM Markus Hentsch
> >  wrote:
> > >
> > > Dear OpenStack developers,
> > >
> > > we would like to propose the introduction of an encrypted image format
> > > in OpenStack. We already created a basic implementation involving Nova,
> > > Cinder, OSC and Glance, which we'd like to contribute.
> > >
> > > We originally created a full spec document but since the official
> > > cross-project contribution workflow in OpenStack is a thing of the
> past,
> > > we have no single repository to upload it to. Thus, the Glance team
> > > advised us to post this on the mailing list [1].
> > >
> > > Ironically, Glance is the least affected project since the image
> > > transformation processes affected are taking place elsewhere (Nova and
> > > Cinder mostly).
> > >
> > > Below you'll find the most important parts of our spec that describe
> our
> > > proposal - which our current implementation is based on. We'd love to
> > > hear your feedback on the topic and would like to encourage all
> affected
> > > projects to join the discussion.
> > >
> > > Subsequently, we'd like to receive further instructions on how we may
> > > contribute to all of the affected projects in the most effective and
> > > collaborative way possible. The Glance team suggested starting with a
> > > complete spec in the glance-specs repository, followed by individual
> > > specs/blueprints for the remaining projects [1]. Would that be alright
> > > for the other teams?
> > >
> > > [1]
> > >
> http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-09-27-14.00.log.html
> > >
> > > Best regards,
> > > Markus Hentsch
> > >
> > [trim]
> >
> >


Re: [openstack-dev] [ironic][edge] Notes from the PTG

2018-09-28 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi Jim,

Thanks for sharing your notes.

One note about the jump to the autonomous control plane requirement.
This requirement was already identified during the Dublin PTG workshop
[1]. It is needed for two reasons: the edge cloud instance should stay
operational even if there is a network break towards other edge cloud
instances, and the edge cloud instance should work together with other
edge cloud instances running other versions of the control plane. In
Denver we decided to leave these requirements out of the MVP
architecture discussions.

Br,
Gerg0

[1]: 
https://wiki.openstack.org/w/index.php?title=OpenStack_Edge_Discussions_Dublin_PTG



From: Jim Rollenhagen <j...@jimrollenhagen.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Wednesday, September 19, 2018 at 10:49 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [ironic][edge] Notes from the PTG

I wrote up some notes from my perspective at the PTG for some internal teams 
and figured I may as well share them here. They're primarily from the ironic 
and edge WG rooms. Fairly raw, very long, but hopefully useful to someone. 
Enjoy.

Tuesday: edge

Edge WG (IMHO) has historically just talked about use cases, hand-waved a bit, 
and jumped to requiring an autonomous control plane per edge site - thus 
spending all of their time talking about how they will make glance and keystone 
sync data between control planes.

penick described roughly what we do with keystone/athenz and how that can be 
used in a federated keystone deployment to provide autonomy for any control 
plane, but also a single view via a global keystone.

penick and I both kept pushing for people to define a real architecture, and we 
ended up with 10-15 people huddled around an easel for most of the afternoon. 
Of note:

- Windriver (and others?) refuse to budge on the many control plane thing
- This means that they will need some orchestration tooling up top in the 
main DC / client machines to even come close to reasonably managing all of 
these sites
- They will probably need some syncing tooling
- glance->glance isn’t a thing, no matter how many people say it is.
- Glance PTL recommends syncing metadata outside of glance process, and a 
global(ly distributed?) glance backend.
- We also defined the single pane of glass architecture that Oath plans to 
deploy
- Okay with losing connectivity from central control plane to single edge 
site
- Each edge site is a cell
- Each far edge site is just compute nodes
- Still may want to consider image distribution to edge sites so we don’t 
have to go back to main DC?
- Keystone can be distributed the same as first architecture
- Nova folks may start investigating putting API hosts at the cell level to 
get the best of both worlds - if there’s a network partition, can still talk to 
cell API to manage things
- Need to think about removing the need for rabbitmq between edge and far 
edge
- Kafka was suggested in the edge room for oslo.messaging in general
- Etcd watchers may be another option for an o.msg driver
- Other options are more invasive into nova - they involve changing 
how nova-compute talks to conductor (etcd, etc.) or even putting REST APIs in 
nova-compute (and nova-conductor?)
- Neutron is going to work on an OVS “superagent” - superagent does the 
RPC handling, talks some other way to child agents. Intended to scale to 
thousands of children. Primary use case is smart nics but seems like a win for 
the edge case as well.

penick took an action item to draw up the architecture diagrams in a digestable 
format.

Wednesday: ironic things

Started with a retrospective. See 
https://etherpad.openstack.org/p/ironic-stein-ptg-retrospective for the notes - 
there weren't many surprising things here. We did discuss trying to target some 
quick wins for the beginning of the cycle, so that we didn’t have all of our 
features trying to land at the end. Using wsgi with the ironic-api was 
mentioned as a potential regression, but we agreed it’s a config/documentation 
issue. I took an action to make a task to document this better.

Next we quickly reviewed our vision doc, and people didn’t have much to say 
about it.

Metalsmith: it’s a thing, it’s being included into the ironic project. Dmitry 
is open to optionally supporting placement. Multiple instances will be a 
feature in the future. Otherwise mostly feature complete, goal is to keep it 
simple.

Networking-ansible: redhat building tooling that integrates with upstream 
ansible modules for networking gear. Kind of an alternative to n-g-s. Not 
really much on plans here, RH just wanted to introduce it to the community. 
Some 

Re: [openstack-dev] [horizon][plugins] npm jobs fail due to new XStatic-jQuery release (was: Horizon gates are broken)

2018-09-28 Thread Shu M.
Hi Ivan,

Thank you for your help with our plugins, and sorry for bothering you.
I found a problem with installing horizon in "post-install": we should
install horizon with upper-constraints.txt in "post-install".
I proposed a patch [1] in zun-ui, please check it. If we can merge this, I
will extend it to the other remaining plugins.

[1] https://review.openstack.org/#/c/606010/
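
In case it helps other plugin maintainers, the shape of the fix is roughly the
following (a hypothetical sketch only, not the content of [1] - the exact tox
section, install command and constraints URL vary per plugin and per branch):

```ini
# tox.ini sketch: make the "post-install" step pull horizon from git
# while honouring upper-constraints.txt, so transitive dependencies
# such as the XStatic-* packages stay at their pinned versions
# instead of picking up the latest release.
[testenv]
commands =
    pip install -c https://releases.openstack.org/constraints/upper/master \
        -e "git+https://opendev.org/openstack/horizon#egg=horizon"
```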

Thanks,
Shu Muto

2018年9月28日(金) 3:34 Ivan Kolodyazhny :

> Hi,
>
> Unfortunately, this issue affects some of the plugins too :(. At least the
> gates for magnum-ui, senlin-dashboard, zaqar-ui and zun-ui are broken
> now. I'm working with the project teams to fix it asap. Let's see if [5]
> helps for senlin-dashboard and then fix the rest of the plugins.
>
>
> [5] https://review.openstack.org/#/c/605826/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
>
> On Wed, Sep 26, 2018 at 4:50 PM Ivan Kolodyazhny  wrote:
>
>> Hi all,
>>
>> Patch [1] is merged and our gates are unblocked now. I went through the
>> review list and posted 'recheck' where it was needed.
>>
>> We need to cherry-pick this fix to stable releases too. I'll do it asap
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>> On Mon, Sep 24, 2018 at 11:18 AM Ivan Kolodyazhny  wrote:
>>
>>> Hi team,
>>>
>>> Unfortunately, horizon gates are broken now. We can't merge any patch
>>> due to the -1 from CI.
>>> I don't want to disable tests now, that's why I proposed a fix [1].
>>>
>>> Some XStatic-* packages were released last week. At least the new
>>> XStatic-jQuery [2] breaks horizon [3]. I'm working on a new job for
>>> requirements repo [4] to prevent such issues in the future.
>>>
>>> Please do not try 'recheck' until [1] is merged.
>>>
>>> [1] https://review.openstack.org/#/c/604611/
>>> [2] https://pypi.org/project/XStatic-jQuery/#history
>>> [3] https://bugs.launchpad.net/horizon/+bug/1794028
>>> [4] https://review.openstack.org/#/c/604613/
>>>
>>> Regards,
>>> Ivan Kolodyazhny,
>>> http://blog.e0ne.info/
>>>


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-09-28 Thread Sylvain Bauza
On Fri, Sep 28, 2018 at 12:50 AM melanie witt  wrote:

> On Thu, 27 Sep 2018 17:23:26 -0500, Matt Riedemann wrote:
> > On 9/27/2018 3:02 PM, Jay Pipes wrote:
> >> A great example of this would be the proposed "deploy template" from
> >> [2]. This is nothing more than abusing the placement traits API in order
> >> to allow passthrough of instance configuration data from the nova flavor
> >> extra spec directly into the nodes.instance_info field in the Ironic
> >> database. It's a hack that is abusing the entire concept of the
> >> placement traits concept, IMHO.
> >>
> >> We should have a way *in Nova* of allowing instance configuration
> >> key/value information to be passed through to the virt driver's spawn()
> >> method, much the same way we provide for user_data that gets exposed
> >> after boot to the guest instance via configdrive or the metadata service
> >> API. What this deploy template thing is is just a hack to get around the
> >> fact that nova doesn't have a basic way of passing through some collated
> >> instance configuration key/value information, which is a darn shame and
> >> I'm really kind of annoyed with myself for not noticing this sooner. :(
> >
> > We talked about this in Dublin through right? We said a good thing to do
> > would be to have some kind of template/profile/config/whatever stored
> > off in glare where schema could be registered on that thing, and then
> > you pass a handle (ID reference) to that to nova when creating the
> > (baremetal) server, nova pulls it down from glare and hands it off to
> > the virt driver. It's just that no one is doing that work.
>
> If I understood correctly, that discussion was around adding a way to
> pass a desired hardware configuration to nova when booting an ironic
> instance. And that it's something that isn't yet possible to do using
> the existing ComputeCapabilitiesFilter. Someone please correct me if I'm
> wrong there.
>
> That said, I still don't understand why we are talking about deprecating
> the ComputeCapabilitiesFilter if there's no supported way to replace it
> yet. If boolean traits are not enough to replace it, then we need to
> hold off on deprecating it, right? Would the
> template/profile/config/whatever in glare approach replace what the
> ComputeCapabilitiesFilter is doing or no? Sorry, I'm just not clearly
> understanding this yet.
>
>
I just feel some new traits have to be defined, like Jay said, and some
work has to be done on the Ironic side to make sure they are exposed as
traits and not via the old way.
That leaves one question, though: does Ironic support custom capabilities? If
so, that leads to Jay's point about key/value information that's not
intended for traits. If we all agree that traits shouldn't be
allowed to carry key/value pairs, could we somehow imagine Ironic changing the
customization mechanism to be boolean only?

Also, I'm a bit confused about whether operators make use of Ironic capabilities
for fancy operational queries, like the ones we have in
https://github.com/openstack/nova/blob/3716752/nova/scheduler/filters/extra_specs_ops.py#L24-L35
and whether Ironic correctly documents how to put such things into traits (e.g.
CUSTOM_I_HAVE_MORE_THAN_2_GPUS).
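
To make the "fancy operational queries" concrete, here is a rough, hypothetical
sketch of the operator matching that the linked extra_specs_ops.py implements -
heavily simplified, not the real nova code, which supports more operators
(s!=, s>=, <or>, <all-in>, ...) - and which a plain boolean trait cannot
express directly:

```python
# Simplified sketch of extra-spec operator matching, in the style of
# nova's ComputeCapabilitiesFilter. A flavor extra spec carries a
# requirement string like '>= 2' or 's== ssd', matched against the
# capability value the node reports.

def match(capability, requirement):
    """Match a reported capability string against a requirement string."""
    parts = requirement.split(None, 1)
    if len(parts) == 2 and parts[0] in ('>=', '<=', '=='):
        # Numeric comparison; raises ValueError for non-numeric input.
        op, value = parts
        cap, val = float(capability), float(value)
        return {'>=': cap >= val, '<=': cap <= val, '==': cap == val}[op]
    if len(parts) == 2 and parts[0] == 's==':
        # Explicit string equality.
        return capability == parts[1]
    if len(parts) == 2 and parts[0] == '<in>':
        # Substring containment.
        return parts[1] in capability
    # No recognized operator: plain string equality.
    return capability == requirement

print(match('3', '>= 2'))        # "more than 2 GPUs" style query -> True
print(match('ssd', 's== ssd'))   # -> True
```

This is the expressiveness gap being discussed: `>= 2` has to collapse into
a pre-computed boolean trait like CUSTOM_I_HAVE_MORE_THAN_2_GPUS.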

All of the above makes me a bit worried by a possible
ComputeCapabilitiesFilter deprecation, if we aren't yet able to provide a
clear upgrade path for our users.

-Sylvain

-melanie
>


Re: [openstack-dev] [oslo][castellan] Time for a 1.0 release?

2018-09-28 Thread Thierry Carrez

Ade Lee wrote:

On Tue, 2018-09-25 at 16:30 -0500, Ben Nemec wrote:

Doug pointed out on a recent Oslo release review that castellan is
still
not officially 1.0. Given the age of the project and the fact that
we're
asking people to deploy a Castellan-compatible keystore as one of
the
base services, it's probably time to address that.

To that end, I'm sending this to see if anyone is aware of any
reasons
we shouldn't go ahead and tag a 1.0 of Castellan.



+ 1


+1
Propose it and we can continue the discussion on the review :)

--
Thierry Carrez (ttx)



[openstack-dev] [rally] How is the docker image of rally-openstack managed?

2018-09-28 Thread Jae Sang Lee
Hi guys,


Last week I posted a commit to rally-openstack
(c8272e8591f812ced9c2f7ebdad6abca5c160dbf) to make the rally docker image
work with mysql and postgres.
On the docker hub, the latest tag was pushed 3 months ago and is no longer
being updated.
I would like to use the official rally-openstack docker image with mysql
support in openstack-helm rally.
How is the xrally-openstack docker image managed?


Thanks.

Jaesang


Re: [openstack-dev] [placement] Tetsuro Nakamura now core

2018-09-28 Thread TETSURO NAKAMURA

Hi all,

Thank you for putting your trust in me.
It's my pleasure to work with you and to support the community.

Thanks!

On 2018/09/27 18:47, Chris Dent wrote:


Since there were no objections and a week has passed, I've made
Tetsuro a member of placement-core.

Thanks for your willingness and continued help. Use your powers
wisely.






--
Tetsuro Nakamura 
NTT Network Service Systems Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International)
3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan





Re: [openstack-dev] [cinder][puppet][kolla][helm][ansible] Change in Cinder backup driver naming

2018-09-28 Thread Tobias Urdin

Thanks Sean!

I did a quick sanity check on the backup part of the puppet-cinder
module, and there is no opinionated default value there which needs to be
changed.

Best regards

On 09/27/2018 08:37 PM, Sean McGinnis wrote:

This probably applies to all deployment tools, so hopefully this reaches the
right folks.

In Havana, Cinder deprecated the use of specifying the module for configuring
backup drivers. Patch https://review.openstack.org/#/c/595372/ finally removed
the backwards compatibility handling for configs that still used the old way.

From a quick search, it appears there may be some tools that are
still setting the backup driver name using the old module path. If your
project does not specify the full driver class path, please update it to do
so now.
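
For reference, the change in cinder.conf looks like this (the swift backup
driver is shown as the example; the other backup drivers follow the same
class-path pattern):

```ini
[DEFAULT]
# Old, now-removed form: module path only
#backup_driver = cinder.backup.drivers.swift
# Required form: full driver class path
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
```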

Any questions, please reach out here or in the #openstack-cinder channel.

Thanks!
Sean





