Re: [openstack-dev] [keystone] Adding foreign keys between subsystems

2017-04-12 Thread Lance Bragstad
On Wed, Apr 12, 2017 at 9:28 AM, David Stanek  wrote:

> [tl;dr I want to remove the artificial restriction of not allowing FKs
> between
> subsystems and I want to stop FK enforcement in code.]
>
> The keystone code architecture is pretty simple. The data and
> functionality are
> divided up into subsystems. Each subsystem can be configured to use a
> different
> backend datastore. Of course, there are always exceptions to the rule like
> how
> the federation and identity subsystems are highly coupled in the data
> model.
>
> On the surface this flexible model sounds good, but there are some
> interesting
> consequences. First, you can't tell from looking at the data model that
> there
> actually is a lot of coupling between the subsystems. So instead of
> looking at
> our sqlalchemy models to see relationships, we must look throughout the
> code
> for where a particular primary key is used and look for enforcement.
> (Hopefully
> we enforce it in all of the right places.) Additionally, a DBA/data
> architect/
> whenever can't see the relationship at all by looking at the database.
>
> Second, this has polluted our code and causes erroneous API errors. We
> have added
> lots of get_*() calls in our code that check for the existence of IDs in
> another subsystem. In some cases we probably don't do the check and thus
> would
> allow bad data to be stored. The check often causes 404s instead of 400s
> when
> bad data is provided.
>

Having these cleaned up would be awesome. I imagine we'd also see some sort
of performance benefit as a result, too.
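To make the trade-off concrete, here is a minimal sketch (hypothetical, simplified schema, using SQLite purely for illustration) of what the thread describes: without an FK, keystone must check cross-subsystem references in code with a get_*() call; with an FK, the datastore rejects orphaned rows itself and the relationship is visible to a DBA in the schema.

```python
import sqlite3

# Hypothetical, simplified two-subsystem schema. With a real FK the
# database enforces the relationship; no in-code get_project() needed.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in per connection
conn.execute("CREATE TABLE project (id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE assignment (
        id TEXT PRIMARY KEY,
        project_id TEXT NOT NULL REFERENCES project(id)
    )
""")
conn.execute("INSERT INTO project VALUES ('p1')")
conn.execute("INSERT INTO assignment VALUES ('a1', 'p1')")  # valid reference

try:
    # Bad data that an in-code existence check could miss if the check
    # isn't enforced in all of the right places:
    conn.execute("INSERT INTO assignment VALUES ('a2', 'missing')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # e.g. FOREIGN KEY constraint failed
```

Note this is exactly the enforcement a deployer's own datastore would have to provide if they split highly coupled subsystems across backends.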


>
> So I'd like us to be more deliberate in defining which parts of the data
> model
> are truly independent and a separate backend datastore would make sense.
> For
> instance, we know we want to support LDAP for identity (although this
> still puts
> identity info into a SQL database) and catalog is very isolated from the
> rest of
> the data model.
>
> As a side effect this means that if deployers wished to use a separate
> backend
> for something they would need to also implement it for the other highly
> coupled
> subsystems. They would also have to provide any FK enforcement that their
> own
> datastore does not provide.
>

So for deployers, this would mean that if today they deploy keystone backed
by LDAP for *only* identity, tomorrow they will have to ensure that LDAP
has all the proper things for other subsystems that used to have an in-code
constraint with identity (i.e. assignment). I wonder how many folks that
would affect? What would an upgrade look like for deployments wishing to
stick with LDAP? I imagine we'd be raising the bar for that particular
upgrade.


>
> Thoughts?
>
> --
> david stanek
> web: https://dstanek.com
> twitter: https://twitter.com/dstanek
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [keystone] policy meeting 2017-4-12

2017-04-12 Thread Lance Bragstad
Just a reminder that we will be having the policy meeting in 45 minutes in
#openstack-meeting-cp [0]. It was cancelled last week due to tight
schedules.

See you there!


[0] https://etherpad.openstack.org/p/keystone-policy-meeting


[openstack-dev] [keystone] pike-1 release

2017-04-12 Thread Lance Bragstad
I've proposed keystone's pike-1 release [0]. If there is anything that we
need to wait on for pike-1 that hasn't merged yet, please let me know at
your earliest convenience.


[0] https://review.openstack.org/#/c/456319/


[openstack-dev] [keystone][horizon] weekly meeting

2017-04-13 Thread Lance Bragstad
Happy Thursday folks,

Rob and I have noticed that the weekly attendance for the Keystone/Horizon
[0] meeting has dropped significantly in the last month or two. We
contemplated changing the frequency of this meeting to be monthly instead
of weekly. We still think it is important to have a sync point between the
two projects, but maybe it doesn't need to be as often as we were expecting.

Does anyone have any objections to making this a monthly meeting?

Does anyone have a preference on the week or day of the month (i.e. 3rd
Thursday of the month)?

Once we have consensus on a time, I'll submit a patch for the meeting
agenda.

Thanks and have a great weekend!

[0] http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting


Re: [openstack-dev] [keystone][horizon] weekly meeting

2017-04-20 Thread Lance Bragstad
I wonder if the meeting tooling supports a monthly cadence?

On Thu, Apr 20, 2017 at 2:42 PM, Rob Cresswell  wrote:

> It's been a week since the original email; I think we should scale back to
> a monthly sync up. No preference on which week of the month it falls in.
> Thanks!
>
> Rob
>
> On 13 April 2017 at 22:03, Lance Bragstad  wrote:
>
>> Happy Thursday folks,
>>
>> Rob and I have noticed that the weekly attendance for the
>> Keystone/Horizon [0] meeting has dropped significantly in the last month or
>> two. We contemplated changing the frequency of this meeting to be monthly
>> instead of weekly. We still think it is important to have a sync point
>> between the two projects, but maybe it doesn't need to be as often as we
>> were expecting.
>>
>> Does anyone have any objections to making this a monthly meeting?
>>
>> Does anyone have a preference on the week or day of the month (i.e. 3rd
>> Thursday of the month)?
>>
>> Once we have consensus on a time, I'll submit a patch for the meeting
>> agenda.
>>
>> Thanks and have a great weekend!
>>
>> [0] http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting
>>
>
>


[openstack-dev] [keystone] mascot v2.0

2017-04-24 Thread Lance Bragstad
Based on some feedback on the original mascot, the Foundation passed along
another revision that incorporates a keyhole into the turtle shell. There
are two versions [0] [1]. We can choose to adopt one of the new formats, or
stick with the one we already have.

I have it on our agenda for tomorrow's meeting.

Thanks!


[0]
https://drive.google.com/open?id=0B5G9bO9bw3ObeHk4RG1MS1Zfak16cDdtWjlqUlBlRDRQTUZn
[1]
https://drive.google.com/open?id=0B5G9bO9bw3ObRTdEV041Y0lfb1pmNV9QZWlBOTkzOGNOMnNN


Re: [openstack-dev] [nova][oslo.utils] Bug-1680130 Check validation of UUID length

2017-04-24 Thread Lance Bragstad
We had to do similar things in keystone in order to validate uuid-ish types
(just not as fancy) [0] [1]. If we didn't have to worry about being
backwards compatible with non-uuid formats, it would be awesome to have one
implementation for checking that.

[0]
https://github.com/openstack/keystone/blob/6c6589d2b0f308cb788b37b29ebde515304ee41e/keystone/identity/schema.py#L69
[1]
https://github.com/openstack/keystone/blob/6c6589d2b0f308cb788b37b29ebde515304ee41e/keystone/common/validation/parameter_types.py#L38-L45

On Mon, Apr 24, 2017 at 1:05 PM, Matt Riedemann  wrote:

> On 4/24/2017 12:58 PM, Sean Dague wrote:
>
>>
>> Which uses is_uuid_like to do the validation -
>> https://github.com/openstack/nova/blob/1106477b78c80743e6443abc30911b24a9ab7b15/nova/api/validation/validators.py#L85-L87
>>
>> We assumed (as did many others) that is_uuid_like was strict enough for
>> param validation. It is apparently not.
>>
>> Either it needs to be fixed to be so, or some other function needs to be
>> created that is, that people can cut over to.
>>
>> -Sean
>>
>>
> Well kiss my grits. I had always assumed that was built into jsonschema.
>
> --
>
> Thanks,
>
> Matt
>
>


[openstack-dev] [keystone] forum session etherpads

2017-04-26 Thread Lance Bragstad
Hi all,

I've created the etherpads for our sessions and linked them to the wiki
[0]. I've bootstrapped them with basic content and they are ready to be
bookmarked!

If you'd like to help flesh out the agendas for any of those sessions, just
ping me.

Thanks!


[0] https://wiki.openstack.org/wiki/Forum/Boston2017


Re: [openstack-dev] [keystone] mascot v2.0

2017-04-26 Thread Lance Bragstad
In yesterday's meeting we decided to let this sit for a week so that folks
could post their feedback here. I just got an email from the foundation
asking for feedback since they'd like to have it before the deadline for
ordering stickers for the Forum, which is tomorrow.

As a result, I'm going to bump up the timeline for this and add Heidi to
the thread. That way she is aware of any feedback we want to give. If we
don't have any feedback by tomorrow, we will default to the mascot we
already have.

Thanks!

On Mon, Apr 24, 2017 at 9:13 AM, Lance Bragstad  wrote:

> Based on some feedback of the original mascot, the Foundation passed along
> another revision that incorporates a keyhole into the turtle shell. There
> are two versions [0] [1]. We can choose to adopt one of the new formats, or
> stick with the one we already have.
>
> I have it on our agenda for tomorrow's meeting.
>
> Thanks!
>
>
> [0] https://drive.google.com/open?id=0B5G9bO9bw3ObeHk4RG1MS1Zfak16cDdtWjlqUlBlRDRQTUZn
> [1] https://drive.google.com/open?id=0B5G9bO9bw3ObRTdEV041Y0lfb1pmNV9QZWlBOTkzOGNOMnNN
>


[openstack-dev] [keystone] No meeting next week (2017-05-09)

2017-05-02 Thread Lance Bragstad
Just a reminder that we won't have a meeting next week since it will be the
week of the Forum in Boston.

Our next meeting will be on May 16th. See you then!


[openstack-dev] [keystone] Colleen Murphy for core

2017-05-02 Thread Lance Bragstad
Hey folks,

During today's keystone meeting we added another member to keystone's core
team. For several releases, Colleen's had a profound impact on keystone.
Her reviews are meticulous and of incredible quality. She has no hesitation
to jump into keystone's most confusing realms and as a result has become an
expert on several identity topics like federation and LDAP integration.

I'd like to thank Colleen for all her hard work and upholding the stability
and usability of the project.


Congratulations, Colleen!


[openstack-dev] [keystone][forum] BM/VM session conflict with project workshop

2017-05-03 Thread Lance Bragstad
Looking through the schedule of keystone-tagged sessions, it appears we
have a conflict between one of the BM/VM sessions [0] and keystone's
project on-boarding session [1].

I wouldn't be opposed to shuffling, but I assume it's too late for that? If
we can get a good idea of who is going to show up for the project
on-boarding session and set a schedule, we might just be able to start it
at 11:45 AM instead of 11:00. This would avoid the conflict without
requiring late shuffling of sessions. It might also allow the team scheduled before
keystone to run over in their on-boarding session if needed.

I'd be fine doing that if we can get a rough idea of attendance, what
people want information on, and plan accordingly.

Let's use this thread to air some of that out. Thoughts?


Thanks,

Lance

[0]
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18794/operating-the-vm-and-baremetal-platform-22
[1]
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18712/keystone-project-onboarding


[openstack-dev] [keystone] No policy meeting next week (2017-05-10)

2017-05-03 Thread Lance Bragstad
Next week is the Forum, so we'll forego the policy meeting in favor of
some face-to-face discussions.

Let's pick back up with policy recaps on the 17th of May.

Thanks,


Lance


Re: [openstack-dev] [keystone][horizon] weekly meeting

2017-05-04 Thread Lance Bragstad
I've proposed a patch to update the weekly meeting schedule [0].


[0] https://review.openstack.org/#/c/462569/

On Thu, Apr 20, 2017 at 2:49 PM, Steve Martinelli 
wrote:

> As someone who helped orchestrate the weekly sync-ups, I'll chime in. I
> always intended for these meetings to end once we accomplished most of the
> goals [1] we identified last summit. With most of the goals accomplished,
> scaling back or ending them entirely seems appropriate. We can always start
> them up again if our backlog grows again.
>
> [1] https://etherpad.openstack.org/p/ocata-keystone-horizon
>
> On Thu, Apr 20, 2017 at 3:46 PM, Lance Bragstad 
> wrote:
>
>> I wonder if the meeting tooling supports a monthly cadence?
>>
>> On Thu, Apr 20, 2017 at 2:42 PM, Rob Cresswell <
>> robert.cressw...@outlook.com> wrote:
>>
>>> It's been a week since the original email; I think we should scale back
>>> to a monthly sync up. No preference on which week of the month it falls in.
>>> Thanks!
>>>
>>> Rob
>>>
>>> On 13 April 2017 at 22:03, Lance Bragstad  wrote:
>>>
>>>> Happy Thursday folks,
>>>>
>>>> Rob and I have noticed that the weekly attendance for the
>>>> Keystone/Horizon [0] meeting has dropped significantly in the last month or
>>>> two. We contemplated changing the frequency of this meeting to be monthly
>>>> instead of weekly. We still think it is important to have a sync point
>>>> between the two projects, but maybe it doesn't need to be as often as we
>>>> were expecting.
>>>>
>>>> Does anyone have any objections to making this a monthly meeting?
>>>>
>>>> Does anyone have a preference on the week or day of the month (i.e. 3rd
>>>> Thursday of the month)?
>>>>
>>>> Once we have consensus on a time, I'll submit a patch for the meeting
>>>> agenda.
>>>>
>>>> Thanks and have a great weekend!
>>>>
>>>> [0] http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting
>>>>
>>>
>>>


[openstack-dev] [keystone][nova][policy] policy goals and roadmap

2017-05-04 Thread Lance Bragstad
Hi all,

I spent some time today summarizing a discussion [0] about global roles. I
figured it would help build some context for next week as there are a
couple cross project policy/RBAC sessions at the Forum.

The first patch is a very general document trying to nail down our policy
goals [1]. The second is a proposed roadmap (given the existing patches and
direction) of how we can mitigate several of the security issues we face
today with policy across OpenStack [2].

Feel free to poke holes as it will hopefully lead to productive discussions
next week.

Thanks!


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-05-04.log.html#t2017-05-04T15:00:41
[1] https://review.openstack.org/#/c/460344/7
[2] https://review.openstack.org/#/c/462733/3


Re: [openstack-dev] [all][ptl][goals] Community goals for Queens

2017-05-06 Thread Lance Bragstad
For scheduling purposes, here is a link to the session [0].

[0]
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18732/queens-goals

On Sat, May 6, 2017 at 5:36 PM, Matt Riedemann  wrote:

> On 5/5/2017 8:23 PM, Sean Dague wrote:
>
>> On 05/05/2017 05:09 PM, Matt Riedemann wrote:
>> 
>>
>>> This time is tough given how front-loaded the sessions are at the Forum
>>> on Monday. The Nova onboarding session overlaps with this, along with
>>> some other sessions that impact or are related to Nova. It would have
>>> been nice to do stuff like this toward the end of the week, but I
>>> realize scheduling is a nightmare and not everyone can be pleased, and
>>> that ship has sailed. So I don't think I can be there, but I assume
>>> anything that comes out of it will be proposed to the governance repo or
>>> recapped in the mailing list afterward so we can discuss there.
>>>
>>
>> Right, given that it's against Operating the VM/Baremetal Platform, and
>> Nova Onboarding, I'll just give feedback here.
>>
>> A migration path off of paste would be a huge win. Paste deploy is
>> unmaintained (as noted in the etherpad) and being in etc means it's
>> another piece of gratuitous state that makes upgrading harder than it
>> really should be.
>>
>> This is one of those that is going to require someone to commit to
>> working out that migration path up front. But it would be a pretty good
>> chunk of debt and upgrade ease.
>>
>> -Sean
>>
>>
> So, I don't know what I was thinking when I read thingee's original email,
> but I thought it was Monday but it's actually Thursday which makes it
> better. Sorry for any confusion I caused.
>
> --
>
> Thanks,
>
> Matt
>
>


[openstack-dev] [keystone] session etherpads

2017-05-07 Thread Lance Bragstad
Hey all,

We have a couple sessions to start off the week and I wanted to send out
the links to the etherpads [0] [1] [2].

Let me know if you have any questions. Otherwise feel free to catch up or
pre-populate the etherpads with content if you have any.

Thanks!



[0] https://etherpad.openstack.org/p/BOS-forum-consumable-keystone
[1]
https://etherpad.openstack.org/p/BOS-forum-next-steps-for-rbac-and-policy
[2] https://etherpad.openstack.org/p/BOS-forum-keystone-operator-feedback


[openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-11 Thread Lance Bragstad
Hey all,

One of the Baremetal/VM sessions at the summit focused on what we need to
do to make OpenStack more consumable for application developers [0]. As a
group we recognized the need for application specific passwords or API keys
and nearly everyone (above 85% is my best guess) in the session thought it
was an important thing to pursue. The API key/application-specific password
specification is up for review [1].

The problem is that with all the recent churn in the keystone project, we
don't really have the capacity to commit to this for the cycle. As a
project, we're still working through what we've committed to for Pike
before the OSIC fallout. It was suggested that we reach out to the PWG to
see if this is something we can get some help on from a keystone
development perspective. Let's use this thread to see if there is anyway we
can better enable the community through API keys/application-specific
passwords by seeing if anyone can contribute resources to this effort.

Thanks,

Lance


[0] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
[1] https://review.openstack.org/#/c/450415/


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Lance Bragstad
On Mon, May 15, 2017 at 6:20 AM, Sean Dague  wrote:

> On 05/15/2017 05:59 AM, Andrey Volkov wrote:
> >
> >> The last time this came up, some people were concerned that trusting
> >> request-id on the wire was concerning to them because it's coming from
> >> random users.
> >
> > TBH I don't see the reason why a validated request-id value can't be
> > logged on a callee service side, probably because I missed some previous
> > context. Could you please give an example of such concerns?
> >
> > With service user I see two blocks:
> > - A callee service needs to know if it's "special" user or not.
> > - Until all services don't use a service user we'll not get the complete
> trace.
>
> That is doable, but then you need to build special tools to generate
> even basic flows. It means that the Elastic Search use case (where
> plopping in a request id shows you things across services) does not
> work. Because the child flows don't have the new id.
>
> It's also fine to *also* cross log the child/callee request idea on the
> parent/caller, but it's not actually going to be sufficiently useful to
> most people.
>

+1

To me it makes sense to supply the override so that a single request-id can
track multiple operations across services. But I'm struggling to find a
case where passing a list(global_request_id, local_request_id) is useful.
This might be something we can elaborate on later, if we find a use case
for including multiple request-ids.
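As a rough sketch of the scheme under discussion (hypothetical names; the eventual oslo.middleware implementation may differ): each service keeps its own locally generated request id, and additionally accepts an inbound global request id only if it matches a strict format, so arbitrary user-supplied strings on the wire never make it into the logs.

```python
import re
import uuid

# Only trust inbound global request ids in the canonical 'req-<uuid>'
# form; anything else is treated as absent rather than logged verbatim.
GLOBAL_REQ_ID = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def request_ids(inbound_global_id=None):
    """Return (local_request_id, validated_global_request_id_or_None)."""
    local_id = 'req-' + str(uuid.uuid4())
    if inbound_global_id and GLOBAL_REQ_ID.match(inbound_global_id):
        return local_id, inbound_global_id  # safe to cross-log
    return local_id, None  # malformed or missing: ignore it

local, global_ = request_ids('req-' + str(uuid.uuid4()))
print(global_ is not None)             # True: well-formed id is accepted
print(request_ids('req-<script>')[1])  # None: junk from the wire is dropped
```

This keeps the Elastic Search use case intact (one id traces a flow across services) while addressing the concern about trusting ids from random users.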


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Lance Bragstad
On Sun, May 14, 2017 at 11:59 AM, Monty Taylor  wrote:

> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>
>> Hey all,
>>
>> One of the Baremetal/VM sessions at the summit focused on what we need
>> to do to make OpenStack more consumable for application developers [0].
>> As a group we recognized the need for application specific passwords or
>> API keys and nearly everyone (above 85% is my best guess) in the session
>> thought it was an important thing to pursue. The API
>> key/application-specific password specification is up for review [1].
>>
>> The problem is that with all the recent churn in the keystone project,
>> we don't really have the capacity to commit to this for the cycle. As a
>> project, we're still working through what we've committed to for Pike
>> before the OSIC fallout. It was suggested that we reach out to the PWG
>> to see if this is something we can get some help on from a keystone
>> development perspective. Let's use this thread to see if there is anyway
>> we can better enable the community through API keys/application-specific
>> passwords by seeing if anyone can contribute resources to this effort.
>>
>
> In the session, I signed up to help get the spec across the finish line.
> I'm also going to do my best to write up something resembling a user story
> so that we're all on the same page about what this is, what it isn't and
> what comes next.
>

Thanks Monty. If you have questions about the current proposal, Ron might
be lingering in IRC (rderose). David (dstanek) was also documenting his
perspective in another spec [0].


[0] https://review.openstack.org/#/c/440593/


>
> I probably will not have the time to actually implement the code - but if
> the PWG can help us get resources allocated to this I'll be happy to help
> them.
>
> [0] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
>> <https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal>
>> [1] https://review.openstack.org/#/c/450415/
>> <https://review.openstack.org/#/c/450415/>
>>
>>
>>


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Lance Bragstad
On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak 
wrote:

>
> On 16/05/17 01:09, Lance Bragstad wrote:
>
>
>
> On Sun, May 14, 2017 at 11:59 AM, Monty Taylor 
> wrote:
>
>> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>>
>>> Hey all,
>>>
>>> One of the Baremetal/VM sessions at the summit focused on what we need
>>> to do to make OpenStack more consumable for application developers [0].
>>> As a group we recognized the need for application specific passwords or
>>> API keys and nearly everyone (above 85% is my best guess) in the session
>>> thought it was an important thing to pursue. The API
>>> key/application-specific password specification is up for review [1].
>>>
>>> The problem is that with all the recent churn in the keystone project,
>>> we don't really have the capacity to commit to this for the cycle. As a
>>> project, we're still working through what we've committed to for Pike
>>> before the OSIC fallout. It was suggested that we reach out to the PWG
>>> to see if this is something we can get some help on from a keystone
>>> development perspective. Let's use this thread to see if there is anyway
>>> we can better enable the community through API keys/application-specific
>>> passwords by seeing if anyone can contribute resources to this effort.
>>>
>>
>> In the session, I signed up to help get the spec across the finish line.
>> I'm also going to do my best to write up something resembling a user story
>> so that we're all on the same page about what this is, what it isn't and
>> what comes next.
>>
>
> Thanks Monty. If you have questions about the current proposal, Ron might
> be lingering in IRC (rderose). David (dstanek) was also documenting his
> perspective in another spec [0].
>
>
> [0] https://review.openstack.org/#/c/440593/
>
>
>
> Based on the specs that are currently up in Keystone-specs, I would highly
> recommend not doing this per user.
>
> The scenario I imagine is you have a sysadmin at a company who created a
> ton of these for various jobs and then leaves. The company then needs to
> keep his user account around, or create tons of new API keys, and then
> disable his user once all the scripts he had keys for are replaced. Or more
> often then not, disable his user and then cry as everything breaks and no
> one really knows why or no one fully documented it all, or didn't read the
> docs. Keeping them per project and unrelated to the user makes more sense,
> as then someone else on your team can regenerate the secrets for the
> specific Keys as they want. Sure we can advise them to use generic user
> accounts within which to create these API keys but that implies password
> sharing which is bad.
>
>
> That said, I'm curious why we would make these as a thing separate to
> users. In reality, if you can create users, you can create API specific
> users. Would this be a different authentication mechanism? Why? Why not
> just continue the work on better access control and let people create users
> for this. Because lets be honest, isn't a user already an API key? The
> issue (and the Ron's spec mentions this) is a user having too much access,
> how would this fix that when the issue is that we don't have fine grained
> policy in the first place? How does a new auth mechanism fix that? Both
> specs mention roles so I assume it really doesn't. If we had fine grained
> policy we could just create users specific to a service with only the roles
> it needs, and the same problem is solved without any special API, new auth,
> or different 'user-lite' object model. It feels like this is trying to
> solve an issue that is better solved by fixing the existing problems.
>
> I like the idea behind these specs, but... I'm curious what exactly they
> are trying to solve. Not to mention if you wanted to automate anything
> larger such as creating sub-projects and setting up a basic network for
> each new developer to get access to your team, this wouldn't work unless
> you could have your API key inherit to subprojects or something more
> complex, at which point they may as well be users. Users already work for
> all of this, why reinvent the wheel when really the issue isn't the wheel
> itself, but the steering mechanism (access control/policy in this case)?
>
>
All valid points, but IMO the discussions around API keys didn't set out to
fix deep-rooted issues with policy. We have several specs in flight across
projects to help mitigate the real issues with policy [0] [1] [2] [3] [4].

I see an API key implementation as 

Re: [openstack-dev] [keystone] [Pile] Need Exemption On Submitted Spec for the Keystone

2017-05-16 Thread Lance Bragstad
That sounds good - I'll review the spec before today's meeting [0]. Will
someone be around to answer questions about the spec if there are any?


[0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting

On Mon, May 15, 2017 at 11:24 PM, Mh Raies  wrote:

> Hi Lance,
>
>
>
> We had submitted one blueprint and it’s Specs last weeks.
>
> Blueprint - https://blueprints.launchpad.net/keystone/+spec/api-implemetation-required-to-download-identity-policies
>
> Spec - https://review.openstack.org/#/c/463547/
>
>
>
> As the Keystone Pike proposal freeze was already completed on April 14th,
> 2017, we need your help to proceed with this Spec.
>
> Implementation of this Spec has also started and is being addressed by -
> https://review.openstack.org/#/c/463543/
>
>
>
> So, if we can get an exemption to proceed with the Spec review and
> approval process, it will be a great help for us.
>
>
>
>
>
>
>
>
> *Mh Raies*
>
> *Senior Solution Integrator*
> *Ericsson** Consulting and Systems Integration*
>
> *Gurgaon, India | Mobile **+91 9901555661 <+91%2099015%2055661>*
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Lance Bragstad
On Tue, May 16, 2017 at 8:54 AM, Monty Taylor  wrote:

> On 05/16/2017 05:39 AM, Sean Dague wrote:
>
>> On 05/15/2017 10:00 PM, Adrian Turjak wrote:
>>
>>>
>>>
>>> On 16/05/17 13:29, Lance Bragstad wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak
>>>> mailto:adri...@catalyst.net.nz>> wrote:
>>>>
>>> 
>>
>>> Based on the specs that are currently up in Keystone-specs, I
>>>> would highly recommend not doing this per user.
>>>>
>>>> The scenario I imagine is you have a sysadmin at a company who
>>>> created a ton of these for various jobs and then leaves. The
>>>> company then needs to keep his user account around, or create tons
>>>> of new API keys, and then disable his user once all the scripts he
> >>>> had keys for are replaced. Or, more often than not, disable his
> >>>> user and then cry as everything breaks and no one really knows why,
> >>>> because no one fully documented it (or no one read the docs).
>>>> Keeping them per project and unrelated to the user makes more
>>>> sense, as then someone else on your team can regenerate the
>>>> secrets for the specific Keys as they want. Sure we can advise
>>>> them to use generic user accounts within which to create these API
> >>>> keys, but that implies password sharing, which is bad.
>>>>
>>>>
> >>>> That said, I'm curious why we would make these a thing separate
> >>>> from users. In reality, if you can create users, you can create API
>>>> specific users. Would this be a different authentication
>>>> mechanism? Why? Why not just continue the work on better access
> >>>> control and let people create users for this. Because let's be
> >>>> honest, isn't a user already an API key? The issue (and Ron's
> >>>> spec mentions this) is a user having too much access, how would
>>>> this fix that when the issue is that we don't have fine grained
>>>> policy in the first place? How does a new auth mechanism fix that?
>>>> Both specs mention roles so I assume it really doesn't. If we had
>>>> fine grained policy we could just create users specific to a
>>>> service with only the roles it needs, and the same problem is
>>>> solved without any special API, new auth, or different 'user-lite'
>>>> object model. It feels like this is trying to solve an issue that
>>>> is better solved by fixing the existing problems.
>>>>
>>>> I like the idea behind these specs, but... I'm curious what
>>>> exactly they are trying to solve. Not to mention if you wanted to
>>>> automate anything larger such as creating sub-projects and setting
>>>> up a basic network for each new developer to get access to your
>>>> team, this wouldn't work unless you could have your API key
>>>> inherit to subprojects or something more complex, at which point
>>>> they may as well be users. Users already work for all of this, why
>>>> reinvent the wheel when really the issue isn't the wheel itself,
>>>> but the steering mechanism (access control/policy in this case)?
>>>>
>>>>
>>>> All valid points, but IMO the discussions around API keys didn't set
>>>> out to fix deep-rooted issues with policy. We have several specs in
> >>>> flight across projects to help mitigate the real issues with policy
>>>> [0] [1] [2] [3] [4].
>>>>
>>>> I see an API key implementation as something that provides a cleaner
>>>> fit and finish once we've addressed the policy bits. It's also a
>>>> familiar concept for application developers, which was the use case
>>>> the session was targeting.
>>>>
>>>> I probably should have laid out the related policy work before jumping
>>>> into API keys. We've already committed a bunch of keystone resource to
>>>> policy improvements this cycle, but I'm hoping we can work API keys
>>>> and policy improvements in parallel.
>>>>
>>>> [0] https://review.openstack.org/#/c/460344/
>>>> [1] https://review.openstack.org/#/c/462733/
>>>> [2] https://review.openstack.org/#/c/464763/
>>>> [

[openstack-dev] [keystone][nova][cinder][policy] policy meeting tomorrow

2017-05-16 Thread Lance Bragstad
Hey folks,

Sending out a reminder that we will have the policy meeting tomorrow [0].
The agenda [1] is already pretty full but we are going to need
cross-project involvement tomorrow considering the topics and impacts.

I'll be reviewing policy things in the morning so if anyone has questions
or wants to hash things out before hand, come find me.

Thanks,

Lance

[0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
[1] https://etherpad.openstack.org/p/keystone-policy-meeting


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Lance Bragstad
On Thu, May 18, 2017 at 8:45 AM, Sean Dague  wrote:

> On 05/18/2017 09:27 AM, Doug Hellmann wrote:
> > Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:
> >
> >> Fully agree that expecting users of a particular cloud to understand how
> >> the policy stuff works is pointless, but it does fall on the cloud
> >> provider to educate and document their roles and the permissions of
> >> those roles. I think step 1 plus some basic role permissions for the
> >
> > Doesn't basing the API key permissions directly on roles also imply that
> > the cloud provider has to anticipate all of the possible ways API keys
> > might be used so they can then set up those roles?
>
> Not really. It's not explicit roles, it's inherited ones. At some point
> an administrator gave a user permission to do stuff (through roles that
> may be site specific). Don't care how we got there. The important thing
> is those are cloned to the APIKey, otherwise, the APIKey literally
> would not be able to do anything, ever. Discussing roles here was an
> attempt to look at how internals would work today, though it's
> definitely not part of the contract of this new interface.
>
> There is a lot more implicitness in what roles mean (see
> https://bugs.launchpad.net/keystone/+bug/968696) which is another reason
> I'm really skeptical that we should have roles or policy points in the
> APIKey interface. Describing what they do in any particular installation
> is a ton of work. And you thought ordering a Medium coffee at Starbucks
> was annoying. :)
>
> The important thing is to make a clear and expressive API with the user
> so they can be really clear about what they expect a thing should do.
>
> >> Keys with the expectation of operators to document their roles/policy is
> >> a safe enough place to start, and for us to document and set some
> >> sensible default roles and policy. I don't think we currently have good
> >
> > This seems like an area where we want to encourage interoperability.
> > Policy doesn't do that today, because deployers can use arbitrary
> > names for roles and set permissions in those roles in any way they
> > want. That's fine for human users, but doesn't work for enabling
> > automation. If the sets of roles and permissions are different in
> > every cloud, how would anyone write a key allocation script that
> > could provision a key for their application on more than one cloud?
>
> So, this is where there are internals happening distinctly from user
> expressed intent.
>
> POST /apikey {}
>
> Creates an APIKey, in the project the token is currently authed to, and
> the APIKey inherits all the roles on that project that the user
> currently has. The user may or may not even know what these are. It's
> not a user interface.
>

If we know the user_id and project_id of the API key, then can't we build
the roles dynamically whenever the API key is used (unless the API key is
scoped to a single role)? This is the same approach we recently took with
token validation because it made the revocation API sub-system *way*
simpler (i.e. we no longer have to write revocation events anytime a role
is removed from a user on a project, instead the revocation happens
naturally when the token is used). Would this be helpful from a "default
open" PoV with API keys?
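
To make that concrete, here is a toy, plain-Python sketch of
validation-time role resolution. Every name in it (ASSIGNMENTS, ApiKey,
validate()) is hypothetical and not keystone's real data model or API:

```python
# Hypothetical sketch: resolve an API key's roles at *use* time instead of
# snapshotting them at creation time. None of these names are real
# keystone interfaces.

# Live role assignments: (user_id, project_id) -> set of role names.
ASSIGNMENTS = {
    ("alice", "proj-1"): {"member", "admin"},
}

class ApiKey:
    """An API key that stores only identifiers, never roles."""
    def __init__(self, user_id, project_id):
        self.user_id = user_id
        self.project_id = project_id

def validate(key):
    """Look the roles up fresh on every use, like fernet token validation.

    If a role is removed from the user, the key's effective roles shrink
    immediately -- no revocation event has to be written anywhere.
    """
    roles = ASSIGNMENTS.get((key.user_id, key.project_id), set())
    if not roles:
        raise PermissionError("key no longer maps to any assignment")
    return {"user_id": key.user_id,
            "project_id": key.project_id,
            "roles": sorted(roles)}

key = ApiKey("alice", "proj-1")
print(validate(key)["roles"])                 # -> ['admin', 'member']

ASSIGNMENTS[("alice", "proj-1")].discard("admin")
print(validate(key)["roles"])                 # -> ['member']
```

The trade-off is exactly the "default open" question: the key silently
gains any role later granted to the user, unless it is pinned to a subset
at creation time.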

We touched on blacklisting certain operations a bit in Atlanta at the PTG
(see the API key section) [0]. I attempted to document it shortly after the
PTG, but some of those statements might be superseded at this point.


[0] https://www.lbragstad.com/blog/keystone-pike-ptg-summary


>
> The contract is "Give me an APIKey that can do what I do*" (* with the
> exception of self-propagating, i.e. the skynet exception).
>
> That's iteration #1. APIKey can do what I can do.
>
> Iteration #2 is fine grained permissions that make it so I can have an
> APIKey do far less than I can do.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-18 Thread Lance Bragstad
I followed up with Sean in IRC [0]. My last note about rebuilding role
assignment dynamically doesn't really make sense. I was approaching this
from a different perspective.


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-05-18.log.html#t2017-05-18T15:20:32

On Thu, May 18, 2017 at 9:39 AM, Lance Bragstad  wrote:

>
>
> On Thu, May 18, 2017 at 8:45 AM, Sean Dague  wrote:
>
>> On 05/18/2017 09:27 AM, Doug Hellmann wrote:
>> > Excerpts from Adrian Turjak's message of 2017-05-18 13:34:56 +1200:
>> >
>> >> Fully agree that expecting users of a particular cloud to understand
>> how
>> >> the policy stuff works is pointless, but it does fall on the cloud
>> >> provider to educate and document their roles and the permissions of
>> >> those roles. I think step 1 plus some basic role permissions for the
>> >
>> > Doesn't basing the API key permissions directly on roles also imply that
>> > the cloud provider has to anticipate all of the possible ways API keys
>> > might be used so they can then set up those roles?
>>
>> Not really. It's not explicit roles, it's inherited ones. At some point
>> an administrator gave a user permission to do stuff (through roles that
>> may be site specific). Don't care how we got there. The important thing
>> is those are cloned to the APIKey, otherwise, the APIKey literally
>> would not be able to do anything, ever. Discussing roles here was an
>> attempt to look at how internals would work today, though it's
>> definitely not part of the contract of this new interface.
>>
>> There is a lot more implicitness in what roles mean (see
>> https://bugs.launchpad.net/keystone/+bug/968696) which is another reason
>> I'm really skeptical that we should have roles or policy points in the
>> APIKey interface. Describing what they do in any particular installation
>> is a ton of work. And you thought ordering a Medium coffee at Starbucks
>> was annoying. :)
>>
>> The important thing is to make a clear and expressive API with the user
>> so they can be really clear about what they expect a thing should do.
>>
>> >> Keys with the expectation of operators to document their roles/policy
>> is
>> >> a safe enough place to start, and for us to document and set some
>> >> sensible default roles and policy. I don't think we currently have good
>> >
>> > This seems like an area where we want to encourage interoperability.
>> > Policy doesn't do that today, because deployers can use arbitrary
>> > names for roles and set permissions in those roles in any way they
>> > want. That's fine for human users, but doesn't work for enabling
>> > automation. If the sets of roles and permissions are different in
>> > every cloud, how would anyone write a key allocation script that
>> > could provision a key for their application on more than one cloud?
>>
>> So, this is where there are internals happening distinctly from user
>> expressed intent.
>>
>> POST /apikey {}
>>
>> Creates an APIKey, in the project the token is currently authed to, and
>> the APIKey inherits all the roles on that project that the user
>> currently has. The user may or may not even know what these are. It's
>> not a user interface.
>>
>
> If we know the user_id and project_id of the API key, then can't we build
> the roles dynamically whenever the API key is used (unless the API key is
> scoped to a single role)? This is the same approach we recently took with
> token validation because it made the revocation API sub-system *way*
> simpler (i.e. we no longer have to write revocation events anytime a role
> is removed from a user on a project, instead the revocation happens
> naturally when the token is used). Would this be helpful from a "default
> open" PoV with API keys?
>
> We touched on blacklisting certain operations a bit in Atlanta at the PTG
> (see the API key section) [0]. I attempted to document it shortly after the
> PTG, but some of those statements might be superseded at this point.
>
>
> [0] https://www.lbragstad.com/blog/keystone-pike-ptg-summary
>
>
>>
>> The contract is "Give me an APIKey that can do what I do*" (* with the
>> exception of self-propagating, i.e. the skynet exception).
>>
>> That's iteration #1. APIKey can do what I can do.
>>
>> Iteration #2 is fine grained permissions that make it so I can have an
>> APIKey do far less than I can do.
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> 
>>
>
>


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-19 Thread Lance Bragstad
On Thu, May 18, 2017 at 6:43 PM, Curtis  wrote:

> On Thu, May 18, 2017 at 4:13 PM, Adrian Turjak 
> wrote:
> > Hello fellow OpenStackers,
> >
> > For the last while I've been looking at options for multi-region
> > multi-master Keystone, as well as multi-master for other services I've
> > been developing and one thing that always came up was there aren't many
> > truly good options for a true multi-master backend. Recently I've been
> > looking at Cockroachdb and while I haven't had the chance to do any
> > testing I'm curious if anyone else has looked into it. It sounds like
> > the perfect solution, and if it can be proved to be stable enough it
> > could solve a lot of problems.
> >
> > So, specifically in the realm of Keystone, since we are using sqlalchemy
> > we already have Postgresql support, and since Cockroachdb does talk
> > Postgres it shouldn't be too hard to back Keystone with it. At that
> > stage you have a Keystone DB that could be multi-region, multi-master,
> > consistent, and mostly impervious to disaster. Is that not the holy
> > grail for a service like Keystone? Combine that with fernet tokens and
> > suddenly Keystone becomes a service you can't really kill, and can
> > mostly forget about.
>

++


> >
> > I'm welcome to being called mad, but I am curious if anyone has looked
> > at this. I'm likely to do some tests at some stage regarding this,
> > because I'm hoping this is the solution I've been hoping to find for
> > quite a long time.
>
> I was going to take a look at this a bit myself, just try it out. I
> can't completely speak for the Fog/Edge/Massively Distributed working
> group in OpenStack, but I feel like this might be something they look
> into.
>
> For standard multi-site I don't know how much it would help, say if
> you only had a couple or three clouds, but more than that maybe this
> starts to make sense. Also running Galera has gotten easier but still
> not that easy.
>

When we originally tested a PoC fernet implementation, we did it globally
distributed across five data centers. We didn't generate enough non-token
load to notice significant service degradation due to replication lag or
issues. I have heard that replication across a double-digit number of regions
is where you start running into some really interesting problems (gyee was one
of the folks in keystone who knew more about that). Dusting off those cases
with something like cockroachdb would be an interesting exercise!
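
For anyone new to the thread, the property that makes fernet attractive here
can be shown with a stdlib-only toy. This is deliberately NOT the real fernet
format (which also encrypts the payload with AES and supports key rotation);
it only illustrates why a signed, self-describing token needs no persisted
token table:

```python
# Toy sketch of why fernet-style tokens need no persisted token state:
# the token carries its own payload plus a MAC, so any keystone node that
# holds the signing key can validate it locally. NOT the real fernet format.
import base64
import hashlib
import hmac
import json

KEY = b"shared-repository-key"  # distributed to every region, like a fernet key

def issue(payload: dict) -> str:
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(KEY, body, hashlib.sha256).digest()  # 32-byte tag
    return base64.urlsafe_b64encode(body + mac).decode()

def validate(token: str) -> dict:
    raw = base64.urlsafe_b64decode(token.encode())
    body, mac = raw[:-32], raw[-32:]
    if not hmac.compare_digest(mac, hmac.new(KEY, body, hashlib.sha256).digest()):
        raise ValueError("bad signature")
    return json.loads(body)

token = issue({"user_id": "alice", "project_id": "proj-1"})
print(validate(token)["user_id"])  # any node with KEY can validate: no DB row
```

Because validation only needs the shared key, every region can validate
tokens issued anywhere, and the replication question shrinks to the much
smaller, slower-changing identity and assignment tables.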


>
> I had thought that the OpenStack community was deprecating Postgres
> support though, so that could make things a bit harder here (I might
> be wrong about this).
>
> Thanks,
> Curtis.
>
> >
> > Further reading:
> > https://www.cockroachlabs.com/
> > https://github.com/cockroachdb/cockroach
> > https://www.cockroachlabs.com/docs/build-a-python-app-with-
> cockroachdb-sqlalchemy.html
> >
> > Cheers,
> > - Adrian Turjak
> >
> >
> > 
>
>
>
> --
> Blog: serverascode.com
>
>


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-05-19 Thread Lance Bragstad
Project: Keystone
Attendees: 12 - 15

We conflicted with one of the Baremetal/VM sessions

I attempted to document most of the session in my recap [0].

We started out by doing a round-the-room of introductions so that folks
could put IRC nicks to faces (we also didn't have a packed room so this
went pretty quick). After that we cruised through a summary of keystone,
the format of the projects, and the various processes we use. All of this
took *maybe* 30 minutes.

From there we had an open discussion and things evolved organically. We
ended up going through:

   - the differences between the v2.0 and v3 APIs
   - keystonemiddleware architecture, how it aids services, and how it
   interacts with keystone
  - we essentially followed an API call for creating a instance from
  keystone -> nova -> glance
   - how authentication scoping works and why it works that way
   - how federation works and why it's setup the way it is
   - how federated authentication works (https://goo.gl/NfY3mr)

All of this was pretty well-received and generated a lot of productive
discussion. We also had several seasoned keystone contributors in the room,
which helped a lot. Most of the attendees were all curious about similar
topics, which was great, but we totally could have split into separate
groups given the experience we had in the room (we'll save that in our back
pocket for next time).

[0] https://www.lbragstad.com/blog/openstack-boston-summit-recap
[1] https://www.slideshare.net/LanceBragstad/keystone-project-onboarding

On Fri, May 19, 2017 at 10:37 AM, Michał Jastrzębski 
wrote:

> Kolla:
> Attendees - full room (20-30?)
> Notes - Conflict with kolla-k8s demo probably didn't help
>
> While we didn't have etherpad, slides, recording (and video dongle
> that could fit my laptop), we had great session with analog tools
> (whiteboard and my voice chords). We walked through architecture of
> each Kolla project, how they relate to each other and so on.
>
> Couple things to take out from our onboarding:
> 1. Bring dongles
> 2. We could've used a bigger room - people were leaving because we had
> no chairs left
> 3. Recording would be awesome
> 4. Low tech is not a bad tech
>
> All and all, when we started session I didn't know what to expect or
> what people will expect so we just...rolled with it, and people seemed
> to be happy with it:) I think onboarding rooms were great idea (kudos
> to whoever came up with it)! I'll be happy to run it again in Sydney.
>
> Cheers,
> Michal
>
>
> On 19 May 2017 at 08:12, Julien Danjou  wrote:
> > On Fri, May 19 2017, Sean Dague wrote:
> >
> >> If you ran a room, please post the project, what you did in the room,
> >> what you think worked, what you would have done differently. If you
> >> attended a room you didn't run, please provide feedback about which one
> >> it was, and what you thought worked / didn't work from the other side of
> >> the table.
> >
> > We shared a room for Telemetry and CloudKitty for 90 minutes.
> > I was there with Gordon Chung for Telemetry.
> > Christophe Sauthier was there for CloudKitty.
> >
> > We only had 3 people showing up in the session. One wanted to read his
> > emails in a quiet room, the two others had a couple of question on
> > Telemetry – though it was not really related to contribution as far as I
> > can recall.
> >
> > I had to leave after 45 minutes because there was an overlap with a talk
> > I was doing and rescheduling did not seem possible. And everybody left a
> > few minutes after I left apparently.
> >
> > --
> > Julien Danjou
> > -- Free Software hacker
> > -- https://julien.danjou.info
> >
> > 
> >
>
>


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-23 Thread Lance Bragstad
I'm in favor of option #1. I think it encourages our developers to become
better writers with guidance from the docs team. While ensuring docs are
proposed prior to merging the implementation cross-repository is totally
possible, I think #1 makes that flow easier.

Thanks for putting together the options, Alex.

On Tue, May 23, 2017 at 11:02 AM, Ildiko Vancsa 
wrote:

> Hi Alex,
>
> First of all thank you for writing this up the summary and list options
> with their expected impacts.
>
> >
> > 1. We could combine all of the documentation builds, so that each
> project has a single doc/source directory that includes developer,
> contributor, and user documentation. This option would reduce the number of
> build jobs we have to run, and cut down on the number of separate sphinx
> configurations in each repository. It would completely change the way we
> publish the results, though, and we would need to set up redirects from all
> of the existing locations to the new locations and move all of the existing
> documentation under the new structure.
> >
> > 2. We could retain the existing trees for developer and API docs, and
> add a new one for "user" documentation. The installation guide,
> configuration guide, and admin guide would move here for all projects.
> Neutron's user documentation would include the current networking guide as
> well. This option would add 1 new build to each repository, but would allow
> us to easily roll out the change with less disruption in the way the site
> is organized and published, so there would be less work in the short term.
>
> I’m fully in favor of option #1 and/or option #2. From the perspective of
> trying to move step-by-step and give a chance to project teams to
> acclimatize with the changes I think starting with #2 should be sufficient.
>
> Although if we think that option #1 is doable as a starting point and also
> end goal, you have my support for that too.
>
> >
> > 3. We could do option 2, but use a separate repository for the new
> user-oriented documentation. This would allow project teams to delegate
> management of the documentation to a separate review project-sub-team, but
> would complicate the process of landing code and documentation updates
> together so that the docs are always up to date.
> >
>
> As one of the advocates of having the documentation live together with
> the code so that the experts on the code changes have a chance to
> add the corresponding documentation as well, I'm definitely against option
> #3. :)
>
> Thanks and Best Regards,
> IldikĂ³
>


[openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
Hey all,

To date we have two proposed solutions for tackling the admin-ness issue we
have across the services. One builds on the existing scope concepts by
scoping to an admin project [0]. The other introduces global role
assignments [1] as a way to denote elevated privileges.

I'd like to get some feedback from operators, as well as developers from
other projects, on each approach. Since work is required in keystone, it
would be good to get consensus before spec freeze (June 9th). If you have
specific questions on either approach, feel free to ping me or drop by the
weekly policy meeting [2].

Thanks!

[0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
[1] https://review.openstack.org/#/c/464763/
[2] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting


Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
I'd like to fill in a little more context here. I see three options with
the current two proposals.

*Option 1*

Use a special admin project to denote elevated privileges. For those
unfamiliar with the approach, it would rely on every deployment having an
"admin" project defined in configuration [0].

*How it works:*

Role assignments on this project represent global scope which is denoted by
a boolean attribute in the token response. A user with an 'admin' role
assignment on this project is equivalent to the global or cloud
administrator. Ideally, if a user has a 'reader' role assignment on the
admin project, they could have access to list everything within the
deployment, provided all the proper changes are made across the various
services. The workflow requires a special project for any sort of elevated
privilege.
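
For concreteness, the deployment side of this option comes down to two
configuration options. The section and option names below are my reading of
the sample config linked in [0] and may differ between releases, so treat
them as illustrative:

```ini
# keystone.conf -- designate one project as "the" admin project. Tokens
# scoped to this project come back with is_admin_project set to true.
[resource]
admin_project_domain_name = Default
admin_project_name = admin
```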

Pros:
- Almost all the work is done to make keystone understand the admin
project, there are already several patches in review to other projects to
consume this
- Operators can create roles and assign them to the admin_project as needed
after the upgrade to give proper global scope to their users

Cons:
- All global assignments are linked back to a single project
- Describing the flow is confusing because in order to give someone global
access you have to give them a role assignment on a very specific project,
which seems like an anti-pattern
- We currently don't allow some things to exist in the global sense (i.e. I
can't launch instances without tenancy), the admin project could own
resources
- What happens if the admin project disappears?
- Tooling or scripts will be written around the admin project, instead of
treating all projects equally

*Option 2*

Implement global role assignments in keystone.

*How it works:*

Role assignments in keystone can be scoped to global context. Users can
then ask for a globally scoped token

Pros:
- This approach represents a more accurate long term vision for role
assignments (at least how we understand it today)
- Operators can create global roles and assign them as needed after the
upgrade to give proper global scope to their users
- It's easier to explain global scope using global role assignments instead
of a special project
- token.is_global = True and token.role = 'reader' is easier to understand
than token.is_admin_project = True and token.role = 'reader'
- A global token can't be associated to a project, making it harder for
operations that require a project to consume a global token (i.e. I
shouldn't be able to launch an instance with a globally scoped token)

Cons:
- We need to start from scratch implementing global scope in keystone,
steps for this are detailed in the spec
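
To make the comparison concrete, here is a toy sketch of the check a
consuming service might write under each option. The attribute names
(is_global, is_admin_project) are illustrative guesses at an
oslo.context-style interface, not a real one:

```python
# Toy comparison of service-side enforcement under each option.
# All attribute names here are hypothetical, not a real interface.
class Token:
    def __init__(self, roles, project_id=None,
                 is_global=False, is_admin_project=False):
        self.roles = roles
        self.project_id = project_id
        self.is_global = is_global
        self.is_admin_project = is_admin_project

def can_list_everything_option1(token):
    # Option 1: global read == 'reader' role on the special admin project.
    return token.is_admin_project and "reader" in token.roles

def can_list_everything_option2(token):
    # Option 2: global read == a globally scoped token with 'reader'.
    # A global token carries no project, so project-bound operations
    # (e.g. launching an instance) naturally can't consume it.
    return token.is_global and "reader" in token.roles

auditor = Token(roles={"reader"}, is_global=True)
print(can_list_everything_option2(auditor))  # -> True
print(auditor.project_id)                    # -> None: nothing to launch into
```

The last two lines are the point of the final pro above: a global token has
no project, so there is nothing for a project-bound operation to bind to.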

*Option 3*

We do option one and then follow it up with option two.

*How it works:*

We implement option one and continue solving the admin-ness issues in Pike
by helping projects consume and enforce it. We then target the
implementation of global roles for Queens.

Pros:
- If we make the interface in oslo.context for global roles consistent,
then consuming projects shouldn't know the difference between using the
admin_project or a global role assignment

Cons:
- It's more work and we're already strapped for resources
- We've told operators that the admin_project is a thing but after Queens
they will be able to do real global role assignments, so they should now
migrate *again*
- We have to support two paths for solving the same problem in keystone,
more maintenance and more testing to ensure they both behave exactly the
same way
  - This can get more complicated for projects dedicated to testing policy
and RBAC, like Patrole


Looking for feedback here as to which one is preferred given timing and
payoff, specifically from operators who would be doing the migrations to
implement and maintain proper scope in their deployments.

Thanks for reading!


[0]
https://github.com/openstack/keystone/blob/3d033df1c0fdc6cc9d2b02a702efca286371f2bd/etc/keystone.conf.sample#L2334-L2342

On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 
wrote:

> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness issue
> we have across the services. One builds on the existing scope concepts by
> scoping to an admin project [0]. The other introduces global role
> assignments [1] as a way to denote elevated privileges.
>
> I'd like to get some feedback from operators, as well as developers from
> other projects, on each approach. Since work is required in keystone, it
> would be good to get consensus before spec freeze (June 9th). If you have
> specific questions on either approach, feel free to ping me or drop by the
> weekly policy meeting [2].
>
> Thanks!
>
> [0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
> [1] https://review.openstack.org/#/c/464763/
> [2] http://eavesdrop.openstack.org/#Keystone_Polic

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-25 Thread Lance Bragstad
On Thu, May 25, 2017 at 2:36 PM, Marc Heckmann 
wrote:

> First of all @Lance, thanks for taking the time to write and summarize
> this for us. It's much appreciated.
>

Absolutely! It helps me think about it, too.


>
> While I'm not aware of all the nuances, based on my own testing, I feel
> that we are really close with option 1.
>
> That being said, as you already stated, option 2 is clearly more inline
> with the idea of having a "global" Cloud Admin role. So long term, #2 is
> more desirable.
>
> Given the two sentences above, I certainly would prefer option 3 so that
> we can have a usable solution quickly. I certainly will continue to test
> and provide feedback for the option 1 part.
>
>
It sounds like eventually migrating everything from the is_admin_project to
true global roles is a migration you're willing to make. This might be a
loaded question and it will vary across deployments, but how long would you
expect that migration to take for your specific deployment(s)?


-m
>
>
>
>
> On Thu, 2017-05-25 at 10:42 +1200, Adrian Turjak wrote:
>
>
>
> On 25/05/17 07:47, Lance Bragstad wrote:
> 
>
> *Option 2*
>
> Implement global role assignments in keystone.
>
> *How it works:*
>
> Role assignments in keystone can be scoped to global context. Users can
> then ask for a globally scoped token
>
> Pros:
> - This approach represents a more accurate long term vision for role
> assignments (at least how we understand it today)
> - Operators can create global roles and assign them as needed after the
> upgrade to give proper global scope to their users
> - It's easier to explain global scope using global role assignments
> instead of a special project
> - token.is_global = True and token.role = 'reader' is easier to understand
> than token.is_admin_project = True and token.role = 'reader'
> - A global token can't be associated to a project, making it harder for
> operations that require a project to consume a global token (i.e. I
> shouldn't be able to launch an instance with a globally scoped token)
>
> Cons:
> - We need to start from scratch implementing global scope in keystone,
> steps for this are detailed in the spec
>
> 
>
>
> On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 
> wrote:
>
> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness issue
> we have across the services. One builds on the existing scope concepts by
> scoping to an admin project [0]. The other introduces global role
> assignments [1] as a way to denote elevated privileges.
>
> I'd like to get some feedback from operators, as well as developers from
> other projects, on each approach. Since work is required in keystone, it
> would be good to get consensus before spec freeze (June 9th). If you have
> specific questions on either approach, feel free to ping me or drop by the
> weekly policy meeting [2].
>
> Thanks!
>
>
> Please option 2. The concept of being an "admin" while you are only scoped
> to a project is stupid when that admin role gives you super user power yet
> you only have it when scoped to just that project. That concept never
> really made sense. Global scope makes so much more sense when that is the
> power the role gives.
>
> At the same time, it kind of would be nice to make scope actually matter. As
> admin you have a role on Project X, yet you can now (while scoped to this
> project) pretty much do anything anywhere! I think global roles is a great
> step in the right direction, but beyond and after that we need to seriously
> start looking at making scope itself matter, so that giving someone 'admin'
> or some such on a project actually only gives them something akin to
> project_admin or some sort of admin-lite powers scoped to that project and
> sub-projects. That though falls into the policy work being done, but should
> be noted, as it is related.
>
> Still, at least global scope for roles makes the superuser case make some
> actual sense, because (and I can't speak for other deployers), we have one
> project pretty much dedicated as an "admin_project" and it's just odd to
> actually need to give our service users roles in a project when that
> project is empty and a pointless construct for their purpose.
>
> Also thanks for pushing this! I've been watching your global roles spec
> review in hopes we'd go down that path. :)
>
> -Adrian
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Lance Bragstad
On Fri, May 26, 2017 at 5:32 AM, Sean Dague  wrote:

> On 05/26/2017 03:44 AM, John Garbutt wrote:
> > +1 on not forcing Operators to transition to something new twice, even
> > if we did go for option 3.
> >
> > Do we have an agreed non-disruptive upgrade path mapped out yet? (For
> > any of the options) We spoke about fallback rules you pass but with a
> > warning to give us a smoother transition. I think that's my main
> > objection with the existing patches, having to tell all admins to get
> > their token for a different project, and give them roles in that
> > project, all before being able to upgrade.
>
> I definitely think the double migration is a good reason to just do this
> thing right the first time.
>
> My biggest real concern with is_admin_project (and the service project),
> is that it's not very explicit. It's mostly a way to trick the current
> plumbing into acting a different way. Which is fine if you are a
> deployer and need to create this behavior with the existing codebase you
> have. Which seems to have all come down to there being a
> misunderstanding of what Roles were back in 2012, and the two camps went
> off in different directions (roles really being project scoped, and
> roles meaning global).
>
> It would be really great if the inflated context we got was "roles: x,
> y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
> and oslo.context might be weaving some magic there). I honestly think
> that until we've got a very clear separation at that level, it's going
> to be really tough to get policy files in projects to be any more
> sensible in their defaults. Leaking is_admin_project all the way through
> to a service and having them have to consider it for their policy (which
> we do with the context today) definitely feels like a layer violation.
>

This is another good point. If we can ensure projects rely on oslo.context
to get scope information in a canonical form (like context.scope ==
'global' or context.scope == 'project') that might make consuming all this
easier. But it does require us to ensure oslo.context does the right thing
with various token types. I included some of that information in the spec
[0] but I didn't go into great detail. I thought about adding it to the
keystone spec but wasn't sure if that would be the right place for it.

[0] https://review.openstack.org/#/c/464763


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Lance Bragstad
On Fri, May 26, 2017 at 9:31 AM, Sean Dague  wrote:

> On 05/26/2017 10:05 AM, Lance Bragstad wrote:
> >
> >
> > On Fri, May 26, 2017 at 5:32 AM, Sean Dague  > <mailto:s...@dague.net>> wrote:
> >
> > On 05/26/2017 03:44 AM, John Garbutt wrote:
> > > +1 on not forcing Operators to transition to something new twice,
> even
> > > if we did go for option 3.
> > >
> > > Do we have an agreed non-disruptive upgrade path mapped out yet?
> (For
> > > any of the options) We spoke about fallback rules you pass but
> with a
> > > warning to give us a smoother transition. I think that's my main
> > > objection with the existing patches, having to tell all admins to
> get
> > > their token for a different project, and give them roles in that
> > > project, all before being able to upgrade.
> >
> > I definitely think the double migration is a good reason to just do
> this
> > thing right the first time.
> >
> > My biggest real concern with is_admin_project (and the service
> project),
> > is that it's not very explicit. It's mostly a way to trick the
> current
> > plumbing into acting a different way. Which is fine if you are a
> > deployer and need to create this behavior with the existing codebase
> you
> > have. Which seems to have all come down to there being a
> > misunderstanding of what Roles were back in 2012, and the two camps
> went
> > off in different directions (roles really being project scoped, and
> > roles meaning global).
> >
> > It would be really great if the inflated context we got was "roles:
> x,
> > y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
> > and oslo.context might be weaving some magic there). I honestly think
> > that until we've got a very clear separation at that level, it's
> going
> > to be really tough to get policy files in projects to be any more
> > sensible in their defaults. Leaking is_admin_project all the way
> through
> > to a service and having them have to consider it for their policy
> (which
> > we do with the context today) definitely feels like a layer
> violation.
> >
> >
> > This is another good point. If we can ensure projects rely on
> > oslo.context to get scope information in a canonical form (like
> > context.scope == 'global' or context.scope == 'project') that might make
> > consuming all this easier. But it does require us to ensure oslo.context
> > does the right thing with various token types. I included some of that
> > information in the spec [0] but I didn't go into great detail. I thought
> > about adding it to the keystone spec but wasn't sure if that would be
> > the right place for it.
> >
> > [0] https://review.openstack.org/#/c/464763
>
> Personally, as someone that has to think about consuming oslo.context, I
> really don't want
> "scope" as a context option. Because now it means role means something
> different.
>
> I want the context to say:
>
> {
>    "user": "me!",
>"project": "some_fun_work",
>"project_roles": ["member"],
>"is_admin": True,
>"roles": ["admin", "auditor"],
>
> }
>
> That's something I can imagine understanding. Because context switching
> on scope and conditionally doing different things in code depending on
> that is something that's going to cause bugs. It's hard code to not get
> wrong.
>
>
Interesting - I guess the way I was thinking about it was on a per-token
basis, since today you can't have a single token represent multiple scopes.
Would it be unreasonable to have oslo.context build this information based
on multiple tokens from the same user, or is that a bad idea?


> -Sean
>
> --
> Sean Dague
> http://dague.net
>


[openstack-dev] [keystone] deprecating the policy and credential APIs

2017-05-26 Thread Lance Bragstad
At the PTG in Atlanta, we talked about deprecating the policy and
credential APIs. The policy API doesn't do anything and secrets shouldn't
be stored in the credential API. Reasoning and outcomes can be found in the
etherpad from the session [0]. There was some progress made on the policy
API [1], but it's missing a couple patches to tempest. Is anyone willing to
carry the deprecation over the finish line for Pike?

According to the outcomes from the session, the credential API needs a
little bit of work before we can deprecate it. It was determined at the PTG
that if keystone absolutely has to store ec2 and totp secrets, they
should be formal first-class attributes of the user (i.e. like how we treat
passwords `user.password`). This would require refactoring the existing
totp and ec2 implementations to use user attributes. Then we could move
forward with deprecating the actual credential API. Depending on the amount
of work required to make .totp and .ec2 formal user attributes, I'd be fine
with pushing the deprecation to Queens if needed.
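As a rough sketch of what "formal first-class attributes of the user" could look like, compared to rows in a generic credential table, here is a hypothetical model. The attribute names (`totp_secret`, `ec2_credential`) are illustrative only, not keystone's actual schema:

```python
# Hypothetical sketch -- attribute names are illustrative, not keystone's
# actual user model or schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    password: Optional[str] = None        # already first-class today
    totp_secret: Optional[str] = None     # proposed: promoted out of the credential API
    ec2_credential: Optional[dict] = None  # proposed: promoted out of the credential API

user = User(name="alice", totp_secret="JBSWY3DPEHPK3PXP")
# The secret lives on the user itself -- no generic credential-table lookup.
assert user.totp_secret is not None
assert user.ec2_credential is None
```

With something like this in place, the generic credential API would no longer be the home for secrets and could be deprecated.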

Does this interest anyone?


[0] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
[1] https://review.openstack.org/#/c/438096/


Re: [openstack-dev] [kolla][osprofiler][keystone][neutron][nova] osprofiler in paste deploy files

2017-05-30 Thread Lance Bragstad
On Mon, May 29, 2017 at 4:08 AM, Matthieu Simonin  wrote:

> Hello,
>
> I'd like to have more insight on OSProfiler support in paste-deploy files
> as it does not seem consistent across projects.
> As a result, the way you can enable it on Kolla side differs. Here are
> some examples:
>
> a) Nova paste.ini already contains OSProfiler middleware[1].
>
> b) Keystone paste.ini doesn't contain OSProfiler but the file is exposed
> in Kolla-ansible.
> Thus it can be overwritten[2] by providing an alternate paste file using a
> node_custom_config directory.
>

I'm looking through keystone's sample paste file we keep in the project and
we do have osprofiler in our v2 and v3 pipelines [0] [1]. It looks like it
has been in keystone's sample paste file since Mitaka [2]


[0]
https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L43-L44
[1]
https://github.com/openstack/keystone/blob/58d7eaca41f83a52e100cbae9afe7d3faf1b9693/etc/keystone-paste.ini#L68
[2]
https://github.com/openstack/keystone/commit/639e36adbfa0f58ce2c3f31856b4343e9197aa0e
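For reference, the relevant pieces of a paste file with osprofiler enabled look roughly like the following. This is paraphrased from the keystone sample linked above; the exact filter factory line and pipeline ordering may differ between releases and projects, so treat it as a sketch rather than a drop-in config:

```ini
[filter:osprofiler]
use = egg:osprofiler#web

[pipeline:api_v3]
# osprofiler sits early in the pipeline so it can trace the whole request
pipeline = ... osprofiler ... service_v3
```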


>
> c) Neutron paste.ini doesn't contain OSProfiler middleware[3]. For
> devstack, a hook can reconfigure the file at deploy time[4].
> For Kolla, it seems that the only solution right now is to rebuild the
> whole docker image.
>
> As a user of Kolla and OSProfiler, a) is the most convenient option.
>
> Regarding b) and c), is it a deliberate choice to ship the paste deploy
> files without OSProfiler middleware?
>
> Do you think we could converge? Ideally having a) for every API service?
>
> Best,
>
> Matt
>
> [1]: https://github.com/openstack/nova/blob/0d31fb303e07b7ed9f55b9c823b43e
> 6db5153ee6/etc/nova/api-paste.ini#L29-L37
> [2]: https://github.com/openstack/kolla-ansible/blob/
> fe61612ec6db469cccf2d2b4f0bd404ad4ced112/ansible/roles/
> keystone/tasks/config.yml#L119
> [3]: https://github.com/openstack/neutron/blob/
> e4557a7793fbf3461bfae36ead41ee4d349920ab/neutron/tests/
> contrib/hooks/osprofiler
> [4]: https://github.com/openstack/neutron/blob/
> e4557a7793fbf3461bfae36ead41ee4d349920ab/etc/api-paste.ini#L6-L9
>


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:

> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
> 
> > Interesting - I guess the way I was thinking about it was on a per-token
> > basis, since today you can't have a single token represent multiple
> > scopes. Would it be unreasonable to have oslo.context build this
> > information based on multiple tokens from the same user, or is that a
> > bad idea?
>
> No service consumer is interacting with Tokens. That's all been
> abstracted away. The code in the consumers is only interested in the
> context representation.
>
> Which is good, because then the important parts are figuring out the
> right context interface to consume. And the right Keystone front end to
> be explicit about what was intended by the operator "make jane an admin
> on compute in region 1".
>
> And the middle can be whatever works best on the Keystone side. As long
> as the details of that aren't leaked out, it can also be refactored in
> the future by having keystonemiddleware+oslo.context translate to the
> known interface.
>

Ok - I think that makes sense. So if I copy/paste your example from earlier
and modify it a bit ( s/is_admin/global/)::

{
   "user": "me!",
   "global": True,
   "roles": ["admin", "auditor"],
   
}

Or

{
   "user": "me!",
   "global": True,
   "roles": ["reader"],
   
}

That might be one way we can represent global roles through
oslo.context/keystonemiddleware. The library would be on the hook for
maintaining the mapping of token scope to context scope, which makes sense:

if token['is_global'] == True:
    context.global = True
elif token['domain_scoped']:
    # domain scoping?
else:
    # handle project scoping

I need to go dig into oslo.context a bit more to get familiar with how this
works on the project level. Because if I understand correctly, oslo.context
currently doesn't relay global scope and that will be a required thing to
get done before this work is useful, regardless of going with option #1,
#2, and especially #3.
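A minimal, runnable version of that mapping might look like the following. The `scope` return values match the `context.scope == 'global'` idea earlier in the thread; the function name and token keys are illustrative, not oslo.context's actual API:

```python
# Sketch of mapping token scope to a canonical context scope value.
# Names and token keys are illustrative, not oslo.context's real interface.

def scope_from_token(token):
    if token.get("is_global"):
        return "global"
    if token.get("domain_id"):
        return "domain"
    return "project"

assert scope_from_token({"is_global": True, "roles": ["reader"]}) == "global"
assert scope_from_token({"domain_id": "default"}) == "domain"
assert scope_from_token({"project_id": "abc123"}) == "project"
```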



> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-31 Thread Lance Bragstad
I took a stab at working through the API a bit more and I've capture that
information in the spec [0]. Rendered version is available, too [1].

[0] https://review.openstack.org/#/c/464763/
[1]
http://docs-draft.openstack.org/63/464763/12/check/gate-keystone-specs-docs-ubuntu-xenial/1dbeb65//doc/build/html/specs/keystone/ongoing/global-roles.html

On Wed, May 31, 2017 at 9:10 AM, Lance Bragstad  wrote:

>
>
> On Fri, May 26, 2017 at 10:21 AM, Sean Dague  wrote:
>
>> On 05/26/2017 10:44 AM, Lance Bragstad wrote:
>> 
>> > Interesting - I guess the way I was thinking about it was on a per-token
>> > basis, since today you can't have a single token represent multiple
>> > scopes. Would it be unreasonable to have oslo.context build this
>> > information based on multiple tokens from the same user, or is that a
>> > bad idea?
>>
>> No service consumer is interacting with Tokens. That's all been
>> abstracted away. The code in the consumers is only interested in the
>> context representation.
>>
>> Which is good, because then the important parts are figuring out the
>> right context interface to consume. And the right Keystone front end to
>> be explicit about what was intended by the operator "make jane an admin
>> on compute in region 1".
>>
>> And the middle can be whatever works best on the Keystone side. As long
>> as the details of that aren't leaked out, it can also be refactored in
>> the future by having keystonemiddleware+oslo.context translate to the
>> known interface.
>>
>
> Ok - I think that makes sense. So if I copy/paste your example from
> earlier and modify it a bit ( s/is_admin/global/)::
>
> {
>"user": "me!",
>"global": True,
>"roles": ["admin", "auditor"],
>
> }
>
> Or
>
> {
>"user": "me!",
>"global": True,
>"roles": ["reader"],
>
> }
>
> That might be one way we can represent global roles through 
> oslo.context/keystonemiddleware.
> The library would be on the hook for maintaining the mapping of token scope
> to context scope, which makes sense:
>
> if token['is_global'] == True:
>     context.global = True
> elif token['domain_scoped']:
>     # domain scoping?
> else:
>     # handle project scoping
>
> I need to go dig into oslo.context a bit more to get familiar with how
> this works on the project level. Because if I understand correctly,
> oslo.context currently doesn't relay global scope and that will be a
> required thing to get done before this work is useful, regardless of going
> with option #1, #2, and especially #3.
>
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>


[openstack-dev] [tc][ptls][all] Potential Queens Goal: Move policy and policy docs into code

2017-06-01 Thread Lance Bragstad
Hi all,

I've proposed a community-wide goal for Queens to move policy into code and
supply documentation for each policy [0]. I've included references to
existing documentation and specifications completed by various projects and
attempted to lay out the benefits for both developers and operators.

I'd greatly appreciate any feedback or discussion.

Thanks!

Lance


[0] https://review.openstack.org/#/c/469954/


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-06-01 Thread Lance Bragstad
On Thu, Jun 1, 2017 at 3:46 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

> We had a very similar conversation multiple times with Keystone cores
> (multi-site Keystone).
>
Geo-rep Galera was suggested first and it was immediately declined (one of
> the reasons was the risk of complete corruption of the Keystone DB everywhere
> in case of an accidental table corruption in one site) by me as well as the
> current customer.
> Right after that I was told many times that federation is the only right
> way to go nowadays.
>

After doing some digging, I found the original specification [0] and the
meeting agenda [1] where we talked about the alternative.

If I recall correctly, I thought I remember the proposal (being able to
specify project IDs at creation time) being driven by not wanting to
replicate all of keystone's backends in multi-region deployments, but still
wanting to validate tokens across regions. Today, if you have a region in
Seattle and region in Sydney, a token obtained from a keystone in Seattle
and validated in Sydney would require both regions to share identity,
resource, and assignment backends (among others depending on what kind of
token it is). The request in the specification would allow only the
identity and role backends to be replicated but the project backend in each
region wouldn't need to be synced or replicated. Instead, operators could
create projects with matching IDs in each region in order for tokens
generated from one to be validated in the other. Most folks involved in the
meeting considered this behavior for project IDs to be a slippery-slope.
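As a sketch of why matching project IDs would make cross-region token validation work without replicating the project backend, consider the following toy model. Region names, IDs, and the validation rule are all illustrative:

```python
# Toy model: each region has its own project store; a token minted in one
# region validates in another only if the project ID exists there too.
regions = {
    "seattle": {"projects": {"p-123"}},
    "sydney": {"projects": {"p-123"}},  # operator created a matching ID
}

def validates(token, region):
    return token["project_id"] in regions[region]["projects"]

token = {"project_id": "p-123", "issued_in": "seattle"}
assert validates(token, "sydney")  # only works because IDs were kept in sync
assert not validates({"project_id": "p-999"}, "sydney")
```

The "slippery slope" concern is visible here too: correctness depends entirely on operators keeping the IDs in sync by hand.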

Federation was brought up because sharing identity information globally,
but not project or role information globally sounded like federation (e.g.
having all your user information in an IdP somewhere and setting up each
region's keystone to federate to the IdP). The group seemed eager to expose
gaps in the federation implementation that prevented that case and address
those.

Hopefully that helps capture some of the context (feel free to fill in gaps
if I missed any).


[0] https://review.openstack.org/#/c/323499/
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-05-31.log.html#t2016-05-31T18:05:05


>
> Is this statement still valid?
>
> On Thu, Jun 1, 2017 at 12:51 PM, Jay Pipes  wrote:
>
>> On 05/31/2017 11:06 PM, Mike Bayer wrote:
>>
>>> I'd also throw in, there's lots of versions of Galera with different
>>> bugfixes / improvements as we go along, not to mention configuration
>>> settings if Jay observes it working great on a distributed cluster and
>>> Clint observes it working terribly, it could be that these were not the
>>> same Galera versions being used.
>>>
>>
>> Agreed. The version of Galera we were using IIRC was Percona XtraDB
>> Cluster 5.6. And, remember that the wsrep_provider_options do make a big
>> difference, especially in WAN-replicated setups.
>>
>> We also increased the tolerance settings for network disruption so that
>> the cluster operated without hiccups over the WAN. I think the
>> wsrep_provider_options setting was evs.inactive_timeout=PT30S,
>> evs.suspect_timeout=PT15S, and evs.join_retrans_period=PT1S.
>>
>> Also, regardless of settings, if your network sucks, none of these
>> distributed databases are going to be fun to operate :)
>>
>> At AT&T, we jumped through a lot of hoops to ensure multiple levels of
>> redundancy and high performance for the network links inside and between
>> datacenters. It really makes a huge difference when your network rocks.
>>
>>
>> Best,
>> -jay
>>
>
>
>
> --
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
>


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
I replied to John, but directly. I'm sending the responses I sent to him
but with the intended audience on the thread. Sorry for not catching that
earlier.


On Fri, May 26, 2017 at 2:44 AM, John Garbutt  wrote:

> +1 on not forcing Operators to transition to something new twice, even if
> we did go for option 3.
>

The more I think about this, the more it worries me from a developer
perspective. If we ended up going with option 3, then we'd be supporting
both methods of elevating privileges. That means two paths for doing the
same thing in keystone. It also means oslo.context, keystonemiddleware, or
any other library consuming tokens that needs to understand elevated
privileges needs to understand both approaches.


>
> Do we have an agreed non-distruptive upgrade path mapped out yet? (For any
> of the options) We spoke about fallback rules you pass but with a warning
> to give us a smoother transition. I think that's my main objection with the
> existing patches, having to tell all admins to get their token for a
> different project, and give them roles in that project, all before being
> able to upgrade.
>

Thanks for bringing up the upgrade case! You've kinda described an upgrade
for option 1. This is what I was thinking for option 2:

- deployment upgrades to a release that supports global role assignments
- operator creates a set of global roles (i.e. global_admin)
- operator grants global roles to various people that need it (i.e. all
admins)
- operator informs admins to create globally scoped tokens
- operator rolls out necessary policy changes

If I'm thinking about this properly, nothing would change at the
project-scope level for existing users (who don't need a global role
assignment). I'm hoping someone can help firm ^ that up or improve it if
needed.
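A toy illustration of the policy-side change in that last step: under option 1 the rule keys off the admin project, under option 2 it keys off global scope. The check functions below are simplified stand-ins, not real oslo.policy check strings:

```python
# Toy policy evaluation -- simplified stand-ins for oslo.policy check strings.

def old_rule(token):
    # roughly: "role:admin and is_admin_project:True"
    return "admin" in token["roles"] and bool(token.get("is_admin_project"))

def new_rule(token):
    # roughly: "role:admin and is_global:True"
    return "admin" in token["roles"] and bool(token.get("is_global"))

admin_project_token = {"roles": ["admin"], "is_admin_project": True}
global_token = {"roles": ["admin"], "is_global": True}

assert old_rule(admin_project_token) and not old_rule(global_token)
assert new_rule(global_token) and not new_rule(admin_project_token)
```

Which is why the policy rollout is the last step of the upgrade: until the rules are switched, existing admin-project tokens keep working unchanged.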


>
> Thanks,
> johnthetubaguy
>
> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
> moreira.belmiro.email.li...@gmail.com> wrote:
>
>> Hi,
>> thanks for bringing this into discussion in the Operators list.
>>
>> Option 1 and 2 are not complementary but completely different.
>> So, considering "Option 2" and the goal to target it for Queens I would
>> prefer not going into a migration path in
>> Pike and then again in Queens.
>>
>> Belmiro
>>
>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>
>>> I think a option 2 is better.
>>>
>>> Best Regards
>>> Chaoyi Huang (joehuang)
>>> --
>>> *From:* Lance Bragstad [lbrags...@gmail.com]
>>> *Sent:* 25 May 2017 3:47
>>> *To:* OpenStack Development Mailing List (not for usage questions);
>>> openstack-operat...@lists.openstack.org
>>> *Subject:* Re: [openstack-dev] [keystone][nova][cinder][
>>> glance][neutron][horizon][policy] defining admin-ness
>>>
>>> I'd like to fill in a little more context here. I see three options with
>>> the current two proposals.
>>>
>>> *Option 1*
>>>
>>> Use a special admin project to denote elevated privileges. For those
>>> unfamiliar with the approach, it would rely on every deployment having an
>>> "admin" project defined in configuration [0].
>>>
>>> *How it works:*
>>>
>>> Role assignments on this project represent global scope which is denoted
>>> by a boolean attribute in the token response. A user with an 'admin' role
>>> assignment on this project is equivalent to the global or cloud
>>> administrator. Ideally, if a user has a 'reader' role assignment on the
>>> admin project, they could have access to list everything within the
>>> deployment, pending all the proper changes are made across the various
>>> services. The workflow requires a special project for any sort of elevated
>>> privilege.
>>>
>>> Pros:
>>> - Almost all the work is done to make keystone understand the admin
>>> project; there are already several patches in review to other projects to
>>> consume this
>>> - Operators can create roles and assign them to the admin_project as
>>> needed after the upgrade to give proper global scope to their users
>>>
>>> Cons:
>>> - All global assignments are linked back to a single project
>>> - Describing the flow is confusing because in order to give someone
>>> global access you have to give them a role assignment on a very specific
>>> project, which seems like an anti-pattern
>>> - We currently don't allow some things to exist in the global sense
>>> (i.e. I can't launch instances without tenancy), the a

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
Also, with all the people involved with this thread, I'm curious what the
best way is to get consensus. If I've tallied the responses properly, we
have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
freeze for keystone, so I see a slim chance of this getting committed to
Pike [0]. If we do have spare cycles across the team we could start working
on an early version and get eyes on it. If we straighten out everyone's
concerns early we could land option #2 early in Queens.

I guess it comes down to how fast folks want it.

[0] https://review.openstack.org/#/c/464763/

On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad  wrote:

> I replied to John, but directly. I'm sending the responses I sent to him
> but with the intended audience on the thread. Sorry for not catching that
> earlier.
>
>
> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
> wrote:
>
>> +1 on not forcing Operators to transition to something new twice, even if
>> we did go for option 3.
>>
>
> The more I think about this, the more it worries me from a developer
> perspective. If we ended up going with option 3, then we'd be supporting
> both methods of elevating privileges. That means two paths for doing the
> same thing in keystone. It also means oslo.context, keystonemiddleware, or
> any other library consuming tokens that needs to understand elevated
> privileges needs to understand both approaches.
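That "two paths" concern can be sketched in a few lines: a toy consumer that had to honor both option #1 and option #2 tokens would carry two parallel checks forever. The field names (`is_admin_project`, `global`) are assumptions for illustration only, not the actual token schema:

```python
def has_elevated_privileges(token, admin_role="admin"):
    """Toy check a token-consuming library would need under option #3:
    it must understand BOTH the admin-project flag (option #1) and
    global role assignments (option #2)."""
    roles = token.get("roles", [])
    via_admin_project = token.get("is_admin_project", False) and admin_role in roles
    via_global_scope = token.get("global", False) and admin_role in roles
    return via_admin_project or via_global_scope

assert has_elevated_privileges({"is_admin_project": True, "roles": ["admin"]})
assert has_elevated_privileges({"global": True, "roles": ["admin"]})
assert not has_elevated_privileges({"project_id": "p", "roles": ["admin"]})
```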
>
>
>>
>> Do we have an agreed non-disruptive upgrade path mapped out yet? (For
>> any of the options) We spoke about fallback rules you pass but with a
>> warning to give us a smoother transition. I think that's my main objection
>> with the existing patches, having to tell all admins to get their token for
>> a different project, and give them roles in that project, all before being
>> able to upgrade.
>>
>
> Thanks for bringing up the upgrade case! You've kinda described an upgrade
> for option 1. This is what I was thinking for option 2:
>
> - deployment upgrades to a release that supports global role assignments
> - operator creates a set of global roles (i.e. global_admin)
> - operator grants global roles to various people that need it (i.e. all
> admins)
> - operator informs admins to create globally scoped tokens
> - operator rolls out necessary policy changes
>
> If I'm thinking about this properly, nothing would change at the
> project-scope level for existing users (who don't need a global role
> assignment). I'm hoping someone can help firm ^ that up or improve it if
> needed.
>
>
>>
>> Thanks,
>> johnthetubaguy
>>
>> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
>> moreira.belmiro.email.li...@gmail.com> wrote:
>>
>>> Hi,
>>> thanks for bringing this into discussion in the Operators list.
>>>
>>> Option 1 and 2 are not complementary but completely different.
>>> So, considering "Option 2" and the goal to target it for Queens I would
>>> prefer not going into a migration path in
>>> Pike and then again in Queens.
>>>
>>> Belmiro
>>>
>>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>>
>>>> I think option 2 is better.
>>>>
>>>> Best Regards
>>>> Chaoyi Huang (joehuang)
>>>> --
>>>> *From:* Lance Bragstad [lbrags...@gmail.com]
>>>> *Sent:* 25 May 2017 3:47
>>>> *To:* OpenStack Development Mailing List (not for usage questions);
>>>> openstack-operat...@lists.openstack.org
>>>> *Subject:* Re: [openstack-dev] 
>>>> [keystone][nova][cinder][glance][neutron][horizon][policy]
>>>> defining admin-ness
>>>>
>>>> I'd like to fill in a little more context here. I see three options
>>>> with the current two proposals.
>>>>
>>>> *Option 1*
>>>>
>>>> Use a special admin project to denote elevated privileges. For those
>>>> unfamiliar with the approach, it would rely on every deployment having an
>>>> "admin" project defined in configuration [0].
>>>>
>>>> *How it works:*
>>>>
>>>> Role assignments on this project represent global scope which is
>>>> denoted by a boolean attribute in the token response. A user with an
>>>> 'admin' role assignment on this project is equivalent to the global or
>>>> cloud administrator. Ideally, if a user has a 'reader' role assignment on
>>>> the admin project, they could have access to list everything within the
&

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann 
wrote:

> Hi,
>
> On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
>
> Also, with all the people involved with this thread, I'm curious what the
> best way is to get consensus. If I've tallied the responses properly, we
> have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
> freeze for keystone, so I see a slim chance of this getting committed to
> Pike [0]. If we do have spare cycles across the team we could start working
> on an early version and get eyes on it. If we straighten out everyone's
> concerns early we could land option #2 early in Queens.
>
>
> I was the only one in favour of option 3 only because I've spent a bunch
> of time playing with option #1 in the past. As I mentioned previously in
> the thread, if #2 is more in line with where the project is going, then I'm
> all for it. At this point, the admin scope issue has been around long
> enough that Queens doesn't seem that far off.
>

From an administrative point-of-view, would you consider option #1 or
option #2 to be better long term?


>
> -m
>
>
> I guess it comes down to how fast folks want it.
>
> [0] https://review.openstack.org/#/c/464763/
>
> On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad 
> wrote:
>
> I replied to John, but directly. I'm sending the responses I sent to him
> but with the intended audience on the thread. Sorry for not catching that
> earlier.
>
>
> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
> wrote:
>
> +1 on not forcing Operators to transition to something new twice, even if
> we did go for option 3.
>
>
> The more I think about this, the more it worries me from a developer
> perspective. If we ended up going with option 3, then we'd be supporting
> both methods of elevating privileges. That means two paths for doing the
> same thing in keystone. It also means oslo.context, keystonemiddleware, or
> any other library consuming tokens that needs to understand elevated
> privileges needs to understand both approaches.
>
>
>
> Do we have an agreed non-disruptive upgrade path mapped out yet? (For any
> of the options) We spoke about fallback rules you pass but with a warning
> to give us a smoother transition. I think that's my main objection with the
> existing patches, having to tell all admins to get their token for a
> different project, and give them roles in that project, all before being
> able to upgrade.
>
>
> Thanks for bringing up the upgrade case! You've kinda described an upgrade
> for option 1. This is what I was thinking for option 2:
>
> - deployment upgrades to a release that supports global role assignments
> - operator creates a set of global roles (i.e. global_admin)
> - operator grants global roles to various people that need it (i.e. all
> admins)
> - operator informs admins to create globally scoped tokens
> - operator rolls out necessary policy changes
>
> If I'm thinking about this properly, nothing would change at the
> project-scope level for existing users (who don't need a global role
> assignment). I'm hoping someone can help firm ^ that up or improve it if
> needed.
>
>
>
> Thanks,
> johnthetubaguy
>
> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
> moreira.belmiro.email.li...@gmail.com> wrote:
>
> Hi,
> thanks for bringing this into discussion in the Operators list.
>
> Option 1 and 2 are not complementary but completely different.
> So, considering "Option 2" and the goal to target it for Queens I would
> prefer not going into a migration path in
> Pike and then again in Queens.
>
> Belmiro
>
> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>
> I think option 2 is better.
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Lance Bragstad [lbrags...@gmail.com]
> *Sent:* 25 May 2017 3:47
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [openstack-dev] 
> [keystone][nova][cinder][glance][neutron][horizon][policy]
> defining admin-ness
>
> I'd like to fill in a little more context here. I see three options with
> the current two proposals.
>
> *Option 1*
>
> Use a special admin project to denote elevated privileges. For those
> unfamiliar with the approach, it would rely on every deployment having an
> "admin" project defined in configuration [0].
>
> *How it works:*
>
> Role assignments on this project represent global scope which is denoted
> by a boolean attribute in the token response. A user with an 'admin' role
> assignment o

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-08 Thread Lance Bragstad
Ok - based on the responses in the thread here, I've re-proposed the global
roles specification to keystone's backlog [0]. I'll start working on the
implementation and get something in review as soon as possible. I'll plan
to move the specification from backlog to Queens once development opens.

Thanks for all the feedback and patience.


[0] https://review.openstack.org/#/c/464763/

On Tue, Jun 6, 2017 at 4:39 PM, Marc Heckmann 
wrote:

> On Tue, 2017-06-06 at 17:01 -0400, Erik McCormick wrote:
> > On Tue, Jun 6, 2017 at 4:44 PM, Lance Bragstad 
> > wrote:
> > >
> > >
> > > On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann  > > t.com>
> > > wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
> > > >
> > > > Also, with all the people involved with this thread, I'm curious
> > > > what the
> > > > best way is to get consensus. If I've tallied the responses
> > > > properly, we
> > > > have 5 in favor of option #2 and 1 in favor of option #3. This
> > > > week is spec
> > > > freeze for keystone, so I see a slim chance of this getting
> > > > committed to
> > > > Pike [0]. If we do have spare cycles across the team we could
> > > > start working
> > > > on an early version and get eyes on it. If we straighten out
> > > > everyone's
> > > > concerns early we could land option #2 early in Queens.
> > > >
> > > >
> > > > I was the only one in favour of option 3 only because I've spent
> > > > a bunch
> > > > of time playing with option #1 in the past. As I mentioned
> > > > previously in the
> > > > thread, if #2 is more in line with where the project is going,
> > > > then I'm all
> > > > for it. At this point, the admin scope issue has been around long
> > > > enough
> > > > that Queens doesn't seem that far off.
> > >
> > >
> > > From an administrative point-of-view, would you consider option #1
> > > or option
> > > #2 to be better long term?
>
> #2
>
> > >
> >
> > Count me as another +1 for option 2. It's the right way to go long
> > term, and we've lived with how it is now long enough that I'm OK
> > waiting a release or even 2 more for it with things as is. I think
> > option 3 would just muddy the waters.
> >
> > -Erik
> >
> > > >
> > > >
> > > > -m
> > > >
> > > >
> > > > I guess it comes down to how fast folks want it.
> > > >
> > > > [0] https://review.openstack.org/#/c/464763/
> > > >
> > > > On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad  > > > com>
> > > > wrote:
> > > >
> > > > I replied to John, but directly. I'm sending the responses I sent
> > > > to him
> > > > but with the intended audience on the thread. Sorry for not
> > > > catching that
> > > > earlier.
> > > >
> > > >
> > > > On Fri, May 26, 2017 at 2:44 AM, John Garbutt  > > > om>
> > > > wrote:
> > > >
> > > > +1 on not forcing Operators to transition to something new twice,
> > > > even if
> > > > we did go for option 3.
> > > >
> > > >
> > > > The more I think about this, the more it worries me from a
> > > > developer
> > > > perspective. If we ended up going with option 3, then we'd be
> > > > supporting
> > > > both methods of elevating privileges. That means two paths for
> > > > doing the
> > > > same thing in keystone. It also means oslo.context,
> > > > keystonemiddleware, or
> > > > any other library consuming tokens that needs to understand
> > > > elevated
> > > > privileges needs to understand both approaches.
> > > >
> > > >
> > > >
> > > > Do we have an agreed non-disruptive upgrade path mapped out yet?
> > > > (For any
> > > > of the options) We spoke about fallback rules you pass but with a
> > > > warning to
> > > > give us a smoother transition. I think that's my main objection
> > > > with the
> > > > existing patches, having to tell all admins to get their token
> > > > for a
> > &g

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Lance Bragstad
After digging into etcd a bit, one place this might help the deployer
experience would be the handling of fernet keys for token encryption in
keystone. Currently, all keys used to encrypt and decrypt tokens are kept
on disk for each keystone node in the deployment. While simple, it requires
operators to perform rotation on a single node and then push, or sync, the
new key set to the rest of the nodes. This must be done in lock step in
order to prevent early token invalidation and inconsistent token responses.

An alternative would be to keep the keys in etcd and make the fernet bits
pluggable so that it's possible to read keys from disk or etcd (pending
configuration). The advantage would be that operators could initiate key
rotations from any keystone node in the deployment (or using etcd directly)
and not have to worry about distributing the new key set. Since etcd
associates metadata to the key-value pairs, we might be able to simplify
the rotation strategy as well.
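As a sketch of that idea: publish the key repository into a key-value store and have every node read it back, with the rotation index preserved in the key path. The `/keystone/fernet-keys/` layout and the minimal client interface below are assumptions for illustration; a real implementation would use an actual etcd3 client and deal with authentication, watches, and failure handling:

```python
class FakeKV:
    """Stand-in for an etcd-like client exposing put()/get_prefix()."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get_prefix(self, prefix):
        return sorted(
            (k, v) for k, v in self._data.items() if k.startswith(prefix)
        )

def publish_keys(client, keys, prefix="/keystone/fernet-keys/"):
    # keys maps rotation index -> key material; index 0 is the staged key.
    for index, material in keys.items():
        client.put(prefix + str(index), material)

def load_keys(client, prefix="/keystone/fernet-keys/"):
    # Every node rebuilds the same repository from the shared store.
    return {int(k.rsplit("/", 1)[-1]): v for k, v in client.get_prefix(prefix)}

kv = FakeKV()
publish_keys(kv, {0: "staged-key", 1: "primary-key"})
assert load_keys(kv) == {0: "staged-key", 1: "primary-key"}
```

The rotation initiated on any node becomes a single `publish_keys()` call instead of a file sync across the whole cluster.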

On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  wrote:

>
>
> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>
>> So just out of curiosity, but do people really even know what etcd is
>> good for? I am thinking that there should be some guidance from folks in
>> the community as to where etcd should be used and where it shouldn't
>> (otherwise we just all end up in a mess).
>>
>
> So far I've seen a proposal of etcd3 as a replacement for memcached in
> keystone, and a new dogpile connector was added to oslo.cache to handle
> referring to etcd3 as a cache backend.  This is a really simplistic /
> minimal kind of use case for a key-store.
>
> But, keeping in mind I don't know anything about etcd3 other than "it's
> another key-store", it's the only database used by Kubernetes as a whole,
> which suggests it's doing a better job than Redis in terms of "durable".
>  So I wouldn't be surprised if new / existing openstack applications
> express some gravitational pull towards using it as their own datastore as
> well.I'll be trying to hang onto the etcd3 track as much as possible so
> that if/when that happens I still have a job :).
>
>
>
>
>
>> Perhaps a good idea to actually give examples of how it should be used,
>> how it shouldn't be used, what it offers, what it doesn't... Or at least
>> provide links for people to read up on this.
>>
>> Thoughts?
>>
>> Davanum Srinivas wrote:
>>
>>> One clarification: Since https://pypi.python.org/pypi/etcd3gw just
>>> uses the HTTP API (/v3alpha) it will work under both eventlet and
>>> non-eventlet environments.
>>>
>>> Thanks,
>>> Dims
>>>
>>>
>>> On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
>>> wrote:
>>>
 Team,

 Here's the update to the base services resolution from the TC:
 https://governance.openstack.org/tc/reference/base-services.html

 First request is to Distros, Packagers, Deployers, anyone who
 installs/configures OpenStack:
 Please make sure you have latest etcd 3.x available in your
 environment for Services to use, Fedora already does, we need help in
 making sure all distros and architectures are covered.

 Any project who want to use etcd v3 API via grpc, please use:
 https://pypi.python.org/pypi/etcd3 (works only for non-eventlet
 services)

 Those that depend on eventlet, please use the etcd3 v3alpha HTTP API
 using:
 https://pypi.python.org/pypi/etcd3gw

 If you use tooz, there are 2 driver choices for you:
 https://github.com/openstack/tooz/blob/master/setup.cfg#L29
 https://github.com/openstack/tooz/blob/master/setup.cfg#L30

 If you use oslo.cache, there is a driver for you:
 https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

 Devstack installs etcd3 by default and points cinder to it:
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/li
 b/cinder#n356

 Review in progress for keystone to use etcd3 for caching:
 https://review.openstack.org/#/c/469621/

 Doug is working on proposal(s) for oslo.config to store some
 configuration in etcd3:
 https://review.openstack.org/#/c/454897/

 So, feel free to turn on / test with etcd3 and report issues.

 Thanks,
 Dims

 --
 Davanum Srinivas :: https://twitter.com/dims

>>>
>>>
>>>
>>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Lance Bragstad
On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi  wrote:

> On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad 
> wrote:
> > After digging into etcd a bit, one place this might help the deployer
> > experience would be the handling of fernet keys for token encryption in
> > keystone. Currently, all keys used to encrypt and decrypt tokens are
> kept on
> > disk for each keystone node in the deployment. While simple, it requires
> > operators to perform rotation on a single node and then push, or sync,
> the
> > new key set to the rest of the nodes. This must be done in lock step in
> > order to prevent early token invalidation and inconsistent token
> responses.
>
> This is what we discussed a few months ago :-)
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113943.html
>
> I'm glad it's coming back ;-)
>

Yep! I've proposed a pretty basic spec to backlog [0] in an effort to
capture the discussion. I've also noted the point Kevin brought up about
authorization in etcd (thanks, Kevin!)

If someone feels compelled to take that and run with it, they are more than
welcome to.

[0] https://review.openstack.org/#/c/472385/


> > An alternative would be to keep the keys in etcd and make the fernet bits
> > pluggable so that it's possible to read keys from disk or etcd (pending
> > configuration). The advantage would be that operators could initiate key
> > rotations from any keystone node in the deployment (or using etcd
> directly)
> > and not have to worry about distributing the new key set. Since etcd
> > associates metadata to the key-value pairs, we might be able to simplify
> the
> > rotation strategy as well.
> >
> > On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  wrote:
> >>
> >>
> >>
> >> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
> >>>
> >>> So just out of curiosity, but do people really even know what etcd is
> >>> good for? I am thinking that there should be some guidance from folks
> in the
> >>> community as to where etcd should be used and where it shouldn't
> (otherwise
> >>> we just all end up in a mess).
> >>
> >>
> >> So far I've seen a proposal of etcd3 as a replacement for memcached in
> >> keystone, and a new dogpile connector was added to oslo.cache to handle
> >> referring to etcd3 as a cache backend.  This is a really simplistic /
> >> minimal kind of use case for a key-store.
> >>
> >> But, keeping in mind I don't know anything about etcd3 other than "it's
> >> another key-store", it's the only database used by Kubernetes as a
> whole,
> >> which suggests it's doing a better job than Redis in terms of "durable".
> >> So I wouldn't be surprised if new / existing openstack applications
> express
> >> some gravitational pull towards using it as their own datastore as well.
> >> I'll be trying to hang onto the etcd3 track as much as possible so that
> >> if/when that happens I still have a job :).
> >>
> >>
> >>
> >>
> >>>
> >>> Perhaps a good idea to actually give examples of how it should be used,
> >>> how it shouldn't be used, what it offers, what it doesn't... Or at
> least
> >>> provide links for people to read up on this.
> >>>
> >>> Thoughts?
> >>>
> >>> Davanum Srinivas wrote:
> >>>>
> >>>> One clarification: Since https://pypi.python.org/pypi/etcd3gw just
> >>>> uses the HTTP API (/v3alpha) it will work under both eventlet and
> >>>> non-eventlet environments.
> >>>>
> >>>> Thanks,
> >>>> Dims
> >>>>
> >>>>
> >>>> On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
> >>>> wrote:
> >>>>>
> >>>>> Team,
> >>>>>
> >>>>> Here's the update to the base services resolution from the TC:
> >>>>> https://governance.openstack.org/tc/reference/base-services.html
> >>>>>
> >>>>> First request is to Distros, Packagers, Deployers, anyone who
> >>>>> installs/configures OpenStack:
> >>>>> Please make sure you have latest etcd 3.x available in your
> >>>>> environment for Services to use, Fedora already does, we need help in
> >>>>> making sure all distros and architectures are covered.
> >>>>>
> >>>>> An

[openstack-dev] [keystone] Specification Freeze

2017-06-08 Thread Lance Bragstad
Happy Stanley-Cup-Playoff-Game-5 Day,

Sending out a quick reminder that tomorrow is specification freeze. I'll be
making a final push for specifications that target Pike work tomorrow. I'd
also like to merge others to backlog as we see fit.

By EOD tomorrow, I'll go through and put procedural -2's on the remaining
specs.

Thanks,

Lance
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Lance Bragstad
We have a review in flight to release python-keystoneclient [0]. Thanks for
the reminder!

[0] https://review.openstack.org/#/c/472667/

On Fri, Jun 9, 2017 at 9:39 AM, Doug Hellmann  wrote:

> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. Please review the list below,
> and if there are changes on master since the last release prepare
> a release request.  Remember that because of the way our CI system
> works, patches that land in libraries are not used in tests for
> services that use the libs unless the library has a release and the
> constraints list is updated.
>
> Doug
>
> glance-store
> instack
> pycadf
> python-barbicanclient
> python-ceilometerclient
> python-congressclient
> python-designateclient
> python-keystoneclient
> python-magnumclient
> python-searchlightclient
> python-swiftclient
> python-tackerclient
> requestsexceptions
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Lance Bragstad
Just pushed a release for pycadf as well [1].

[1] https://review.openstack.org/#/c/472717/

On Fri, Jun 9, 2017 at 9:43 AM, Lance Bragstad  wrote:

> We have a review in flight to release python-keystoneclient [0]. Thanks
> for the reminder!
>
> [0] https://review.openstack.org/#/c/472667/
>
> On Fri, Jun 9, 2017 at 9:39 AM, Doug Hellmann 
> wrote:
>
>> We have several teams with library deliverables that haven't seen
>> any releases at all yet this cycle. Please review the list below,
>> and if there are changes on master since the last release prepare
>> a release request.  Remember that because of the way our CI system
>> works, patches that land in libraries are not used in tests for
>> services that use the libs unless the library has a release and the
>> constraints list is updated.
>>
>> Doug
>>
>> glance-store
>> instack
>> pycadf
>> python-barbicanclient
>> python-ceilometerclient
>> python-congressclient
>> python-designateclient
>> python-keystoneclient
>> python-magnumclient
>> python-searchlightclient
>> python-swiftclient
>> python-tackerclient
>> requestsexceptions
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Lance Bragstad
On Fri, Jun 9, 2017 at 9:57 AM, Mike Bayer  wrote:

>
>
> On 06/08/2017 01:34 PM, Lance Bragstad wrote:
>
>> After digging into etcd a bit, one place this might help the deployer
>> experience would be the handling of fernet keys for token encryption in
>> keystone. Currently, all keys used to encrypt and decrypt tokens are kept
>> on disk for each keystone node in the deployment. While simple, it requires
>> operators to perform rotation on a single node and then push, or sync, the
>> new key set to the rest of the nodes. This must be done in lock step in
>> order to prevent early token invalidation and inconsistent token responses.
>>
>> An alternative would be to keep the keys in etcd and make the fernet bits
>> pluggable so that it's possible to read keys from disk or etcd (pending
>> configuration). The advantage would be that operators could initiate key
>> rotations from any keystone node in the deployment (or using etcd directly)
>> and not have to worry about distributing the new key set. Since etcd
>> associates metadata to the key-value pairs, we might be able to simplify
>> the rotation strategy as well.
>>
>
> Interesting, I had the misconception that "fernet" keys no longer
> required any server-side storage (how is "kept-on-disk" now implemented?).


Currently - the keys used to encrypt and decrypt fernet tokens are stored
as files on the keystone server. The repository's default location is in
`/etc/keystone/fernet-keys`. The size of this repository is regulated by
the rotation process we provide in keystone-manage tooling [0].
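For context, that rotation can be modeled roughly like this: the staged key at index 0 is promoted to become the new primary (the highest index), a fresh staged key is written at index 0, and the oldest secondary keys are dropped once the repository exceeds max_active_keys. This is a simplified sketch, not keystone's implementation:

```python
import secrets

def rotate(repo, max_active_keys=3):
    """One rotation step over a dict mapping key index -> key material."""
    new_primary = max(repo) + 1
    repo[new_primary] = repo.pop(0)      # promote the staged key to primary
    repo[0] = secrets.token_urlsafe(32)  # create a fresh staged key
    while len(repo) > max_active_keys:   # purge the oldest secondary keys
        del repo[min(k for k in repo if k != 0)]
    return repo

repo = {0: "staged", 1: "primary"}
rotate(repo)                 # keys are now {0, 1, 2}
rotate(repo)                 # key 1 is purged once max_active_keys is exceeded
assert set(repo) == {0, 2, 3}
```

Because a staged key can already decrypt before it ever encrypts, distributing the new key set ahead of promotion is what keeps tokens valid across all nodes during rotation.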


> We've had continuous issues with the pre-fernet Keystone tokens filling up
> databases, even when operators were correctly expunging old tokens; some
> environments just did so many requests that the keystone-token table still
> blew up to where MySQL can no longer delete from it without producing a
> too-large transaction for Galera.
>

Yep - we actually just fixed a bug related to this [1].


>
> So after all the "finally fernet solves this problem" we propose, hey lets
> put them *back* in the database :).  That's great.  But, lets please not
> leave "cleaning out old tokens" as some kind of cron/worry-about-it-later
> thing.  that was a terrible architectural decision, with apologies to
> whoever made it.if you're putting some kind of "we create an infinite,
> rapidly growing, turns-to-garbage-in-30-seconds" kind of data in a
> database, removing that data robustly and ASAP needs to be part of the
> process.
>
>
I should have clarified. The idea was to put the keys used to encrypt and
decrypt the tokens in etcd so that synchronizing the repository across a
cluster of keystone nodes is easier for operators (but not without other
operator pain as Kevin pointed out). The tokens themselves will remain
completely non-persistent. Fernet key creation is explicitly controlled by
operators and isn't something that end users generate.

[0]
https://github.com/openstack/keystone/blob/c528539879e824b8e6d5654292a85ccbee6dcf89/keystone/conf/fernet_tokens.py#L44-L54
[1] https://launchpad.net/bugs/1649616


>
>
>
>
>
>> On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer > mba...@redhat.com>> wrote:
>>
>>
>>
>> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>>
>> So just out of curiosity, but do people really even know what
>> etcd is good for? I am thinking that there should be some
>> guidance from folks in the community as to where etcd should be
>> used and where it shouldn't (otherwise we just all end up in a
>> mess).
>>
>>
>> So far I've seen a proposal of etcd3 as a replacement for memcached
>> in keystone, and a new dogpile connector was added to oslo.cache to
>> handle referring to etcd3 as a cache backend.  This is a really
>> simplistic / minimal kind of use case for a key-store.
>>
>> But, keeping in mind I don't know anything about etcd3 other than
>> "it's another key-store", it's the only database used by Kubernetes
>> as a whole, which suggests it's doing a better job than Redis in
>> terms of "durable".   So I wouldn't be surprised if new / existing
>> openstack applications express some gravitational pull towards using
>> it as their own datastore as well.I'll be trying to hang onto
>> the etcd3 track as much as possible so that if/when that happens I
>> still have a job :).
>>
>>
>>
>>
>>
>> Perhaps a good idea to actually give examples of

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-09 Thread Lance Bragstad
On Fri, Jun 9, 2017 at 11:17 AM, Clint Byrum  wrote:

> Excerpts from Lance Bragstad's message of 2017-06-08 16:10:00 -0500:
> > On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi 
> wrote:
> >
> > > On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad 
> > > wrote:
> > > > After digging into etcd a bit, one place this might help the deployer
> > > > experience would be the handling of fernet keys for token encryption
> in
> > > > keystone. Currently, all keys used to encrypt and decrypt tokens are
> > > kept on
> > > > disk for each keystone node in the deployment. While simple, it
> requires
> > > > operators to perform rotation on a single node and then push, or
> sync,
> > > the
> > > > new key set to the rest of the nodes. This must be done in lock step
> in
> > > > order to prevent early token invalidation and inconsistent token
> > > responses.
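Keystone's key repository is a directory of numbered files where `0` is the staged key and the highest index is the primary; the lock-step rotation described above can be sketched roughly as follows (a simplified stdlib illustration of the staged/primary/secondary scheme, not keystone's actual code — real deployments should use `keystone-manage fernet_rotate`):

```python
import base64
import os


def create_key_file(key_dir, name):
    # A fernet key is 32 random bytes, url-safe base64 encoded.
    key = base64.urlsafe_b64encode(os.urandom(32))
    with open(os.path.join(key_dir, name), 'wb') as f:
        f.write(key)


def rotate_keys(key_dir, max_active_keys=3):
    """One rotation step over a repository of numbered key files.

    File ``0`` holds the staged key (the next primary), the highest
    index is the current primary used to encrypt new tokens, and the
    rest are secondaries kept around to decrypt older tokens.
    """
    indexes = sorted(int(name) for name in os.listdir(key_dir))
    if 0 in indexes:
        # Promote the staged key (0) to become the new primary.
        new_primary = max(indexes) + 1
        os.rename(os.path.join(key_dir, '0'),
                  os.path.join(key_dir, str(new_primary)))
    # Stage a fresh key as 0 for the *next* rotation.
    create_key_file(key_dir, '0')
    # Purge the oldest secondary keys once we exceed the cap.
    indexes = sorted(int(name) for name in os.listdir(key_dir))
    excess = len(indexes) - max_active_keys
    if excess > 0:
        # indexes[0] is the staged key 0; drop the oldest keys after it.
        for idx in indexes[1:1 + excess]:
            os.remove(os.path.join(key_dir, str(idx)))
```

The lock-step requirement falls out of this scheme: because a key is staged everywhere before any node promotes it to primary, every node can already decrypt tokens encrypted with the new key by the time it is first used.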
> > >
> > > This is what we discussed a few months ago :-)
> > >
> > > http://lists.openstack.org/pipermail/openstack-dev/2017-
> March/113943.html
> > >
> > > I'm glad it's coming back ;-)
> > >
> >
> > Yep! I've proposed a pretty basic spec to backlog [0] in an effort to
> > capture the discussion. I've also noted the point Kevin brought up about
> > authorization in etcd (thanks, Kevin!)
> >
> > If someone feels compelled to take that and run with it, they are more
> than
> > welcome to.
> >
> > [0] https://review.openstack.org/#/c/472385/
> >
>
> I commented on the spec. I think this is a misguided idea. etcd3 is a
> _coordination_ service. Not a key manager. It lacks the audit logging
> and access control one expects to protect and manage key material. I'd
> much rather see something like Hashicorp's Vault [1] implemented for
> Fernet keys than etcd3. We even have a library for such things called
> Castellan[2].
>

Great point, and thanks for leaving it in the spec. I'm glad we're getting
this documented, since this specific discussion has cropped up a couple of
times.


>
> [1] https://www.vaultproject.io/
> [2] https://docs.openstack.org/developer/castellan/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [rally][no-admin] Finally Rally can be run without admin user

2017-06-14 Thread Lance Bragstad
On Tue, Jun 13, 2017 at 3:51 PM, Morgan Fainberg 
wrote:

> On Tue, Jun 13, 2017 at 1:04 PM, Boris Pavlovic  wrote:
> > Hi stackers,
> >
> > Intro
> >
> > Initially Rally was targeted at developers, which meant running it as
> > admin was OK.
> > Admin was basically used to simplify preparing the environment for testing:
> > creating and setting up users/tenants, networks, quotas, and other
> > resources that require the admin role.
> > It was also used to clean up all resources after a test was executed.
> >
> > Problem
> >
> > More and more operators were running Rally against their production
> > environments, and they were not happy about having to provide admin
> > credentials; they would rather prepare the environment by hand and provide
> > existing users than allow Rally to mess with admin rights =)
> >
> > Solution
> >
> > After years of refactoring we changed almost everything ;) and we managed
> > to keep Rally as simple as it was while supporting both operators' and
> > developers' needs.
> >
> > Now Rally supports 3 different modes:
> >
> > admin mode -> Rally manages users that are used for testing
> > admin + existing users mode -> Rally uses existing users for testing (if
> no
> > user context)
> > [new one] existing users mode -> Rally uses existing users for testing
> >
> > In every mode the input task will look the same; however, in
> > existing-users-only mode you won't be able to use plugins that require
> > the admin role.
> >
> > This patch finishes the work: https://review.openstack.org/#/c/465495/
> >
> > Thanks to everybody that was involved in this huge effort!
> >
> >
> > Best regards,
> > Boris Pavlovic
> >
>
> Good work, and fantastic news. This will make rally a more interesting
> tool to use against real-world deployments.
>
> Congrats on a job well done.
>

I completely agree here. Nice work!


> --Morgan
>


Re: [openstack-dev] [all] Policy rules for APIs based on "domain_id"

2017-06-20 Thread Lance Bragstad
Domain support hasn't really been adopted across various OpenStack
projects, yet. Ocata was the first release where we had a v3-only
jenkins job set up for projects to run against (domains are a v3-only
concept in keystone and don't really exist in v2.0).

I think it would be great to push on some of that work so that we can
start working the concept of domain-scope into various services. I'd be
happy to help here. John Garbutt had some good ideas on this track, too.

https://review.openstack.org/#/c/433037/
https://review.openstack.org/#/c/427872/
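For anyone unfamiliar with what keystone already supports here, domain-based checks in oslo.policy syntax look roughly like the following (adapted from keystone's old policy.v3cloudsample sample file — treat the exact rule names as illustrative, not as defaults any project ships):

```yaml
# "domain_id" on the left-hand side is taken from the request's token
# credentials; the %(...)s substitution comes from the target of the call.
"admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s"
"identity:list_users": "rule:admin_and_matching_domain_id"
```

The gap Valeriy describes below is that other services' policy enforcement never passes a domain-scoped target, so a rule like this has nothing to match against outside of keystone.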

On 06/20/2017 08:59 AM, Valeriy Ponomaryov wrote:
> Also, one additional kind of "feature-request" is to be able to
> filter each project's entities per domain, just as we can do with
> project/tenant now.
>
> So, as a result, we will be able to configure different "list" APIs to
> return objects grouped by either domain or project.
>
> Thoughts?
>
> On Tue, Jun 20, 2017 at 1:07 PM, Adam Heczko  > wrote:
>
> Hello Valeriy,
> agree, that would be very useful. I think that this deserves
> attention and cross project discussion.
> Maybe a community goal process [2] is a valid path forward in this
> regard.
>
> [2] https://governance.openstack.org/tc/goals/
> 
>
> On Tue, Jun 20, 2017 at 11:15 AM, Valeriy Ponomaryov wrote:
>
> Hello OpenStackers,
>
> I wanted to draw attention to one of the restrictions in OpenStack.
> It turns out that it is impossible to define policy rules for
> API services based on "domain_id".
> As far as I know, only Keystone supports it.
>
> So, it is unclear whether this is intended or whether it is just
> technical debt that each OpenStack project should
> eliminate.
>
> For the moment, I filed bug [1].
>
> The use case is the following: Keystone API v3 is used all over the
> cloud, and the level of trust is the domain, not the project.
>
> And if it is technical debt, how interested are the different teams
> in having such a possibility?
>
> [1] https://bugs.launchpad.net/nova/+bug/1699060
> 
>
> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com 
> vponomar...@mirantis.com 
>
>
>
>
>
> -- 
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
>
>
>
>
> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com 
> vponomar...@mirantis.com 
>
>





Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Lance Bragstad


On 06/21/2017 11:55 AM, Matt Riedemann wrote:
> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>>
>>
>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez wrote:
>>
>> Shamail Tahir wrote:
>> > In the past, governance has helped (on the UC WG side) to reduce
>> > overlaps/duplication in WGs chartered for similar objectives. I
>> would
>> > like to understand how we will handle this (if at all) with the
>> new SIG
>> > proposal?
>>
>> I tend to think that any overlap/duplication would get solved
>> naturally,
>> without having to force everyone through an application process
>> that may
>> discourage natural emergence of such groups. I feel like an
>> application
>> process would be premature optimization. We can always encourage
>> groups
>> to merge (or clean them up) after the fact. How many
>> overlapping or duplicative groups did you end up having?
>>
>>
>> Fair point, it wasn't many. The reason I recalled this effort was
>> because we had to go through the exercise after the fact and that
>> made the volume of WGs to review much larger than had we asked the
>> purpose whenever they were created. As long as we check back
>> periodically and not let the work for validation/clean up pile up
>> then this is probably a non-issue.
>>
>>
>> > Also, do we have to replace WGs as a concept or could SIG
>> > augment them? One suggestion I have would be to keep projects
>> on the TC
>> > side and WGs on the UC side and then allow for
>> spin-up/spin-down of SIGs
>> > as needed for accomplishing specific goals/tasks (picture of a 
>> diagram
>> > I created at the Forum[1]).
>>
>> I feel like most groups should be inclusive of all community, so I'd
>> rather see the SIGs being the default, and ops-specific or
>> dev-specific
>> groups the exception. To come back to my Public Cloud WG example,
>> you
>> need to have devs and ops in the same group in the first place
>> before
>> you would spin-up a "address scalability" SIG. Why not just have a
>> Public Cloud SIG in the first place?
>>
>>
>> +1, I interpreted originally that each use-case would be a SIG versus
>> the SIG being able to be segment oriented (in which multiple
>> use-cases could be pursued)
>>
>>
>>  > [...]
>> > Finally, how will this change impact the ATC/AUC status of the SIG
>> > members for voting rights in the TC/UC elections?
>>
>> There are various options. Currently you give UC WG leads the AUC
>> status. We could give any SIG lead both statuses. Or only give
>> the AUC
>> status to a subset of SIGs that the UC deems appropriate. It's
>> really an
>> implementation detail imho. (Also I would expect any SIG lead to
>> already
>> be both AUC and ATC somehow anyway, so that may be a non-issue).
>>
>>
>> We can discuss this later because it really is an implementation
>> detail. Thanks for the answers.
>>
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>>
>>
>>
>>
>> -- 
>> Thanks,
>> Shamail Tahir
>> t: @ShamailXD
>> tz: Eastern Time
>>
>>
>
> I think a key point you're going to want to convey and repeat ad
> nauseum with this SIG idea is that each SIG is focused on a specific
> use case and they can be spun up and spun down. Assuming that's what
> you want them to be.
>
> One problem I've seen with the various work groups is they overlap in
> a lot of ways but are probably driven as silos. For example, how many
> different work groups are there that care about scaling? So rather
> than have 5 work groups that all overlap on some level for a specific
> issue, create a SIG for that specific issue so the people involved can
> work on defining the specific problem and work to come up with a
> solution that can then be implemented by the upstream development
> teams, either within a single project or across projects depending on
> the issue. And once the specific issue is resolved, close down the SIG.
>
> Examples here would be things that fall under proposed community wide
> goals for a release, like running API services under wsgi, py3
> support, moving policy rules into code, hierarchical quotas, RBAC
> "admin of admins" policy changes, etc. Have a SIG tha

Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-22 Thread Lance Bragstad


On 06/22/2017 12:57 PM, Mike Perez wrote:
> Hey all,
>
> In the community wide goals, we started as a group discussing goals at
> the OpenStack Forum. Then we brought those ideas to the mailing list
> to continue the discussion and include those that were not able to be
> at the forum. The discussions help the TC decide on what goals we will
> do for the Queens release. The goals that have the most support so far
> are:
>
> 1) Split Tempest plugins into separate repos/projects [1]
> 2) Move policy and policy docs into code [2]
>
> In the recent TC meeting [3] it was recognized that goals in Pike
> haven't been going as smoothly and not being completed. There will be
> a follow up thread to cover gathering feedback in an etherpad later,
> but for now the TC has discussed potential actions to improve
> completing goals in Queens.
>
> An idea that came from the meeting was creating a role of "Champions",
> who are the drum beaters to get a goal done by helping projects with
> tracking status and sometimes doing code patches. These would be
> interested volunteers who have a good understanding of their selected
> goal and its implementation to be a trusted person.
>
> What do people think before we bikeshed on the name? Would having a
> champion volunteer to each goal to help? Are there ideas that weren't
> mentioned in the TC meeting [3]?
I like this idea. Some projects might have existing context about a
particular goal built up before it's even proposed, while others might not.
I think this will help projects that understand the goal share knowledge
with projects that might not be as familiar with it (even though the
community goal proposal process attempts to fix that).

Is the role of a goal "champion" limited to a single person? Can it be
distributed across multiple people, provided actions are well communicated?
>
> [1]
> - https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> [2]
> - 
> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg106392.html
> [3]
> - 
> http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-06-20-20.01.log.html#l-10
>
> —
> Mike Perez
>
>





Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Lance Bragstad


On 06/26/2017 08:58 AM, Chris Dent wrote:
> On Mon, 26 Jun 2017, Flavio Percoco wrote:
>
>> So, should we let teams to host IRC meetings in their own channels?
>
> Yes.
+1
>
>> Thoughts?
>
> I think the silo-ing concern is, at least recently, not relevant on
> two fronts: IRC was never a good fix for that and silos gonna be
> silos.
>
> There are so many meetings and so many projects there already are
> silos and by encouraging people to use the mailing lists more we are
> more effectively enabling diverse access than IRC ever could,
> especially if the IRC-based solution is the impossible "always be on
> IRC, always use a bouncer, always read all the backlogs, always read
> all the meeting logs".
>
> The effective way for a team not to be a silo is for it to be
> better about publishing accessible summaries of itself (as in: make
> more email) and participating in cross project related reviews. If
> it doesn't do that, that's the team's loss.
>
> Synchronous communication is fine for small groups of speakers but
> that's pretty much where it ends.
>
>
>





Re: [openstack-dev] [keystone] New Office Hours Proposal

2017-06-26 Thread Lance Bragstad
According to the poll results, office hours will be moved to Tuesday
19:00 - 22:00 UTC. We'll officially start tomorrow after the keystone
meeting.

Thanks for putting together and advertising the poll, Harry!

On 06/20/2017 02:30 PM, Harry Rybacki wrote:
> Greetings All,
>
> We would like to foster a more interactive community within Keystone
> focused on fixing bugs on a regular basis! On a regular datetime (to
> be voted upon) we will have "office hours"[1] where Keystone cores
> will be available specifically to advise, help and review your efforts
> in squashing bugs. We want to aggressively attack our growing list of
> bugs and make sure Keystone is as responsive as possible to fixing
> them. The best way to do this is get people working on them and have
> the resources to get the fixes reviewed and merged.
>
> Please take a few moments to fill out our Doodle poll[2] to select the
> time block(s) that work best for you. We will tally the results and
> announce the official Keystone Office hours on Friday, 23-June-2017,
> by 2100 (UTC).
>
> [1] - https://etherpad.openstack.org/p/keystone-office-hours
> [2] - https://beta.doodle.com/poll/epvs95npfvrd3h5e
>
>
> /R
>
> Harry Rybacki
> Software Engineer, Red Hat
>






[openstack-dev] [keystone] documentation migration and consolidation

2017-06-26 Thread Lance Bragstad
Hey all,

We recently merged the openstack-manuals admin-guide into keystone [0]
and there is a lot of duplication between the admin-guide and keystone's
"internal" operator-guide [1]. I've started proposing small patches to
consolidate the documentation from the operator-guide to the official
admin-guide. In case you're interested in helping out, please use the
remove-duplicate-docs branch [2]. The admin-guide is really well written
and it would be great to get some reviews from members of the docs team
if possible to help us maintain the style and consistency of the
admin-guide.

Ping me if you have any questions. Thanks!


[0] https://review.openstack.org/#/c/469515/
[1] https://docs.openstack.org/developer/keystone/configuration.html
[2]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:remove-duplicate-docs






[openstack-dev] [keystone] removing domain configuration upload via keystone-manage

2017-06-27 Thread Lance Bragstad
Hi all,

Keystone has deprecated the domain configuration upload capability
provided through `keystone-manage`. We discussed its removal in today's
meeting [0] and wanted to send a quick note to the operator list. The
ability to upload a domain config into keystone was a stop-gap until the
API was marked as stable [1]; file-based domain configuration itself was
only a band-aid until full API support was done.

Of the operators using the domain config API in keystone, how many are
backing their configurations with actual configuration files versus the API?


[0]
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-06-27-18.00.log.html#l-167
[1]
https://github.com/openstack/keystone/commit/a5c5f5bce812fad3c6c88a23203bd6c00451e7b3
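For anyone unfamiliar with the file-based side of this, a domain-specific configuration lives in its own INI file under keystone's `domain_config_dir` (conventionally `/etc/keystone/domains/keystone.<domain_name>.conf`). A minimal illustrative example — the domain name and LDAP values below are hypothetical:

```ini
# /etc/keystone/domains/keystone.example.conf
# Options here override keystone.conf for the "example" domain only.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

The `domain_config_upload` command in question simply pushes the contents of a file like this into the database-backed domain config API.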





[openstack-dev] [keystone] office-hours tag

2017-06-28 Thread Lance Bragstad
Hey all,

I've created a new official tag, 'office-hours' [0]. If you're reviewing
or triaging bugs and come across one that would be a good fit for us to
tackle during office hours, please feel free to tag it. I was
maintaining lists locally, and I'm sure you were, too. This should help
reduce duplicate lists and we can parse the tagged bugs at the beginning
of each session. Let me know if you have any questions.

Thanks,

Lance


[0] https://goo.gl/ZvvBx2





Re: [openstack-dev] [keystone] removing domain configuration upload via keystone-manage

2017-06-28 Thread Lance Bragstad
That sounds like reason enough to bump the removal of it to Queens or
later. This was also discussed in IRC today [0]. We've decided to move
forward with the following steps:

- Deprecate the ability to have file-backed domain configuration since
removing the ability to upload domains via keystone-manage makes
file-backed domain configs less useful
- Bump the removal date of domain configuration uploads via keystone-manage
- Remove both pieces together in a subsequent release

Thanks for the input!

[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-28.log.html#t2017-06-28T18:10:06


On 06/28/2017 04:43 AM, Juan Antonio Osorio wrote:
> On the TripleO side we use the file based approach. Using the API
> would have been easier to orchestrate (no need for reloads/restarts)
> but it's not available yet in puppet-keystone.
>
> On Wed, Jun 28, 2017 at 2:00 AM, Lance Bragstad wrote:
>
> Hi all,
>
> Keystone has deprecated the domain configuration upload capability
> provided through `keystone-manage`. We discussed its removal in
> today's meeting [0] and wanted to send a quick note to the
> operator list. The ability to upload a domain config into keystone
> was done as a stop-gap until the API was marked as stable [1]. It
> seems as though file-based domain configuration was only a
> band-aid until full support was done.
>
> Of the operators using the domain config API in keystone, how many
> are backing their configurations with actual configuration files
> versus the API?
>
>
> [0]
> 
> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-06-27-18.00.log.html#l-167
> 
> [1]
> 
> https://github.com/openstack/keystone/commit/a5c5f5bce812fad3c6c88a23203bd6c00451e7b3
> 
>
>
>
>
>
>
> -- 
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>
>





Re: [openstack-dev] [TripleO][keystone] Pt. 2 of Passing along some field feedback

2017-06-28 Thread Lance Bragstad


On 06/28/2017 02:29 PM, Fox, Kevin M wrote:
> I think everyone would benefit from a read-only role for keystone out of the 
> box. Can we get this into keystone rather then in the various distro's?
Yeah - I think that would be an awesome idea. John Garbutt had some good
work on this earlier in the cycle. Most of it was documented in specs
[0] [1]. FWIW - this will be another policy change that is going to have
cross-project effects. Its implementation and impact won't be isolated
to keystone if we want read-only roles out-of-the-box.

[0] https://review.openstack.org/#/c/427872/19
[1] https://review.openstack.org/#/c/428454/
>
> Thanks,
> Kevin
> 
> From: Ben Nemec [openst...@nemebean.com]
> Sent: Wednesday, June 28, 2017 12:06 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [TripleO] Pt. 2 of Passing along some field feedback
>
> A few weeks later than I had planned, but here's the other half of the
> field feedback I mentioned in my previous email:
>
> * They very emphatically want in-place upgrades to work when moving from
> non-containerized to containerized.  I think this is already the plan,
> but I told them I'd make sure development was aware of the desire.
>
> * There was also great interest in contributing back some of the custom
> templates that they've had to write to get advanced features working in
> the field.  Here again we recommended that they start with an RFE so
> things could be triaged appropriately.  I'm hoping we can find some
> developer time to help polish and shepherd these things through the
> review process.
>
> * Policy configuration was discussed, and I pointed them at some recent
> work we have done around that:
> https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/api_policies.html
>   I'm not sure it fully addressed their issues, but I suggested they
> take a closer look and provide feedback on any ways it doesn't meet
> their needs.
>
> The specific use case they were looking at right now was adding a
> read-only role.  They did provide me with a repo containing their
> initial work, but unfortunately it's private to Red Hat so I can't share
> it here.
>
> * They wanted to be able to maintain separate role files instead of one
> monolithic roles_data.yaml.  Apparently they have a pre-deploy script
> now that essentially concatenates some individual files to get this
> functionality.  I think this has already been addressed by
> https://review.openstack.org/#/c/445687
>
> * They've also been looking at ways to reorganize the templates in a
> more intuitive fashion.  At first glance the changes seemed reasonable,
> but they were still just defining the layout.  I don't know that they've
> actually tried to use the reorganized templates yet and given the number
> of relative paths in tht I suspect it may be a bigger headache than they
> expect, but I thought it was interesting.  There may at least be
> elements of this work that we can use to make the templates easier to
> understand for deployers.
>
> Thanks.
>
> -Ben
>






Re: [openstack-dev] [TripleO][keystone] Pt. 2 of Passing along some field feedback

2017-06-28 Thread Lance Bragstad


On 06/28/2017 03:20 PM, Ben Nemec wrote:
>
>
> On 06/28/2017 02:47 PM, Lance Bragstad wrote:
>>
>>
>> On 06/28/2017 02:29 PM, Fox, Kevin M wrote:
>>> I think everyone would benefit from a read-only role for keystone
>>> out of the box. Can we get this into keystone rather then in the
>>> various distro's?
>> Yeah - I think that would be an awesome idea. John Garbutt had some good
>> work on this earlier in the cycle. Most of it was documented in specs
>> [0] [1]. FWIW - this will be another policy change that is going to have
>> cross-project effects. Its implementation and impact won't be isolated
>> to keystone if we want read-only roles out-of-the-box.
>>
>> [0] https://review.openstack.org/#/c/427872/19
>> [1] https://review.openstack.org/#/c/428454/
>
> Cool, I will point our folks at those specs.  I know doing a custom
> read-only role has been pretty painful, so I expect they would be very
> happy if this functionality could become standard.
Absolutely - it would be awesome to provide some standard roles out of
the box (at least for the sake of interoperability). I'm happy to help
in any way I can. We also have the weekly policy meeting that's focused
on nailing down cross-project issues with policy [0].

[0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
>
> Thanks for the replies.
>
> -Ben
>






Re: [openstack-dev] [keystone] removing domain configuration upload via keystone-manage

2017-06-28 Thread Lance Bragstad
Cool - I'm glad this is generating discussion. I personally don't see a
whole lot of maintenance cost with `keystone-manage
domain_config_upload`. I was parsing deprecation warnings in the code
base and noticed it was staged for removal, but it wasn't clear when or
why. It also wasn't very clear whether there was a desire to move away from
the file-based approach altogether, but it was something that came up
in the meeting.

Based on the responses and the reasons listed, I think removing the
deprecation to avoid confusion about where we stand would be a good thing
(especially since it's low maintenance).

I appreciate the feedback!


On 06/28/2017 04:22 PM, Steve Martinelli wrote:
> ++ to what colleen said. I've always preferred using the file-backed
> approach.
>
> I think we deprecated it for completeness and to only have a single
> tool for configuring LDAP-backed domains. If it's tested well enough
> and not much effort to support then we should keep it around as an
> alternative method for configuring LDAP-backed domains.
>
> On Wed, Jun 28, 2017 at 4:53 PM, Colleen Murphy wrote:
>
>> On Wed, Jun 28, 2017 at 2:00 AM, Lance Bragstad wrote:
>>
>> Hi all,
>>
>> Keystone has deprecated the domain configuration upload
>> capability provided through `keystone-manage`. We
>> discussed its removal in today's meeting [0] and wanted
>> to send a quick note to the operator list. The ability to
>> upload a domain config into keystone was done as a
>> stop-gap until the API was marked as stable [1]. It seems
>> as though file-based domain configuration was only a
>> band-aid until full support was done.
>>
>> Of the operators using the domain config API in keystone,
>> how many are backing their configurations with actual
>> configuration files versus the API?
>>
>>
>> [0]
>> 
>> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-06-27-18.00.log.html#l-167
>> 
>> <http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-06-27-18.00.log.html#l-167>
>> [1]
>> 
>> https://github.com/openstack/keystone/commit/a5c5f5bce812fad3c6c88a23203bd6c00451e7b3
>> 
>> <https://github.com/openstack/keystone/commit/a5c5f5bce812fad3c6c88a23203bd6c00451e7b3>
>>
>  I am not clear on why we need to deprecate and remove file-backed
> domain configuration. The way I see it:
>
> * It's reflective of the primary configuration, so I can copy
> over the chunks I need from keystone.conf into
> /etc/keystone/domains/keystone.domain.conf without thinking too
> hard about it
> * It's convenient for deployment tools to just lay down config files
> * It's not that much extra effort for the keystone team to
> maintain (is it?)
>
> The use case for file-backed domain configs is for smaller clouds
> with just one or two LDAP-backed domains. There's not a real need
> for users to change domain configs so the file-backed config is
> plenty fine. I don't see a lot of gain from removing that
> functionality.
>
> I don't particularly care about the keystone-manage tool, if that
> goes away it would still be relatively easy to write a python
> script to parse and upload configs if a user does eventually
> decide to transition.
>
> As a side note, SUSE happens to be using file-backed domain
> configs in our product. It would not be a big deal to rewrite that
> bit to use the API, but I think it's just as easy to let us keep
> using it.
>
> Colleen
>





[openstack-dev] [keystone] stable/newton is broken

2017-06-29 Thread Lance Bragstad
Keystone's stable/newton gate is broken [0] [1]. The TL;DR is that our
keystone_tempest_plugin is validating federated mappings before updating
the protocol [2]. The lack of validation was a bug [3] that was fixed in
Ocata, but the fix [4] was never backported.

Since stable/newton is in Phase II, I would consider this a critical fix
to unblock the stable/newton gate. I have a backport up for review [5].

[0] https://review.openstack.org/#/c/469514/
[1]
http://logs.openstack.org/14/469514/1/check/gate-keystone-dsvm-functional-ubuntu-xenial/a4aac66/console.html
[2]
https://github.com/openstack/keystone-tempest-plugin/blob/360bbafa385624f1e86841875baabbbf1104e877/keystone_tempest_plugin/tests/api/identity/v3/test_identity_providers.py#L228-L244
[3] https://bugs.launchpad.net/keystone/+bug/1571878
[4] https://review.openstack.org/#/c/362397/
[5] https://review.openstack.org/#/c/478994/







[openstack-dev] [keystone] no policy meeting today

2017-07-05 Thread Lance Bragstad
Hey all,

Given the empty agenda [0] and the holiday, we will cancel the policy
meeting this week. We'll pick up again next week.

Thanks

[0] https://etherpad.openstack.org/p/keystone-policy-meeting






Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-07-05 Thread Lance Bragstad


On 06/30/2017 04:38 AM, Thierry Carrez wrote:
> Mike Perez wrote:
>> [...]
>> What do people think before we bikeshed on the name? Would having a
>> champion volunteer to each goal to help?
> It feels like most agree that having champions would help. Do we have
> any volunteers for the currently-proposed Pike goals? As a reminder,
> those are:
>
> * Split Tempest plugins into separate repos/projects [1]
> * Move policy and policy docs into code [2]
I can champion the policy docs changes.
>
> [1]
> https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
> [2] https://review.openstack.org/#/c/469954/
>






[openstack-dev] [keystone] Queens PTG Planning

2017-07-05 Thread Lance Bragstad
Hey all,

I've started an etherpad [0] for us to collect topics and ideas for the
PTG in September. I hope to follow the same planning format as last
time. Everyone has the opportunity to add topics to the agenda and after
some time we'll group related topics and start building a formal schedule.

The etherpad has two lists. One for project-specific topics and one for
cross-project topics. As soon as we firm up the things we need to
collaborate on with other projects, I'll start coordinating with other
teams. These were the sessions we had to work around last time due to
schedules. We can sprinkle the project-related topics in to fill the gaps.

Let me know if you have any questions.

Thanks!


[0] https://etherpad.openstack.org/p/keystone-queens-ptg






[openstack-dev] [keystone] deprecating and removing tools/sample_data.sh

2017-07-05 Thread Lance Bragstad
Hi all,   

Keystone has a script to perform some bootstrapping operations [0]. It's
not really tested and its purpose has been superseded by using the
`keystone-manage bootstrap` command. Based on codesearch, only
openstack/rpm-packaging references the script [1].

Is anyone opposed to the removal of this script in favor of more
supported and tested bootstrapping methods?

Thanks,


[0]
https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh
[1] http://codesearch.openstack.org/?q=sample_data.sh&i=nope&files=&repos=






Re: [openstack-dev] [keystone] We still have a not identical HEAD response

2017-07-11 Thread Lance Bragstad
Based on the comments and opinions in the original thread, I think a fix
for this is justified. I wouldn't mind running this by the TC to double
check that nothing has changed from the first time we had to fix this
issue though.
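The RFC requirement is easy to satisfy structurally: have the HEAD handler reuse the GET code path and drop only the body, so the two methods can never drift apart in status code. A minimal stdlib sketch (the endpoint path and payload are placeholders, not keystone's actual implementation):

```python
import http.server
import threading
import urllib.request

class InferenceHandler(http.server.BaseHTTPRequestHandler):
    """HEAD delegates to the shared response logic, minus the body."""

    def _respond(self, include_body):
        body = b'{"role_inference": {}}'
        self.send_response(200)  # one status code, shared by GET and HEAD
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        if include_body:
            self.wfile.write(body)

    def do_GET(self):
        self._respond(include_body=True)

    def do_HEAD(self):
        self._respond(include_body=False)

    def log_message(self, *args):  # keep request logging quiet
        pass

# Serve on an ephemeral port in a background thread, then compare codes.
server = http.server.HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

get_resp = urllib.request.urlopen(base + "/v3/role_inferences")
head_req = urllib.request.Request(base + "/v3/role_inferences", method="HEAD")
head_resp = urllib.request.urlopen(head_req)
print(get_resp.status, head_resp.status)
server.shutdown()
```

With this structure a 200/204 mismatch like the one in the bug cannot be introduced without touching the shared path.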


On 07/11/2017 06:03 AM, Attila Fazekas wrote:
> Hi all,
>
> Long time ago it was discussed to make the keystone HEAD responses
>  right [1] as the RFC [2][3] recommends:
>
> "  A response to the HEAD method is identical to what an equivalent
>request made with a GET would have been, except it lacks a body. "
>
> So, the status code needs to be identical as well!
>
> Recently it turned out that keystone is still not correct in all cases [4].
>
> 'Get role inference rule' (GET) and 'Confirm role inference rule' (HEAD)
> have the same URL pattern, but they differ in their status codes (200/204),
> which is not allowed! [5]
>
> This is the only documented case where both the HEAD and GET defined and
> the HEAD has a 204 response.
>
> Are you going to fix this [4] as it was fixed before [6] ?
>
> Best Regards,
> Attila
>
> PS.:
>  Here is the tempest change for accepting the right code [7].
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html
> [2] https://tools.ietf.org/html/rfc7231#section-4.3.2
> [3] https://tools.ietf.org/html/rfc7234#section-4.3.5
> [4] https://bugs.launchpad.net/keystone/+bug/1701541
> [5]
> https://developer.openstack.org/api-ref/identity/v3/?expanded=confirm-role-inference-rule-detail,get-role-inference-rule-detail
> [6] https://bugs.launchpad.net/keystone/+bug/1334368
> [7] https://review.openstack.org/#/c/479286/
>
>





[openstack-dev] [keystone] office hours reminder

2017-07-11 Thread Lance Bragstad
Hey all,

Just a quick reminder that today we will be holding office hours after
the keystone meeting [0]. See you there!

Thanks,

Lance

[0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting






Re: [openstack-dev] [keystone] deprecating and removing tools/sample_data.sh

2017-07-11 Thread Lance Bragstad
Good point. I did a bit more digging and it looks like it was originally
intended for devstack [0]. At least based on the original commit message
that introduced the file. Devstack seems to take its own approach to
generating sample data, mainly using keystone-manage and functions
defined in lib/keystone [1].

I'll propose a patch to remove it and we can continue the discussion in
Gerrit.

Thanks!


[0]
https://github.com/openstack/keystone/commit/09a64dd862463fe116c4ddb8aee538e4bc7f56e0
[1]
https://github.com/openstack-dev/devstack/blob/e4b2e3b93e892df3cb4be778bcd9813cf17f9a1c/lib/keystone#L331


On 07/05/2017 04:28 PM, Colleen Murphy wrote:
> On Wed, Jul 5, 2017 at 9:36 PM, Lance Bragstad  <mailto:lbrags...@gmail.com>> wrote:
>
> Hi all,
>
> Keystone has a script to perform some bootstrapping operations
> [0]. It's
> not really tested and its purpose has been superseded by using the
> `keystone-manage bootstrap` command. Based on codesearch, only
> openstack/rpm-packaging references the script [1].
>
> It's not exactly superseded by `keystone-manage bootstrap` - in fact
> it uses bootstrap as part of its data generation:
>
> https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh#L97
>
>
>
> Is anyone opposed to the removal of this script in favor of more
> supported and tested bootstrapping methods?
>
> I haven't used this script in a while but I have found value in it in
> the past. It would be great if it or something like it was gate tested.
>
> Colleen 
>
>
> Thanks,
>
>
> [0]
> 
> https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh
> 
> <https://github.com/openstack/keystone/blob/82f60fe22c405829f8e5f6576f25cf3663b10f73/tools/sample_data.sh>
> [1]
> http://codesearch.openstack.org/?q=sample_data.sh&i=nope&files=&repos=
> <http://codesearch.openstack.org/?q=sample_data.sh&i=nope&files=&repos=>
>
>
>





[openstack-dev] [keystone] office hours report 2017-7-7

2017-07-11 Thread Lance Bragstad
Hey all,

This is a summary of what was worked on today during office hours. Full
logs of the meeting can be found below:

http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-07-11-19.00.log.html

*The future of the templated catalog backend
*

Some issues were uncovered, or just resurfaced, with the templated
catalog backend. The net of the discussion boiled down to - do we fix it
or remove it? The answer actually ended up being both. It was determined
that instead of trying to maintain and fix the existing templated
backend, we should deprecate it for removal [0]. Since it does provide
some value, it was suggested that we can start implementing a new
backend based on YAML to fill the purpose instead. The advantage here is
that the approach is directed towards a specific format (YAML). This
should hopefully make things easier for both developers and users.

[0] https://review.openstack.org/#/c/482714/
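For background, the templated backend maps a flat key format onto the service catalog; the sketch below parses a few lines in that style. The key layout (`catalog.<region>.<service>.<key> = <value>`) follows keystone's sample default_catalog.templates from memory and should be treated as illustrative:

```python
# Toy parser for the templated-catalog line format; the sample lines and
# port numbers are illustrative, not a recommended catalog.
SAMPLE = """
catalog.RegionOne.identity.publicURL = http://localhost:5000/v3
catalog.RegionOne.identity.adminURL = http://localhost:35357/v3
catalog.RegionOne.identity.name = Identity Service
"""

def parse_templated_catalog(text):
    catalog = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        prefix, region, service, attr = key.strip().split(".", 3)
        if prefix != "catalog":
            continue  # skip unrelated options
        catalog.setdefault(region, {}).setdefault(service, {})[attr] = value.strip()
    return catalog

catalog = parse_templated_catalog(SAMPLE)
print(catalog["RegionOne"]["identity"]["publicURL"])
```

A YAML-based replacement would express the same nesting natively instead of encoding it into dotted keys, which is part of the appeal discussed above.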

*Policy fixes*

All the policy-in-code work has exposed several issues with policy
defaults in keystone. We spent time as a group going through several of
the bugs [0] [1] [2] [3], the corresponding fixes, and their impact. One
of the fixes will be backported specifically so that a release note can
be communicated to stable users [0].

[0] https://bugs.launchpad.net/keystone/+bug/1703369
[1] https://bugs.launchpad.net/keystone/+bug/1703392
[2] https://bugs.launchpad.net/keystone/+bug/1703467
[3] https://bugs.launchpad.net/keystone/+bug/1133435

*Additional bugs worked*

Transient bug with security compliance or PCI-DSS:
https://bugs.launchpad.net/keystone/+bug/1702211
Request header issues: https://bugs.launchpad.net/keystone/+bug/1689468


I hope to find ways to automate most of what is communicated in this
summary. Until then I'm happy to hear feedback if you find the report
lacking in a specific area.


Thanks,

Lance





Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-12 Thread Lance Bragstad


On 07/11/2017 09:28 PM, Mathieu Gagné wrote:
> Hi,
>
> So this email is relevant to my interests as an operator. =)

Glad to hear it!

>
> On Tue, Jul 11, 2017 at 9:35 PM, Lance Bragstad  <mailto:lbrags...@gmail.com>> wrote:
>
> *The future of the templated catalog backend*
>
> Some issues were uncovered, or just resurfaced, with the templated
> catalog backend. The net of the discussion boiled down to - do we
> fix it or remove it? The answer actually ended up being both. It
> was determined that instead of trying to maintain and fix the
> existing templated backend, we should deprecate it for removal
> [0]. Since it does provide some value, it was suggested that we
> can start implementing a new backend based on YAML to fill the
> purpose instead. The advantage here is that the approach is
> directed towards a specific format (YAML). This should hopefully
> make things easier for both developers and users.
>
> [0] https://review.openstack.org/#/c/482714/
> <https://review.openstack.org/#/c/482714/>​
>
>
> We have been exclusively using the templated catalog backend for at
> least 5 years without any major issues. And it looks like we are now
> among the < 3% using templated according to the April 2017 user survey. 
> ¯\_(ツ)_/¯
>
> We chose the templated catalog backend for its simplicity (especially
> with our CMS) and because it makes no sense (to me) to use and rely on an
> SQL server to serve what is essentially static content.
>
>
> Regarding the v3 catalog support, we do have an in-house fix we
> intended to upstream very soon (and just did right now). [1]
>
> So if the templated catalog backend gets deprecated, my wish would be
> to have access to an alternate file-based implementation, a
> production-grade implementation ready to be used before I get spammed
> with deprecation warnings in the keystone logs.

I think that is fair. Morgan was working on an implementation yesterday,
but I don't think anything made it to Gerrit. As soon as it does, I'll
be sure to update the thread. Thanks for speaking up!

>
> Thanks
>
> [1] https://review.openstack.org/#/c/482766/
>
> --
> Mathieu
>
>
>





[openstack-dev] [all] Queens Goal for policy-in-code

2017-07-12 Thread Lance Bragstad
Hi all,

I'd like to reach out and get ahead of the curve now that we established
the community goals for Queens. If you have any questions about the
policy-in-code work [0] and how it pertains to your project, please
don't hesitate to ping me in #openstack-dev. Once pike starts winding
down, I'll start dropping by individual team meetings. If I end up
getting similar questions from multiple projects, I can look into
organizing a slot at the PTG so we can work through things as a group.

Thanks,

Lance
irc: lbragstad


[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
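For teams new to the goal, the core idea is that sane defaults (and their documentation) live in code, so a deployment's policy file only needs to carry overrides. A pure-Python sketch of that shape — it mimics, but does not use, oslo.policy's rule-default registration, and the rule names are keystone-style examples:

```python
class PolicyRegistry:
    """Toy model of policy-in-code: in-code defaults plus operator overrides."""

    def __init__(self):
        self._defaults = {}
        self._overrides = {}

    def register_default(self, name, check_str, description=""):
        # Defaults are registered at import time in each service.
        self._defaults[name] = {"check": check_str, "doc": description}

    def load_overrides(self, overrides):
        # In a real deployment these would be parsed from policy.yaml.
        self._overrides.update(overrides)

    def effective_rule(self, name):
        # An operator override wins; otherwise fall back to the in-code default.
        return self._overrides.get(name, self._defaults[name]["check"])

registry = PolicyRegistry()
registry.register_default("identity:get_user", "rule:admin_required",
                          "Show details for a user.")
registry.register_default("identity:list_users", "rule:admin_required",
                          "List users.")
registry.load_overrides({"identity:get_user": "rule:admin_or_owner"})
print(registry.effective_rule("identity:get_user"))    # overridden
print(registry.effective_rule("identity:list_users"))  # in-code default
```

The practical payoff is that an empty policy file is a valid deployment, and documentation for every rule can be generated from the registered defaults.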






Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-12 Thread Lance Bragstad


On 07/12/2017 09:17 AM, Akihiro Motoki wrote:
> 2017-07-12 10:35 GMT+09:00 Lance Bragstad :
>> Hey all,
>>
>> This is a summary of what was worked on today during office hours. Full logs
>> of the meeting can be found below:
>>
>> http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-07-11-19.00.log.html
> It is not specific to keystone.
>
> I think it is better to use keystone-office-hours instead of
> office-hours as a meeting name.
> If we use the same meeting names, we will have office-hours logs of
> multiple projects
> in a same directory in eavesdrop.openstack.org.
>
> Thanks,
> Akihiro

Ah - good point. Thanks for the heads up! I'll be sure to do that for
next week's session.

>> The future of the templated catalog backend
>>
>> Some issues were uncovered, or just resurfaced, with the templated catalog
>> backend. The net of the discussion boiled down to - do we fix it or remove
>> it? The answer actually ended up being both. It was determined that instead
>> of trying to maintain and fix the existing templated backend, we should
>> deprecate it for removal [0]. Since it does provide some value, it was
>> suggested that we can start implementing a new backend based on YAML to fill
>> the purpose instead. The advantage here is that the approach is directed
>> towards a specific format (YAML). This should hopefully make things easier
>> for both developers and users.
>>
>> [0] https://review.openstack.org/#/c/482714/
>>
>> Policy fixes
>>
>> All the policy-in-code work has exposed several issues with policy defaults
>> in keystone. We spent time as a group going through several of the bugs [0]
>> [1] [2] [3], the corresponding fixes, and impact. One of which will be
>> backported specifically for the importance of communicating a release note
>> to stable users [0].
>>
>> [0] https://bugs.launchpad.net/keystone/+bug/1703369
>> [1] https://bugs.launchpad.net/keystone/+bug/1703392
>> [2] https://bugs.launchpad.net/keystone/+bug/1703467
>> [3] https://bugs.launchpad.net/keystone/+bug/1133435
>>
>> Additional bugs worked
>>
>> Transient bug with security compliance or PCI-DSS:
>> https://bugs.launchpad.net/keystone/+bug/1702211
>> Request header issues: https://bugs.launchpad.net/keystone/+bug/1689468
>>
>>
>> I hope to find ways to automate most of what is communicated in this
>> summary. Until then I'm happy to hear feedback if you find the report
>> lacking in a specific area.
>>
>>
>> Thanks,
>>
>> Lance
>>
>>




[openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-13 Thread Lance Bragstad
Colleen found out today while doing a backport that both of our stable
branches are broken. After doing some digging, it looks like bug 1687593
is the culprit [0]. The fix to that bug merged in master and the author
added some nicely written functional tests using the
keystone-tempest-plugin. The functional tests are being run against both
stable branches but the fix wasn't actually backported. As a result,
both stable branches are bricked at the moment because of the functional
tests.

I've proposed the necessary backports for stable/ocata [1] and
stable/newton [2], in addition to a cleaned up release note for master
[3]. Any reviews would be greatly appreciated since we'll be doing a
release of both stable branches relatively soon.

Thanks!


[0] https://bugs.launchpad.net/keystone/+bug/1687593
[1]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
[2]
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
[3] https://review.openstack.org/#/c/483598/






Re: [openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-13 Thread Lance Bragstad
Oh - the original issues with the stable branches were reported here:

https://bugs.launchpad.net/keystone/+bug/1704148


On 07/13/2017 06:00 PM, Lance Bragstad wrote:
> Colleen found out today while doing a backport that both of our stable
> branches are broken. After doing some digging, it looks like bug 1687593
> is the culprit [0]. The fix to that bug merged in master and the author
> added some nicely written functional tests using the
> keystone-tempest-plugin. The functional tests are being run against both
> stable branches but the fix wasn't actually backported. As a result,
> both stable branches are bricked at the moment because of the functional
> tests.
>
> I've proposed the necessary backports for stable/ocata [1] and
> stable/newton [2], in addition to a cleaned up release note for master
> [3]. Any reviews would be greatly appreciated since we'll be doing a
> release of both stable branches relatively soon.
>
> Thanks!
>
>
> [0] https://bugs.launchpad.net/keystone/+bug/1687593
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
> [2]
> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
> [3] https://review.openstack.org/#/c/483598/
>
>






Re: [openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-14 Thread Lance Bragstad
All the patches in the original note have merged for both stable/ocata
and stable/newton. Existing patches to both branches are being rechecked
and rebased.


On 07/13/2017 06:04 PM, Lance Bragstad wrote:
> Oh - the original issues with the stable branches were reported here:
>
> https://bugs.launchpad.net/keystone/+bug/1704148
>
>
> On 07/13/2017 06:00 PM, Lance Bragstad wrote:
>> Colleen found out today while doing a backport that both of our stable
>> branches are broken. After doing some digging, it looks like bug 1687593
>> is the culprit [0]. The fix to that bug merged in master and the author
>> added some nicely written functional tests using the
>> keystone-tempest-plugin. The functional tests are being run against both
>> stable branches but the fix wasn't actually backported. As a result,
>> both stable branches are bricked at the moment because of the functional
>> tests.
>>
>> I've proposed the necessary backports for stable/ocata [1] and
>> stable/newton [2], in addition to a cleaned up release note for master
>> [3]. Any reviews would be greatly appreciated since we'll be doing a
>> release of both stable branches relatively soon.
>>
>> Thanks!
>>
>>
>> [0] https://bugs.launchpad.net/keystone/+bug/1687593
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
>> [2]
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
>> [3] https://review.openstack.org/#/c/483598/
>>
>>
>






[openstack-dev] [keystone] feature freeze and spec status

2017-07-17 Thread Lance Bragstad
Hi all,

I wanted to send a friendly reminder that feature freeze for keystone
will be in R-5 [0], which is the end of next week. That leaves just
under 10 business days for feature work (8 considering the time to get
through the gate). Of the specifications we've committed to for Pike,
the following are still in progress:

*Application* *Credentials*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html

*Project* *Tags*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/project-tags.html
Implementation: https://review.openstack.org/#/c/470317/

*Extending the User API to support federated attributes*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/support-federated-attr.html
Implementation:
https://review.openstack.org/#/q/topic:bp/support-federated-attr

With feature freeze just around the corner, we should be scaling up our
focus on bugs. We'll be continuing bug work tomorrow after the weekly
keystone meeting.

Thanks and let me know if you have any questions,

Lance


[0] https://releases.openstack.org/pike/schedule.html





Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-17 Thread Lance Bragstad
On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter  wrote:

> So the application credentials spec has merged - huge thanks to Monty and
> the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> http://specs.openstack.org/openstack/keystone-specs/specs/
> keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups of folks
> were reading the spec that only became apparent towards the end of the
> process. Specifically, at this exact moment:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
> /%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project (rather
> than the user that created them), because a consumer could surreptitiously
> create an application credential and continue to use that to access the
> OpenStack APIs even after their User account is deleted. The agreed
> solution was to delete the application credentials when the User that
> created them is deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of their
> applications for credential usage and rotate any credentials created by a
> soon-to-be-former team member *before* removing said team member's User
> account, or risk breakage. Basically we're relying on users to do the Right
> Thing (bad), but when they don't we're defaulting to breaking [some of]
> their apps over leaving them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't think
> this solution is sufficient. Assuming that application credentials are
> stored on VMs in the project for use by the applications running on them,
> then anyone with access to those servers can obtain the credentials and
> continue to use them even if their own account is deleted. The solution to
> this is to rotate *all* application keys when a user is deleted. So really
> we're relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and* [potentially]
> leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if you
> revoke a role from a User then any application credentials they've created
> that rely on that role continue to work. It's only if you delete the User
> that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the fundamental
> problem:
>
> 1) Fine-grained user-defined access control. We can minimise the set of
> things that the application credentials are authorised to do. That's out of
> scope for this spec, but something we're already planning as a future
> enhancement.
> 2) Automated regular rotation of credentials. We can make sure that
> whatever a departing team member does manage to hang onto quickly becomes
> useless.
>
> By way of comparison, AWS does both. There's fine-grained defined access
> control in the form of IAM Roles, and these Roles can be associated with
> EC2 servers. The servers have an account with rotating keys provided
> through the metadata server. I can't find the exact period of rotation
> documented, but it's on the order of magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's 2017 not
> 2007 and the idea that there's no point offering to segment permissions at
> a finer grained level than that of a VM no longer holds water IMHO, thanks
> to SELinux and containers. It'd be nice to be able to provide multiple sets
> of credentials to different services running on a VM, and it's probably
> essential to our survival that we find a way to provide individual
> credentials to containers. Nevertheless, what they have does solve the
> problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down. e.g.
> it's easy in principle to set up a Heat template with a Mistral workflow
> that will rotate the credentials for you, but they'll do so using trusts
> that are, in turn, tied back to the consumer who created the stack. (It
> suddenly occurs to me that this is a problem that all services using trusts
> are going to need to solve.) Somewhere it all has to be tied back to
> something that survives the entire lifecycle of the project.
>
> Would Keystone folks be happy to allow persistent credentials once we have
> a way to hand out only the minimum required privileges?
>

If I'm understanding correctly, this would make application credentials
dependent on several cycles of policy work. Right?
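For concreteness, option (2) from the quoted list could be sketched as below. This is purely illustrative, stdlib-only code with made-up field names — keystone has no such rotation API today, and the TTL and record shape are assumptions:

```python
# Hypothetical sketch (not keystone's actual API): automated rotation of
# application credentials, per option 2 in the quoted list. Credentials
# older than a TTL, or owned by a removed team member, get a fresh secret.
import secrets
import time

ROTATION_TTL = 3600  # roughly the AWS cadence mentioned in the quote


def rotate_credentials(credentials, active_users, now=None):
    """Return a new credential list with stale/orphaned entries replaced.

    credentials: list of dicts with 'owner', 'secret', 'created_at' keys.
    active_users: set of user names still on the team.
    """
    now = now if now is not None else time.time()
    rotated = []
    for cred in credentials:
        expired = now - cred["created_at"] > ROTATION_TTL
        orphaned = cred["owner"] not in active_users
        if expired or orphaned:
            rotated.append({
                "owner": cred["owner"],
                "secret": secrets.token_hex(16),  # fresh secret
                "created_at": now,
            })
        else:
            rotated.append(cred)
    return rotated
```

Run on a schedule, this makes anything a departing team member hangs onto quickly useless, which is the property being argued for above.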


>
> If not I think we're back to https://review.openstack.org/#/c/93/
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Lance Bragstad


On 07/17/2017 10:12 PM, Lance Bragstad wrote:
>
>
On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter <zbit...@redhat.com> wrote:
>
> So the application credentials spec has merged - huge thanks to
> Monty and the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups
> of folks were reading the spec that only became apparent towards
> the end of the process. Specifically, at this exact moment:
>
> 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project
> (rather than the user that created them), because a consumer could
> surreptitiously create an application credential and continue to
> use that to access the OpenStack APIs even after their User
> account is deleted. The agreed solution was to delete the
> application credentials when the User that created them is
> deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of
> their applications for credential usage and rotate any credentials
> created by a soon-to-be-former team member *before* removing said
> team member's User account, or risk breakage. Basically we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps over leaving
> them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't
> think this solution is sufficient. Assuming that application
> credentials are stored on VMs in the project for use by the
> applications running on them, then anyone with access to those
> servers can obtain the credentials and continue to use them even
> if their own account is deleted. The solution to this is to rotate
> *all* application keys when a user is deleted. So really we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and*
> [potentially] leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if
> you revoke a role from a User then any application credentials
> they've created that rely on that role continue to work. It's only
> if you delete the User that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the
> fundamental problem:
>
> 1) Fine-grained user-defined access control. We can minimise the
> set of things that the application credentials are authorised to
> do. That's out of scope for this spec, but something we're already
> planning as a future enhancement.
> 2) Automated regular rotation of credentials. We can make sure
> that whatever a departing team member does manage to hang onto
> quickly becomes useless.
>
> By way of comparison, AWS does both. There's fine-grained defined
> access control in the form of IAM Roles, and these Roles can be
> associated with EC2 servers. The servers have an account with
> rotating keys provided through the metadata server. I can't find
> the exact period of rotation documented, but it's on the order of
> magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's
> 2017 not 2007 and the idea that there's no point offering to
> segment permissions at a finer grained level than that of a VM no
> longer holds water IMHO, thanks to SELinux and containers. It'd be
> nice to be able to provide multiple sets of credentials to
> different services running on a VM, and it's probably essential to
> our survival that we find a way to provide individual credentials
> to containers. Nevertheless, what they have does solve the problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down.

Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-18 Thread Lance Bragstad


On 07/18/2017 08:21 AM, Andy McCrae wrote:
>
>
>
> The branches have now been retired, thanks to Joshua Hesketh!
>
>
> Thanks Josh, Andreas, Tony, and the rest of the Infra crew for sorting
> this out.

++ thanks all!

>
> Andy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] office hours report 2017-7-7

2017-07-19 Thread Lance Bragstad
I was able to automate some of this report. I figured a follow-up
containing data about what was worked on would be nice.


Bug #1703467 in OpenStack Identity (keystone): "assert_admin is checking
default policy rule not admin_required"
https://bugs.launchpad.net/keystone/+bug/1703467
participants: lbragstad, edmondsw
Triaged, tagged, set target milestone, and worked on patch

Bug #1696264 in OpenStack Identity (keystone): "Create OpenStack client
environment scripts in Installation Guide INCOMPLETE - doesn't state
path for file"
https://bugs.launchpad.net/keystone/+bug/1696264
participants: lbragstad, wingwj
Triaged and set target milestone

Bug #1703666 in OpenStack Identity (keystone): "Templated catalog does
not handle multi-regions properly"
https://bugs.launchpad.net/keystone/+bug/1703666
participants: lbragstad, eandersson
Triaged, set target milestone, discussed alternatives, and worked on a patch

Bug #1133435 in OpenStack Identity (keystone): "policy should return a
400 if a required field is missing"
https://bugs.launchpad.net/keystone/+bug/1133435
participants: lbragstad, edmondsw
Set status, discussed, and proposed a possible solution in light of the
policy-in-code work

Bug #1689468 in OpenStack Identity (keystone): "odd keystone behavior
when X-Auth-Token ends with carriage return"
https://bugs.launchpad.net/keystone/+bug/1689468
participants: gagehugo, kaerie
Reproposed patch in review

Bug #1703369 in OpenStack Identity (keystone): "get_identity_providers
policy should be singular"
https://bugs.launchpad.net/keystone/+bug/1703369
participants: lbragstad, edmondsw
Set priority, target to series, set target milestone, proposed and
reviewed patch, discussed backport procedure

Bug #1703438 in keystoneauth: "Discover.version_data: Empty max_version
results in max_microversion=None even if version is specified"
https://bugs.launchpad.net/keystoneauth/+bug/1703438
participants: efried, mordred, morgan
Merged fix

Bug #1703447 in keystoneauth: "URL caching in
EndpointData._run_discovery is busted"
https://bugs.launchpad.net/keystoneauth/+bug/1703447
participants: efried, morgan
Merged fix

Bug #1689468 in keystonemiddleware: "odd keystone behavior when
X-Auth-Token ends with carriage return"
https://bugs.launchpad.net/keystonemiddleware/+bug/1689468
participants: gagehugo, kaerie
Reproposed patch in review


For what it's worth, I also apparently thought office hours occurred on
the 7th when they actually occurred on the 11th. 
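The real report above was assembled from IRC logs and Launchpad activity; the formatting step could be automated with something like the following sketch. The entry fields here are assumptions, not the actual tooling:

```python
# Illustrative only: render bug-triage entries in the style used in the
# summary above, given structured data (list of dicts).


def format_report(entries):
    """Render triage entries as blank-line-separated blocks."""
    blocks = []
    for e in entries:
        blocks.append(
            'Bug #{id} in {project}: "{title}"\n'
            "https://bugs.launchpad.net/{tracker}/+bug/{id}\n"
            "participants: {people}\n"
            "{actions}".format(
                id=e["id"],
                project=e["project"],
                title=e["title"],
                tracker=e["tracker"],
                people=", ".join(e["participants"]),
                actions=e["actions"],
            )
        )
    return "\n\n".join(blocks)
```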



On 07/11/2017 08:35 PM, Lance Bragstad wrote:
>
> Hey all,
>
> This is a summary of what was worked on today during office hours.
> Full logs of the meeting can be found below:
>
> http://eavesdrop.openstack.org/meetings/office_hours/2017/office_hours.2017-07-11-19.00.log.html
>
> *The future of the templated catalog backend*
>
> Some issues were uncovered, or just resurfaced, with the templated
> catalog backend. The net of the discussion boiled down to - do we fix
> it or remove it? The answer actually ended up being both. It was
> determined that instead of trying to maintain and fix the existing
> templated backend, we should deprecate it for removal [0]. Since it
> does provide some value, it was suggested that we can start
> implementing a new backend based on YAML to fill the purpose instead.
> The advantage here is that the approach is directed towards a specific
> format (YAML). This should hopefully make things easier for both
> developers and users.
>
> [0] https://review.openstack.org/#/c/482714/
>
> *Policy fixes*
>
> All the policy-in-code work has exposed several issues with policy
> defaults in keystone. We spent time as a group going through several
> of the bugs [0] [1] [2] [3], the corresponding fixes, and impact. One
> of which will be backported specifically for the importance of
> communicating a release note to stable users [0].
>
> [0] https://bugs.launchpad.net/keystone/+bug/1703369
> [1] https://bugs.launchpad.net/keystone/+bug/1703392
> [2] https://bugs.launchpad.net/keystone/+bug/1703467
> [3] https://bugs.launchpad.net/keystone/+bug/1133435
>
> *Additional bugs worked*
>
> Transient bug with security compliance or PCI-DSS:
> https://bugs.launchpad.net/keystone/+bug/1702211
> Request header issues: https://bugs.launchpad.net/keystone/+bug/1689468
>
>
> I hope to find ways to automate most of what is communicated in this
> summary. Until then I'm happy to hear feedback if you find the report
> lacking in a specific area.
>
>
> Thanks,
>
> Lance
>
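The YAML-backed catalog floated in the quoted summary had no agreed format at the time; the structure below is an assumption, shown post-parse as a plain dict (yaml.safe_load of an equivalent catalog.yaml would produce the same shape). Note it handles multiple regions directly, the case the templated backend gets wrong (bug 1703666):

```python
# Hypothetical shape for a YAML-driven catalog backend (post-parse).
CATALOG = {
    "regions": {
        "RegionOne": {
            "identity": {"public": "https://keystone.example.com:5000/v3"},
            "compute": {"public": "https://nova.example.com:8774/v2.1"},
        },
        "RegionTwo": {
            "identity": {"public": "https://keystone2.example.com:5000/v3"},
        },
    }
}


def get_endpoint(catalog, region, service_type, interface="public"):
    """Resolve an endpoint URL the way such a backend might."""
    try:
        return catalog["regions"][region][service_type][interface]
    except KeyError:
        return None  # unknown region/service/interface: caller decides
```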



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] office hours report 2017-7-18

2017-07-19 Thread Lance Bragstad
Hi all,

This is a day late, but here is the summary for what we worked on during
office hours yesterday. The full log can be found below [0].

Bug #1689888 in OpenStack Identity (keystone): "/v3/users is
unproportionally slow"
https://bugs.launchpad.net/keystone/+bug/1689888
participants: lbragstad
Verified and triaged

Bug #1703245 in OpenStack Identity (keystone): "Assignment API doesn't
test GET for member urls"
https://bugs.launchpad.net/keystone/+bug/1703245
participants: lbragstad
Triaged

Bug #1704205 in OpenStack Identity (keystone): "GET
/v3/role_assignments?effective&include_names API fails with unexpected
500 error"
https://bugs.launchpad.net/keystone/+bug/1704205
participants: knikolla, lbragstad, edmondsw, prashkre
Discussed possible solutions, documented workarounds, triaged, and set
target milestone

Bug #1687401 in OpenStack Identity (keystone): "Keystone 403 Forbidden"
https://bugs.launchpad.net/keystone/+bug/1687401
participants: lbragstad
Marked as Incomplete until we have more information/details to recreate

Bug #1687888 in OpenStack Identity (keystone): "creating a federation
protocol returns Bad Request instead of Conflict"
https://bugs.launchpad.net/keystone/+bug/1687888
participants: lbragstad
Marked as Invalid based on the inability to recreate

Bug #1694589 in OpenStack Identity (keystone): "Federation protocol
creation gives error"
https://bugs.launchpad.net/keystone/+bug/1694589
participants: lbragstad
Marked as Invalid based on the inability to recreate

Bug #1697634 in OpenStack Identity (keystone): "AH01630: client denied
by server configuration"
https://bugs.launchpad.net/keystone/+bug/1697634
participants: lbragstad
Marked as Invalid based on configuration

Bug #1702230 in OpenStack Identity (keystone): "fernet token fails with
keystone HA"
https://bugs.launchpad.net/keystone/+bug/1702230
participants: lbragstad
Marked as Invalid based on configuration


[0]
http://eavesdrop.openstack.org/meetings/keystone_office_hours/2017/keystone_office_hours.2017-07-18-19.00.log.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Lance Bragstad


On 07/19/2017 09:27 PM, Monty Taylor wrote:
> On 07/19/2017 12:18 AM, Zane Bitter wrote:
>> On 18/07/17 10:55, Lance Bragstad wrote:
>>>>
>>>> Would Keystone folks be happy to allow persistent credentials once
>>>> we have a way to hand out only the minimum required privileges?
>>>>
>>>>
>>>> If I'm understanding correctly, this would make application
>>>> credentials dependent on several cycles of policy work. Right?
>>>
>>> I think having the ability to communicate deprecations through
>>> oslo.policy would help here. We could use it to move towards better
>>> default roles, which requires being able to set minimum privileges.
>>>
>>> Using the current workflow requires operators to define the minimum
>>> privileges for whatever is using the application credential, and
>>> work that into their policy. Is that the intended workflow that we
>>> want to put on the users and operators of application credentials?
>>
>> The plan is to add an authorisation mechanism that is user-controlled
>> and independent of the (operator-controlled) policy. The beginnings
>> of this were included in earlier drafts of the spec, but were removed
>> in patch set 19 in favour of leaving them for a future spec:
>>
>> https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst
>
>
> Yes - that's right - and I expect to start work on that again as soon
> as this next keystoneauth release with version discovery is out the door.
>
> It turns out there are different POVs on this topic, and it's VERY
> important to be clear which one we're talking about at any given point
> in time. A bunch of the confusion just in getting as far as we've
> gotten so far came from folks saying words like "policy" or "trusts"
> or "ACLs" or "RBAC" - but not clarifying which group of cloud users
> they were discussing and from what context.
>
> The problem that Zane and I are are discussing and advocating for are
> for UNPRIVILEDGED users who neither deploy nor operate the cloud but
> who use the cloud to run applications.
>
> Unfortunately, neither the current policy system nor trusts are useful
> in any way shape or form for those humans. Policy and trusts are tools
> for cloud operators to take a certain set of actions.
>
> Similarly, the concern from the folks who are not in favor of
> project-lifecycled application credentials is the one that Zane
> outlined - that there will be $someone with access to those
> credentials after a User change event, and thus $security will be
> compromised.
>
> There is a balance that can and must be found. The use case Zane and I
> are talking about is ESSENTIAL, and literally every single human who is
> actually using OpenStack to run applications needs it. Needed it
> last year in fact, and they are, in fact doing things like writing
> ssh-agent like daemons in which they can store their corporate LDAP
> credentials so that their automation will work because we're not
> giving them a workable option.
>
> That said, the concerns about not letting a thing out the door that is
> insecure by design like PHP4's globally scoped URL variables is also
> super important.
>
> So we need to find a design that meets both goals.
>
> I have thoughts on the topic, but have been holding off until
> version-discovery is out the door. My hunch is that, like application
> credentials, we're not going to make significant headway without
> getting humans in the room - because the topic is WAY too fraught with
> peril.
>
> I propose we set aside time at the PTG to dig in to this. Between Zane
> and I and the Keystone core team I have confidence we can find a way out.

Done. I've added this thread to keystone's planning etherpad under
cross-project things we need to talk about [0]. Feel free to elaborate
and fill in context as you see fit. I'll make sure the content makes
its way into a dedicated etherpad before we have that discussion
(usually as I go through each topic and plan the schedule).


[0] https://etherpad.openstack.org/p/keystone-queens-ptg

>
> Monty
>
> PS. It will not help to solve limited-scope before we solve this.
> Limited scope is an end-user opt-in action and having it does not
> remove the concerns that have been expressed.
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [all] keystoneauth version discovery is here

2017-07-20 Thread Lance Bragstad
Happy Thursday,

We just released keystoneauth 3.0.0 [0], which contains a bunch of
built-in functionality to handle version discovery so that you don't
have to! Check out the documentation for all the details [1].

Big thanks to Eric and Monty for tackling this work, along with all the
folks who diligently reviewed it.


[0] https://review.openstack.org/#/c/485688/
[1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html
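In essence, discovery means fetching the service's root version document and picking the best matching versioned endpoint for you. The sketch below is a heavily simplified stdlib illustration of that selection step — keystoneauth's real implementation handles far more edge cases (see the docs linked above), and the status values shown are assumptions:

```python
# Simplified sketch of version discovery's selection step, NOT
# keystoneauth's actual code.
def pick_version(version_docs, want_major):
    """From a discovery document's version list, pick the newest match."""
    def as_tuple(v):
        return tuple(int(p) for p in v.lstrip("v").split("."))

    candidates = [
        d for d in version_docs
        if d["status"] in ("stable", "CURRENT", "SUPPORTED")
        and as_tuple(d["id"])[0] == want_major
    ]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: as_tuple(d["id"]))
    # the 'self' link points at the versioned endpoint to use for requests
    return next(l["href"] for l in best["links"] if l["rel"] == "self")
```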




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-21 Thread Lance Bragstad
I started noticing some trivial changes failing in the
keystonemiddleware gate [0]. The failures are in tests that use the
keystoneauth1 library (8 tests are failing by my count), which we
released a new version of yesterday [1]. I've proposed a patch to
blacklist keystoneauth1 3.0.0 from keystonemiddleware until we can
figure out what happened [2]. Status is being tracked in a bug against
keystonemiddleware [3], but might need to be broadened if these changes
are affecting other projects.

I'll be in -keystone working through some of the issues if you need me.

Thanks,

Lance

[0] https://review.openstack.org/#/c/486184/
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119969.html
[2] https://review.openstack.org/#/c/486213/
[3] https://bugs.launchpad.net/keystonemiddleware/+bug/1705770




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-21 Thread Lance Bragstad
We have a patch up to blacklist version 3.0.0 from global-requirements
[0]. We're also going to hold bumping the minimum version of
keystoneauth until we have things back to normal [1].


[0] https://review.openstack.org/#/c/486223/
[1] https://review.openstack.org/#/c/486160/1
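For anyone following along, blacklisting a single release in a requirements file is done with an exclusion pin. The version floor below is illustrative, not the exact line proposed in the review:

```
# global-requirements.txt style entry: exclude the broken release
keystoneauth1!=3.0.0,>=2.21.0
```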

On 07/21/2017 03:00 PM, Lance Bragstad wrote:
> I started noticing some trivial changes failing in the
> keystonemiddleware gate [0]. The failures are in tests that use the
> keystoneauth1 library (8 tests are failing by my count), which we
> released a new version of yesterday [1]. I've proposed a patch to
> blacklist keystoneauth1 3.0.0 from keystonemiddleware until we can
> figure out what happened [2]. Status is being tracked in a bug against
> keystonemiddleware [3], but might need to be broadened if these changes
> are affecting other projects.
>
> I'll be in -keystone working through some of the issues if you need me.
>
> Thanks,
>
> Lance
>
> [0] https://review.openstack.org/#/c/486184/
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119969.html
> [2] https://review.openstack.org/#/c/486213/
> [3] https://bugs.launchpad.net/keystonemiddleware/+bug/1705770
>
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-21 Thread Lance Bragstad
The patch to blacklist version 3.0.0 is working its way through the gate at the moment [0].
We also have a WIP patch proposed to handle the cases exposed by
keystonemiddleware [1].


[0] https://review.openstack.org/#/c/486223/
[1] https://review.openstack.org/#/c/486231/


On 07/21/2017 03:58 PM, Lance Bragstad wrote:
> We have a patch up to blacklist version 3.0.0 from global-requirements
> [0]. We're also going to hold bumping the minimum version of
> keystoneauth until we have things back to normal [1].
>
>
> [0] https://review.openstack.org/#/c/486223/
> [1] https://review.openstack.org/#/c/486160/1
>
> On 07/21/2017 03:00 PM, Lance Bragstad wrote:
>> I started noticing some trivial changes failing in the
>> keystonemiddleware gate [0]. The failures are in tests that use the
>> keystoneauth1 library (8 tests are failing by my count), which we
>> released a new version of yesterday [1]. I've proposed a patch to
>> blacklist keystoneauth1 3.0.0 from keystonemiddleware until we can
>> figure out what happened [2]. Status is being tracked in a bug against
>> keystonemiddleware [3], but might need to be broadened if these changes
>> are affecting other projects.
>>
>> I'll be in -keystone working through some of the issues if you need me.
>>
>> Thanks,
>>
>> Lance
>>
>> [0] https://review.openstack.org/#/c/486184/
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119969.html
>> [2] https://review.openstack.org/#/c/486213/
>> [3] https://bugs.launchpad.net/keystonemiddleware/+bug/1705770
>>
>>
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-21 Thread Lance Bragstad
A little head scratching and a Pantera playlist later, we ended up
figuring out the main causes. The failures can be found in the gate [0].
The two failures are detailed below:

1.) Keystoneauth version 3.0.0 added a lot of functionality and might
return a different url depending on discovery. Keystonemiddleware used to
be able to mock urls to keystone in this case because keystoneauth
didn't modify the url in between. Keystonemiddleware didn't know how to
deal with the new url and the result was a Mock failure. This is
something that we can fix in keystonemiddleware once we have a version
of keystoneauth that covers all discovery cases and does the right
thing. NOTE: If you're mocking requests to keystone and using
keystoneauth somewhere in your project's tests, you'll have to deal with
this. More on that below.

2.) The other set of failures were because keystoneauth wasn't expecting
a URL without a path [1], causing an index error. I tested the fix [2]
against keystonemiddleware and it seems to take care of the issue. Eric
is working on a fix. Once that patch is fully tested and vetted we'll
roll another keystoneauth release (3.0.1) and use that to test
keystonemiddleware to handle the mocking issues described in #1. From
there we should be able to safely bump the minimum version to 3.0.1, and
avoid 3.0.0 altogether.

Let me know if you see anything else suspicious with respect to
keystoneauth. Thanks!


[0]
http://logs.openstack.org/84/486184/1/check/gate-keystonemiddleware-python27-ubuntu-xenial/7c079da/testr_results.html.gz
[1]
https://github.com/openstack/keystoneauth/blob/5715035f42780d8979d458e9f7e3c625962b2749/keystoneauth1/discover.py#L947
[2] https://review.openstack.org/#/c/486231/1
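Failure #2 above is a common class of bug: code that assumes a URL always has path segments raises IndexError on a bare "http://host". The toy reproduction below is illustrative only, not keystoneauth's actual code:

```python
# Minimal reproduction of an index error on a URL without a path,
# plus a guarded variant. (Illustrative -- not keystoneauth's code.)
from urllib.parse import urlparse


def version_from_url_buggy(url):
    # assumes at least one path segment after the host
    return urlparse(url).path.split("/")[1]


def version_from_url_safe(url):
    segments = [s for s in urlparse(url).path.split("/") if s]
    return segments[-1] if segments else None
```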

On 07/21/2017 04:43 PM, Lance Bragstad wrote:
> The patch to blacklist version 3.0.0 is working through the moment [0].
> We also have a WIP patch proposed to handled the cases exposed by
> keystonemiddleware [1].
>
>
> [0] https://review.openstack.org/#/c/486223/
> [1] https://review.openstack.org/#/c/486231/
>
>
> On 07/21/2017 03:58 PM, Lance Bragstad wrote:
>> We have a patch up to blacklist version 3.0.0 from global-requirements
>> [0]. We're also going to hold bumping the minimum version of
>> keystoneauth until we have things back to normal [1].
>>
>>
>> [0] https://review.openstack.org/#/c/486223/
>> [1] https://review.openstack.org/#/c/486160/1
>>
>> On 07/21/2017 03:00 PM, Lance Bragstad wrote:
>>> I started noticing some trivial changes failing in the
>>> keystonemiddleware gate [0]. The failures are in tests that use the
>>> keystoneauth1 library (8 tests are failing by my count), which we
>>> released a new version of yesterday [1]. I've proposed a patch to
>>> blacklist keystoneauth1 3.0.0 from keystonemiddleware until we can
>>> figure out what happened [2]. Status is being tracked in a bug against
>>> keystonemiddleware [3], but might need to be broadened if these changes
>>> are affecting other projects.
>>>
>>> I'll be in -keystone working through some of the issues if you need me.
>>>
>>> Thanks,
>>>
>>> Lance
>>>
>>> [0] https://review.openstack.org/#/c/486184/
>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119969.html
>>> [2] https://review.openstack.org/#/c/486213/
>>> [3] https://bugs.launchpad.net/keystonemiddleware/+bug/1705770
>>>
>>>
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-21 Thread Lance Bragstad
On Fri, Jul 21, 2017 at 9:39 PM, Monty Taylor  wrote:

> On 07/22/2017 07:14 AM, Lance Bragstad wrote:
>
>> After a little head scratching and a Pantera playlist later, we ended up
>> figuring out the main causes. The failures can be found in the gate [0].
>> The two failures are detailed below:
>>
>> 1.) Keystoneauth version 3.0.0 added a lot of functionality and might
>> return a different url depending on discovery. Keystonemiddleware use to
>> be able to mock urls to keystone in this case because keystoneauth
>> didn't modify the url in between. Keystonemiddleware didn't know how to
>> deal with the new url and the result was a Mock failure. This is
>> something that we can fix in keystonemiddleware once we have a version
>> of keystoneauth that covers all discovery cases and does the right
>> thing. NOTE: If you're mocking requests to keystone and using
>> keystoneauth somewhere in your project's tests, you'll have to deal with
>> this. More on that below.
>>
>
> Upon further digging - this one is actually quite a bit easier. There are
> cases where keystoneauth finds an unversioned discovery endpoint from a
> versioned endpoint in the catalog. It's done for quite a while, so the
> behavior isn't new. HOWEVER - a bug snuck in that caused the url it infers
> to come back without a trailing '/'. So the requests_mock entry in
> keystonemiddleware was for http://keystone.url/admin/ and keystoneauth
> was doing a get on http://keystone.url/admin.
>
> It's a behavior change and a bug, so we're working up a fix for it. The
> short story is though that once we fix it it should not cause anyone to
> need to update requests_mock entries.


Ah - thanks for keeping me honest here. Good to know both issues will be
fixed with the same patch. For context, this was the thought process as we
worked through things earlier [0].

I appreciate the follow-up!


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-07-21.log.html#t2017-07-21T19:57:30
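The trailing-slash mismatch Monty describes is easy to see with a toy mock registry: a mock registered for ".../admin/" won't match a request for ".../admin" unless URLs are normalized first. This stdlib sketch is analogous to, but not the same as, how requests_mock matches URLs:

```python
# Toy illustration of the trailing-slash mismatch between a registered
# mock URL and the URL keystoneauth actually requested.


def normalize(url):
    """Canonicalize by ensuring a trailing slash."""
    return url if url.endswith("/") else url + "/"


class MockRegistry:
    def __init__(self):
        self._routes = {}

    def register(self, url, payload):
        self._routes[normalize(url)] = payload

    def get(self, url):
        try:
            return self._routes[normalize(url)]
        except KeyError:
            raise LookupError("no mock registered for %s" % url)
```

Without normalize(), the get() for "http://keystone.url/admin" would miss the entry registered as "http://keystone.url/admin/" — exactly the failure mode seen in the keystonemiddleware tests.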


>
> 2.) The other set of failures were because keystoneauth wasn't expecting
>> a URL without a path [1], causing an index error. I tested the fix [2]
>> against keystonemiddleware and it seems to take care of the issue. Eric
>> is working on a fix. Once that patch is fully tested and vetted we'll
>> roll another keystoneauth release (3.0.1) and use that to test
>> keystonemiddleware to handle the mocking issues described in #1. From
>> there we should be able to safely bump the minimum version to 3.0.1, and
>> avoid 3.0.0 all together.
>>
>
> Patch is up for this one, and we've confirmed it fixes this issue.
>
> Let me know if you see anything else suspicious with respect to
>> keystoneauth. Thanks!
>>
>>
>> [0]
>> http://logs.openstack.org/84/486184/1/check/gate-keystonemid
>> dleware-python27-ubuntu-xenial/7c079da/testr_results.html.gz
>> [1]
>> https://github.com/openstack/keystoneauth/blob/5715035f42780
>> d8979d458e9f7e3c625962b2749/keystoneauth1/discover.py#L947
>> [2] https://review.openstack.org/#/c/486231/1
>>
>> On 07/21/2017 04:43 PM, Lance Bragstad wrote:
>>
>>> The patch to blacklist version 3.0.0 is working through the moment [0].
>>> We also have a WIP patch proposed to handled the cases exposed by
>>> keystonemiddleware [1].
>>>
>>>
>>> [0] https://review.openstack.org/#/c/486223/
>>> [1] https://review.openstack.org/#/c/486231/
>>>
>>>
>>> On 07/21/2017 03:58 PM, Lance Bragstad wrote:
>>>
>>>> We have a patch up to blacklist version 3.0.0 from global-requirements
>>>> [0]. We're also going to hold off on bumping the minimum version of
>>>> keystoneauth until we have things back to normal [1].
>>>>
>>>>
>>>> [0] https://review.openstack.org/#/c/486223/
>>>> [1] https://review.openstack.org/#/c/486160/1
>>>>
>>>> On 07/21/2017 03:00 PM, Lance Bragstad wrote:
>>>>
>>>>> I started noticing some trivial changes failing in the
>>>>> keystonemiddleware gate [0]. The failures are in tests that use the
>>>>> keystoneauth1 library (8 tests are failing by my count), which we
>>>>> released a new version of yesterday [1]. I've proposed a patch to
>>>>> blacklist keystoneauth1 3.0.0 from keystonemiddleware until we can
>>>>> figure out what happened [2]. Status is being tracked in a bug against
>>>>> keystonemiddleware [3], but might need to be broadened if these change

Re: [openstack-dev] [keystone] keystoneauth1 3.0.0 broken keystonemiddleware

2017-07-22 Thread Lance Bragstad
Thanks Dims,

Looks like Morgan and Monty have it working through the gate now.

On Sat, Jul 22, 2017 at 7:26 AM, Davanum Srinivas  wrote:

> Lance, other keystone cores,
>
> there's a request for 3.0.1, but one of the reviews that it needs is
> not merged yet
>
> https://review.openstack.org/#/c/486231/
>
>
> Thanks,
> Dims
>
> On Fri, Jul 21, 2017 at 11:40 PM, Lance Bragstad 
> wrote:
> >
> >
> > On Fri, Jul 21, 2017 at 9:39 PM, Monty Taylor 
> wrote:
> >>
> >> On 07/22/2017 07:14 AM, Lance Bragstad wrote:
> >>>
> >>> After a little head scratching and a Pantera playlist later, we ended up
> >>> figuring out the main causes. The failures can be found in the gate [0].
> >>> The two failures are detailed below:
> >>>
> >>> 1.) Keystoneauth version 3.0.0 added a lot of functionality and might
> >>> return a different url depending on discovery. Keystonemiddleware used to
> >>> be able to mock urls to keystone in this case because keystoneauth
> >>> didn't modify the url in between. Keystonemiddleware didn't know how to
> >>> deal with the new url and the result was a Mock failure. This is
> >>> something that we can fix in keystonemiddleware once we have a version
> >>> of keystoneauth that covers all discovery cases and does the right
> >>> thing. NOTE: If you're mocking requests to keystone and using
> >>> keystoneauth somewhere in your project's tests, you'll have to deal with
> >>> this. More on that below.
> >>
> >>
> >> Upon further digging - this one is actually quite a bit easier. There are
> >> cases where keystoneauth finds an unversioned discovery endpoint from a
> >> versioned endpoint in the catalog. It's done that for quite a while, so the
> >> behavior isn't new. HOWEVER - a bug snuck in that caused the url it infers
> >> to come back without a trailing '/'. So the requests_mock entry in
> >> keystonemiddleware was for http://keystone.url/admin/ and keystoneauth was
> >> doing a get on http://keystone.url/admin.
> >>
> >> It's a behavior change and a bug, so we're working up a fix for it. The
> >> short story, though, is that once we fix it, it should not cause anyone to need
> >> to update requests_mock entries.
> >
> >
> > Ah - thanks for keeping me honest here. Good to know both issues will be
> > fixed with the same patch. For context, this was the thought process as we
> > worked through things earlier [0].
> >
> > I appreciate the follow-up!
> >
> >
> > [0]
> > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-07-21.log.html#t2017-07-21T19:57:30
> >
> >>
> >>
> >>> 2.) The other set of failures were because keystoneauth wasn't expecting
> >>> a URL without a path [1], causing an index error. I tested the fix [2]
> >>> against keystonemiddleware and it seems to take care of the issue. Eric
> >>> is working on a fix. Once that patch is fully tested and vetted we'll
> >>> roll another keystoneauth release (3.0.1) and use that to test
> >>> keystonemiddleware to handle the mocking issues described in #1. From
> >>> there we should be able to safely bump the minimum version to 3.0.1, and
> >>> avoid 3.0.0 altogether.
> >>
> >>
> >> Patch is up for this one, and we've confirmed it fixes this issue.
> >>
> >>> Let me know if you see anything else suspicious with respect to
> >>> keystoneauth. Thanks!
> >>>
> >>>
> >>> [0]
> >>> http://logs.openstack.org/84/486184/1/check/gate-keystonemiddleware-python27-ubuntu-xenial/7c079da/testr_results.html.gz
> >>> [1]
> >>> https://github.com/openstack/keystoneauth/blob/5715035f42780d8979d458e9f7e3c625962b2749/keystoneauth1/discover.py#L947
> >>> [2] https://review.openstack.org/#/c/486231/1
> >>>
> >>> On 07/21/2017 04:43 PM, Lance Bragstad wrote:
> >>>>
> >>>> The patch to blacklist version 3.0.0 is working through the gate at the
> >>>> moment [0]. We also have a WIP patch proposed to handle the cases exposed by
> >>>> keystonemiddleware [1].
> >>>>
> >>>>
> >>>> [0] https://review.openstack.org/#/c/486223/
>

Re: [openstack-dev] [keystone] [all] keystoneauth version discovery is here

2017-07-22 Thread Lance Bragstad
There is a new release of keystoneauth1 available (3.0.1) that includes
fixes for the regressions that broke various projects. Full context is
available in another thread [0]. Please use keystoneauth1 3.0.1 instead of 3.0.0, which
we've blacklisted [1].

Thanks!


[0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/120012.html
[1] https://review.openstack.org/#/c/486223/
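If you pin keystoneauth1 directly rather than through global-requirements,
the exclusion is a one-line requirements entry (the version floor below is
illustrative):

```
# requirements.txt-style pin; the >= floor is illustrative
keystoneauth1>=2.21.0,!=3.0.0
```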

On Thu, Jul 20, 2017 at 5:41 PM, Lance Bragstad  wrote:

> Happy Thursday,
>
> We just released keystoneauth 3.0.0 [0], which contains a bunch of
> built-in functionality to handle version discovery so that you don't
> have to! Check out the documentation for all the details [1].
>
> Big thanks to Eric and Monty for tackling this work, along with all the
> folks who diligently reviewed it.
>
>
> [0] https://review.openstack.org/#/c/485688/
> [1] https://docs.openstack.org/keystoneauth/latest/using-sessions.html
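To give a feel for what the library now automates: version discovery boils
down to fetching the service's root document and choosing the best available
version. The sketch below is purely illustrative (a made-up versions document
and helper, not keystoneauth's API):

```python
# Toy version-discovery sketch; the document shape mirrors what an
# Identity root endpoint returns, but this is not keystoneauth code.
versions_doc = {
    'versions': [
        {'id': 'v2.0', 'status': 'deprecated',
         'links': [{'rel': 'self', 'href': 'https://keystone.example.com/v2.0/'}]},
        {'id': 'v3.8', 'status': 'stable',
         'links': [{'rel': 'self', 'href': 'https://keystone.example.com/v3/'}]},
    ]
}

def pick_endpoint(doc):
    # Keep only usable versions, then take the numerically newest one.
    usable = [v for v in doc['versions']
              if v['status'].lower() in ('stable', 'current')]
    best = max(usable,
               key=lambda v: tuple(int(x) for x in v['id'].lstrip('v').split('.')))
    return next(l['href'] for l in best['links'] if l['rel'] == 'self')

print(pick_endpoint(versions_doc))  # https://keystone.example.com/v3/
```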
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] office hours report 2017-7-25

2017-07-25 Thread Lance Bragstad
Hey all,

Nearly all of today's activity in office hours consisted of bug triage.
We now have a list of target bugs for rc1 [0]. Full logs can be found
below [1]. The following is a summary of what was accomplished:


Bug #1669080 in OpenStack Identity (keystone): ""openstack role create"
should support "--description""
https://bugs.launchpad.net/keystone/+bug/1669080
updated and targeted for pike-3, reviewed proposed solution

The following were bumped to pike-rc1:

Bug #169 in OpenStack Identity (keystone): "ec2tokens errors in v2
api after Ocata upgrade"
https://bugs.launchpad.net/keystone/+bug/169

Bug #1694460 in OpenStack Identity (keystone): "Keystone docs need to be
migrated from the OpenStack-manuals"
https://bugs.launchpad.net/keystone/+bug/1694460

Bug #1705081 in OpenStack Identity (keystone): "DELETE project API is
failing in forbidden(403) error message"
https://bugs.launchpad.net/keystone/+bug/1705081

Bug #1610138 in OpenStack Identity (keystone): "openstack catalog list
error in multi region"
https://bugs.launchpad.net/keystone/+bug/1610138

Bug #1635389 in OpenStack Identity (keystone):
"keystone.contrib.ec2.controllers.Ec2Controller is untested"
https://bugs.launchpad.net/keystone/+bug/1635389

Bug #1694525 in OpenStack Identity (keystone): "keystone reports 404
User Not Found during grenade tests"
https://bugs.launchpad.net/keystone/+bug/1694525
needs additional investigation

Bug #1700847 in OpenStack Identity (keystone): "tempest plugin tests are
broken"
https://bugs.launchpad.net/keystone/+bug/1700847
patch proposed for review

Bug #1687616 in OpenStack Identity (keystone): "KeyError 'options' while
doing zero downtime upgrade from N to O"
https://bugs.launchpad.net/keystone/+bug/1687616
need to recreate

Bug #1693690 in OpenStack Identity (keystone): "keystone federation
mapping rules with blacklist"
https://bugs.launchpad.net/keystone/+bug/1693690
the summary outlines next steps, and a documentation fix with the proper
wording is in order

Bug #1692090 in OpenStack Identity (keystone): "_dn_to_id ignores
user_id_attribute"
https://bugs.launchpad.net/keystone/+bug/1692090
set importance, reviewed proposed solution

The following were marked as invalid:

Bug #1618705 in OpenStack Identity (keystone): "keystone.cache.redis
config arguments url conflict host and port"
https://bugs.launchpad.net/keystone/+bug/1618705

Bug #1685732 in OpenStack Identity (keystone): "create_keystone_accounts
error on latest devstack ocata branch(Ubuntu 16.04 LTS)"
https://bugs.launchpad.net/keystone/+bug/1685732

The following were marked as incomplete and are waiting on more information:

Bug #1687073 in OpenStack Identity (keystone): "Keystone Memory usage
remains high"
https://bugs.launchpad.net/keystone/+bug/1687073

Bug #1694591 in OpenStack Identity (keystone): "Horizon gives 401
authorization error after oidc configuration"
https://bugs.launchpad.net/keystone/+bug/1694591

[0] https://goo.gl/9vuCjS
[1]
http://eavesdrop.openstack.org/meetings/keystone_office_hours/2017/keystone_office_hours.2017-07-25-19.00.log.html




[openstack-dev] [keystone] canceling policy meeting 2017-07-26

2017-07-26 Thread Lance Bragstad
Hey all,

There isn't anything on the agenda for today's policy meeting [0] and I
know several members of the team are wrapping things up for pike-3. As a
result, I'm canceling the policy meeting today and we can reconvene next
week after the dust settles.

Thanks,

Lance


[0] https://etherpad.openstack.org/p/keystone-policy-meeting






Re: [openstack-dev] [keystone] Queens PTG Planning

2017-07-27 Thread Lance Bragstad
I've added a section to the etherpad [0] for attendees. We need to start
getting an idea of how many people plan on attending the PTG (for
scheduling purposes). Please add your name and IRC nick to the list.

Thanks

[0] https://etherpad.openstack.org/p/keystone-queens-ptg


On 07/05/2017 11:22 AM, Lance Bragstad wrote:
> Hey all,
>
> I've started an etherpad [0] for us to collect topics and ideas for the
> PTG in September. I hope to follow the same planning format as last
> time. Everyone has the opportunity to add topics to the agenda and after
> some time we'll group related topics and start building a formal schedule.
>
> The etherpad has two lists. One for project-specific topics and one for
> cross-project topics. As soon as we firm up the things we need to
> collaborate on with other projects, I'll start coordinating with other
> teams. These were the sessions we had to work around last time due to
> schedules. We can sprinkle the project-related topics in to fill the gaps.
>
> Let me know if you have any questions.
>
> Thanks!
>
>
> [0] https://etherpad.openstack.org/p/keystone-queens-ptg
>
>






[openstack-dev] [requirements][release][oslo] FFE for oslo.policy

2017-08-01 Thread Lance Bragstad
I was cleaning up a few documentation things for keystone and noticed an
issue with how the configuration reference was rendering. It turns out
the oslo.policy library needed a few tweaks to the show-policy directive
along with some changes to keystone that allowed us to properly render
all default policies. I documented these in a bug report tagging both
projects [0].

Two fixes were made to the oslo.policy library (thanks, Doug!) that will
allow projects to render their entire policy document using the
show-policy directive. Both fixes have merged in oslo.policy master and
have been backported to stable/pike. I also have a release proposed to
cut a new version of oslo.policy for us to use for pike [1].
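For anyone wiring this into their own docs, the directive ends up looking
roughly like this in an rst file (the config-file path is illustrative):

```rst
.. show-policy::
   :config-file: config-generator/keystone-policy-generator.conf
```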

Opening this up for discussion to see if we can grant an FFE so that we
can use the proper version of oslo.policy. More context in IRC as well [2].

Let me know if you have any questions. Thanks!

Lance

[0] https://bugs.launchpad.net/keystone/+bug/1707246
[1] https://review.openstack.org/#/c/489599/
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2017-08-01.log.html#t2017-08-01T18:14:57






[openstack-dev] [keystone] office hours report 2017-08-01

2017-08-01 Thread Lance Bragstad
Hey all,

Here is a condensed report of what was accomplished during office hours
today. Most activity focused on reviewing fixes in flight. Full log can
be found in IRC [0].

Bug #1635389 in OpenStack Identity (keystone):
"keystone.contrib.ec2.controllers.Ec2Controller is untested"
https://bugs.launchpad.net/keystone/+bug/1635389
participants: cmurphy, lbragstad, jose castro leon
Reviewed and approved fix

Bug #169 in OpenStack Identity (keystone): "ec2tokens errors in v2
api after Ocata upgrade"
https://bugs.launchpad.net/keystone/+bug/169
participants: cmurphy, lbragstad, jose castro leon
Reviewed and approved fix

Bug #1694460 in OpenStack Identity (keystone): "Keystone docs need to be
migrated from the OpenStack-manuals"
https://bugs.launchpad.net/keystone/+bug/1694460
participants: lbragstad, cmurphy, gagehugo
Proposed and reviewed remaining patches to complete documentation migration

Bug #1689468 in OpenStack Identity (keystone): "odd keystone behavior
when X-Auth-Token ends with carriage return"
https://bugs.launchpad.net/keystone/+bug/1689468
participants: cmurphy, samueldmq
Merged fix

Bug #1708005 in keystoneauth: "6 out 10
keystone.tests.unit.test_cert_setup.* unit test cases failed in
stable/newton branch"
https://bugs.launchpad.net/keystoneauth/+bug/1708005
participants: henglinyang
Reported and opened bug

Bug #1701324 in OpenStack Identity (keystone): "Removing duplicated
items doesn't work in case of federations"
https://bugs.launchpad.net/keystone/+bug/1701324
participants: lbragstad, dstepanenko
Confirmed and submitted patch to expose the bug

Bug #1707993 in keystoneauth: "EndpointData.url should regurgitate my
endpoint_override"
https://bugs.launchpad.net/keystoneauth/+bug/1707993
participants: efried
Opened and triaged bug

[0]
http://eavesdrop.openstack.org/meetings/keystone_office_hours/2017/keystone_office_hours.2017-08-01-18.58.log.html




[openstack-dev] [keystone][policy] no policy meeting today 2017-08-02

2017-08-02 Thread Lance Bragstad
A lot of the team is focused on getting pike-rc1 out the door and on
reviews. The agenda is also empty. Let's cancel today and pick up next
week or shortly before the PTG to organize our policy sessions there.

Thanks,

Lance






<    1   2   3   4   5   >