Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Morgan Fainberg
I can spin up the code and make it available. There is an example
(highly Flask-specific right now, but it would be easy to make generic)
that has been implemented in keystone [0]. Where should this code live? A new
library? oslo.?  The aforementioned example would need "external
middleware via config" loading capabilities, but that isn't hard to do:
just add an oslo.cfg opt and reference it.

[0]
https://github.com/openstack/keystone/blob/master/keystone/server/flask/core.py#L93
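As a rough illustration of the "external middleware via config" loading Morgan describes, here is a minimal stdlib-only sketch. The names (`EXTERNAL_MIDDLEWARE`, `load_external_middleware`) are hypothetical; a real version would read the list from an oslo.config opt and might use stevedore for entry-point loading instead of bare imports.

```python
# Minimal stdlib-only sketch of loading external WSGI middleware from a
# configured list of "module:callable" strings. Hypothetical names; the
# real keystone code linked at [0] is Flask-specific and uses oslo.config.
import importlib

# Stand-in for an oslo.cfg ListOpt, e.g. [wsgi] external_middleware = ...
EXTERNAL_MIDDLEWARE = ["wsgiref.validate:validator"]

def load_external_middleware(app, paths):
    """Wrap a WSGI app with middleware named as 'module:callable' strings."""
    for path in paths:
        module_name, _, attr = path.partition(":")
        factory = getattr(importlib.import_module(module_name), attr)
        app = factory(app)  # each factory takes and returns a WSGI app
    return app

def simple_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

wrapped = load_external_middleware(simple_app, EXTERNAL_MIDDLEWARE)
```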
Cheers,
--Morgan

On Mon, Oct 22, 2018 at 8:12 AM Sean McGinnis  wrote:

> On Mon, Oct 22, 2018 at 07:49:35AM -0700, Morgan Fainberg wrote:
> > I should be able to do a write up for Keystone's removal of paste *and*
> > move to flask soon.
> >
> > I can easily extract the bit of code I wrote to load our external
> > middleware (and add an external loader) for the transition away from
> paste.
> >
> > I also think paste is terrible, and would be willing to help folks move
> off
> > of it rather than maintain it.
> >
> > --Morgan
> >
>
> Do I detect a volunteer to champion a cycle goal? :)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Morgan Fainberg
Also, doesn't bitbucket have a git interface now too (optionally)?

On Mon, Oct 22, 2018, 07:49 Morgan Fainberg 
wrote:

> I should be able to do a write up for Keystone's removal of paste *and*
> move to flask soon.
>
> I can easily extract the bit of code I wrote to load our external
> middleware (and add an external loader) for the transition away from paste.
>
> I also think paste is terrible, and would be willing to help folks move
> off of it rather than maintain it.
>
> --Morgan
>
> On Mon, Oct 22, 2018, 07:38 Thomas Goirand  wrote:
>
>> On 10/22/18 12:55 PM, Chris Dent wrote:
>> >> My assumption is that it's "something we plan to minimally maintain
>> >> because we depend on it". in which case all options would work: the
>> >> exact choice depends on whether there is anybody interested in helping
>> >> maintaining it, and where those contributors prefer to do the work.
>> >
>> > Thus far I'm not hearing any volunteers. If that continues to be the
>> > case, I'll just keep it on bitbucket as that's the minimal change.
>>
>> Could you please move it to Github, so that at least, it's easier to
>> check out? Mercurial is always a pain...
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Morgan Fainberg
I should be able to do a write up for Keystone's removal of paste *and*
move to flask soon.

I can easily extract the bit of code I wrote to load our external
middleware (and add an external loader) for the transition away from paste.

I also think paste is terrible, and would be willing to help folks move off
of it rather than maintain it.

--Morgan

On Mon, Oct 22, 2018, 07:38 Thomas Goirand  wrote:

> On 10/22/18 12:55 PM, Chris Dent wrote:
> >> My assumption is that it's "something we plan to minimally maintain
> >> because we depend on it". in which case all options would work: the
> >> exact choice depends on whether there is anybody interested in helping
> >> maintaining it, and where those contributors prefer to do the work.
> >
> > Thus far I'm not hearing any volunteers. If that continues to be the
> > case, I'll just keep it on bitbucket as that's the minimal change.
>
> Could you please move it to Github, so that at least, it's easier to
> check out? Mercurial is always a pain...
>
> Cheers,
>
> Thomas Goirand (zigo)
>


Re: [openstack-dev] [oslo][glance][cinder][nova][keystone] healthcheck

2018-10-12 Thread Morgan Fainberg
Keystone no longer uses paste (since Rocky), as paste is unmaintained. The
healthcheck app is permanently enabled for keystone at
/healthcheck. We chose to make it a default bit of
functionality in how we have Keystone deployed. We also have unit tests in
place to ensure we don't regress and that healthcheck doesn't change behavior
down the line (in future releases). You should be able to configure additional
bits for healthcheck in keystone.conf (e.g. detailed mode, disable-by-file, etc.).
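For services that still load middleware through paste (as Florian asks about nova below), a hedged sketch of wiring the oslo.middleware healthcheck app into api-paste.ini might look like the following. The file path and composite section name are illustrative; check them against the api-paste.ini your service actually ships.

```ini
# Sketch only: verify section/factory names against your deployed api-paste.ini.
[app:healthcheck]
paste.app_factory = oslo_middleware:Healthcheck.app_factory
backends = disable_by_file
disable_by_file_path = /etc/nova/healthcheck_disable

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
; keep the existing version mappings (/v2, /v2.1, ...) and add:
/healthcheck = healthcheck
```

With disable_by_file configured, touching the named file makes /healthcheck start reporting the service as unavailable, which is useful for draining a node from a load balancer.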

Cheers,
--Morgan

On Fri, Oct 12, 2018 at 3:07 AM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:

> Hi,
>
> I tried to configure the healthcheck framework (/healthcheck) for nova,
> cinder, glance and keystone but it looks like paste is not used with
> keystone anymore?
>
>
> https://github.com/openstack/keystone/commit/8bf335bb015447448097a5c08b870da8e537a858
>
> In our rocky deployment the healthcheck is working for keystone only and
> I failed to configure it for, eg. nova-api.
>
> Nova seems to use paste?
>
> Is there any example nova api-paste.ini with the oslo healthcheck
> middleware enabled? The documentation is hard to understand - at least
> for me.
>
> Thank you for your help.
>
> All the best,
> Florian


Re: [openstack-dev] [keystone] Keystone Team Update - Week of 1 October 2018

2018-10-05 Thread Morgan Fainberg
On Fri, Oct 5, 2018, 07:04 Colleen Murphy  wrote:

> # Keystone Team Update - Week of 1 October 2018
>
> ## News
>
> ### JSON-home
>
> As Morgan works through the flaskification project, it's been clear that
> some of the JSON-home[1] code could use some refactoring and that the
> document itself is inconsistent[2], but we're unclear whether anyone uses
> this or cares if it changes. If you have ever used keystone's JSON-home
> implementation, come talk to us.
>
> [1] https://adam.younglogic.com/2018/01/using-json-home-keystone/
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T18:22:25
>
> ## Open Specs
>
> Search query: https://bit.ly/2Pi6dGj
>
> We still only have three specs targeted at Stein, but Adam has revived
> several "ongoing" specs that can use some eyes, please take a look[3].
>
> [3] https://bit.ly/2OyDLTh
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2pquOwT
>
> We merged 19 changes this week.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2PUk84S
>
> There are 41 changes that are passing CI, not in merge conflict, have no
> negative reviews and aren't proposed by bots.
>
> One of these is a proposal to add rate-limiting to keystoneauth[4], would
> be good to get some more reactions to it.
>
> Another is the flaskification patch of doom[5] which will definitely need
> some close attention.
>
> [4] https://review.openstack.org/605043
> [5] https://review.openstack.org/603461
>
> ## Bugs
>
> This week we opened 5 new bugs and closed 7.
>
> Bugs opened (5)
> Bug #1795487 (keystone:Undecided) opened by Amy Marrich
> https://bugs.launchpad.net/keystone/+bug/1795487
> Bug #1795800 (keystone:Undecided) opened by Andy Ngo
> https://bugs.launchpad.net/keystone/+bug/1795800
> Bug #1796077 (keystone:Undecided) opened by Ching Kuo
> https://bugs.launchpad.net/keystone/+bug/1796077
> Bug #1796247 (keystone:Undecided) opened by Yang Youseok
> https://bugs.launchpad.net/keystone/+bug/1796247
> Bug #1795496 (oslo.policy:Undecided) opened by Adam Young
> https://bugs.launchpad.net/oslo.policy/+bug/1795496
>
> Bugs closed (3)
> Bug #1782687 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1782687
> Bug #1796077 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1796077
> Bug #1796247 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1796247
>
> Bugs fixed (4)
> Bug #1794552 (keystone:Medium) fixed by Morgan Fainberg
> https://bugs.launchpad.net/keystone/+bug/1794552
> Bug #1753585 (keystone:Low) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/keystone/+bug/1753585
> Bug #1615076 (keystone:Undecided) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/keystone/+bug/1615076
> Bug #1615076 (python-keystoneclient:Undecided) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/python-keystoneclient/+bug/1615076
>
> ## Milestone Outlook
>
> https://releases.openstack.org/stein/schedule.html
>
> Now just 3 weeks away from the spec proposal freeze.
>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad:
> https://etherpad.openstack.org/p/keystone-team-newsletter
> Dashboard generated using gerrit-dash-creator and
> https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67



As an update on the JSON-home bits: I have worked around the changes that
might have been needed. The document should remain the same as before.

--Morgan

>
>


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-28 Thread Morgan Fainberg
Ideally I would like to see it in the form of least specific to most
specific. But more importantly, in a way that there are no additional
delimiters between the service type and the resource. Finally, I do not
like the change of plurality depending on action type.

I propose we consider

*<service_type>:<resource>:<action>[:<sub-action>]*

Example for keystone (note: the action names below are strictly examples; I am
fine with whatever form those actions take):
*identity:projects:create*
*identity:projects:delete*
*identity:projects:list*
*identity:projects:get*

It keeps things simple and consistent when you're looking through overrides
/ defaults.
--Morgan
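The shape of the proposed convention can be sketched with a simple validity check. This is purely illustrative (not keystone or oslo.policy code), and the regex placeholder names are assumptions matching the examples above:

```python
# Illustrative check of the proposed policy-name convention:
# <service_type>:<resource>:<action>, with an optional :<sub-action> segment.
import re

POLICY_NAME = re.compile(
    r"^[a-z][a-z0-9-]*"        # service type, e.g. "identity"
    r":[a-z][a-z0-9_-]*"       # resource, e.g. "projects"
    r":[a-z][a-z0-9_-]*"       # action, e.g. "create"
    r"(:[a-z][a-z0-9_-]*)?$"   # optional sub-action
)

for name in ("identity:projects:create",
             "identity:projects:list",
             "compute:servers:migrations:list"):
    assert POLICY_NAME.match(name), name

# An old Octavia-style name would not fit the proposed convention:
assert not POLICY_NAME.match("os_load-balancer_api:loadbalancer:post")
```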

On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  wrote:

> Bumping this thread again and proposing two conventions based on the
> discussion here. I propose we decide on one of the two following
> conventions:
>
> *<service-type>:<resource>:<action>*
>
> or
>
> *<service-type>:<action>_<resource>*
>
> Where <service-type> is the corresponding service type of the project [0],
> and <action> is either create, get, list, update, or delete. I think
> decoupling the method from the policy name should aid in consistency,
> regardless of the underlying implementation. The HTTP method specifics can
> still be relayed using oslo.policy's DocumentedRuleDefault object [1].
>
> I think the plurality of the resource should default to what makes sense
> for the operation being carried out (e.g., list:foobars, create:foobar).
>
> I don't mind the first one because it's clear about what the delimiter is
> and it doesn't look weird when projects have something like:
>
> <service-type>:<resource>:<sub-resource>:<action>
>
> If folks are ok with this, I can start working on some documentation that
> explains the motivation for this. Afterward, we can figure out how we want
> to track this work.
>
> What color do you want the shed to be?
>
> [0] https://service-types.openstack.org/service-types.json
> [1]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
>
> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad 
> wrote:
>
>>
>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
>> wrote:
>>
>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt <
>>> j...@johngarbutt.com> wrote 
>>>  > tl;dr: +1 consistent names
>>>  > I would make the names mirror the API... because the Operator setting
>>> them knows the API, not the code. Ignore the crazy names in Nova, I
>>> certainly hate them.
>>>
>>> Big +1 on consistent naming  which will help operator as well as
>>> developer to maintain those.
>>>
>>>  >
>>>  > Lance Bragstad  wrote:
>>>  > > I'm curious if anyone has context on the "os-" part of the format?
>>>  >
>>>  > My memory of the Nova policy mess...
>>>  > * Nova's policy rules traditionally followed the patterns of the code
>>>  >   ** Yes, horrible, but it happened.
>>>  > * The code used to have the OpenStack API and the EC2 API, hence the "os"
>>>  > * API used to expand with extensions, so the policy name is often based
>>>  >   on extensions
>>>  >   ** note most of the extension code has now gone, including lots of
>>>  >   related policies
>>>  > * Policy in code was focused on getting us to a place where we could
>>>  >   rename policy
>>>  >   ** Whoop whoop by the way, it feels like we are really close to
>>>  >   something sensible now!
>>>  > Lance Bragstad  wrote:
>>>  > Thoughts on using create, list, update, and delete as opposed to
>>> post, get, put, patch, and delete in the naming convention?
>>>  > I could go either way as I think about "list servers" in the API. But
>>> my preference is for the URL stub and POST, GET, etc.
>>>  > On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad 
>>> wrote: If we consider dropping "os", should we entertain dropping "api",
>>> too? Do we have a good reason to keep "api"? I wouldn't be opposed to
>>> simple service types (e.g. "compute" or "loadbalancer").
>>>  > +1. The API is known as "compute" in api-ref, so the policy should be
>>> for "compute", etc.
>>>
>>> Agree on mapping the policy name to api-ref as much as possible. Other
>>> than the policy name having 'os-', we also have 'os-' in resource names in
>>> nova API urls, like /os-agents, /os-aggregates, etc. (almost every resource
>>> except servers and flavors). As we cannot get rid of those from the API
>>> urls, do we need to keep the same in policy naming too? Or we can have a
>>> policy name like compute:agents:create/post, but that mismatches api-ref,
>>> where the agents resource url is os-agents.
>>>
>>
>> Good question. I think this depends on how the service does policy
>> enforcement.
>>
>> I know we did something like this in keystone, which required policy
>> names and method names to be the same:
>>
>>   "identity:list_users": "..."
>>
>> Because the initial implementation of policy enforcement used a decorator
>> like this:
>>
>>   from keystone import controller
>>
>>   @controller.protected
>>   def list_users(self):
>>   ...
>>
>> Having the policy name the same as the method name made it easier for the
>> decorator implementation to resolve the policy needed to protect the API
>> because it just looked at the name of the wrapped method. The advantage was
>> that 
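(The quoted message is truncated in the archive.) The decorator pattern Lance describes above — resolving the policy name from the name of the wrapped method — can be sketched as follows. The `enforce()` function and the `GRANTED` store are hypothetical stand-ins; keystone's real `@controller.protected` did considerably more.

```python
# Sketch of deriving a policy name from the wrapped method's name, in the
# spirit of keystone's old @controller.protected decorator. enforce() and
# GRANTED are hypothetical stand-ins for oslo.policy enforcement.
import functools

GRANTED = {"identity:list_users"}  # pretend policy store

def enforce(policy_name):
    if policy_name not in GRANTED:
        raise PermissionError(policy_name)

def protected(func):
    # The policy name is resolved from the method name itself:
    policy_name = "identity:%s" % func.__name__

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        enforce(policy_name)
        return func(*args, **kwargs)
    return wrapper

@protected
def list_users():
    return ["alice", "bob"]

@protected
def delete_user():
    return "deleted"
```

The convenience is clear (no policy name to spell out per API), but it also explains the coupling Lance mentions: renaming a method silently changes which policy protects it.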

Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Morgan Fainberg
This discussion was also not about user-assigned IDs, but predictable IDs
with the auto provisioning. We still want it to be something keystone
controls (locally). It might be a hash of the domain ID and a value from the
assertion (similar to the LDAP user ID generator). As long as, within an
environment, the IDs are predictable when auto provisioning via federation,
we should be good. And the problem of the totally unknown ID until
provisioning could be made less of an issue for someone working within a
massively federated edge environment.

I don't want user/explicit admin set IDs.
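The "predictable but not user-settable" idea can be sketched with a deterministic hash, e.g. a name-based UUID over the domain ID plus the assertion's user identifier. This is only an illustration of the approach (the namespace UUID and field names are assumptions, not keystone's actual scheme):

```python
# Hedged sketch: derive a stable user ID from (domain ID, assertion user),
# similar in spirit to keystone's LDAP user ID generator. The namespace
# UUID here is just the RFC 4122 DNS namespace used as an example.
import uuid

NAMESPACE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

def predictable_user_id(domain_id, assertion_user):
    return uuid.uuid5(NAMESPACE, "%s:%s" % (domain_id, assertion_user)).hex

# Auto-provisioning the same federated user twice yields the same ID:
a = predictable_user_id("edge-site-1", "alice@idp.example.org")
b = predictable_user_id("edge-site-1", "alice@idp.example.org")
assert a == b
# ...while a different domain yields a different ID:
assert predictable_user_id("edge-site-2", "alice@idp.example.org") != a
```

Because the ID is computed, not supplied, keystone still controls it locally, yet every edge site that applies the same scheme arrives at the same ID for the same federated user.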

On Wed, Sep 26, 2018, 04:43 Jay Pipes  wrote:

> On 09/26/2018 05:10 AM, Colleen Murphy wrote:
> > Thanks for the summary, Ildiko. I have some questions inline.
> >
> > On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
> >
> > 
> >
> >>
> >> We agreed to prefer federation for Keystone and came up with two work
> >> items to cover missing functionality:
> >>
> >> * Keystone to trust a token from an ID Provider master and when the auth
> >> method is called, perform an idempotent creation of the user, project
> >> and role assignments according to the assertions made in the token
> >
> > This sounds like it is based on the customizations done at Oath, which
> to my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
> >
> >> * Keystone should support the creation of users and projects with
> >> predictable UUIDs (eg.: hash of the name of the users and projects).
> >> This greatly simplifies Image federation and telemetry gathering
> >
> > I was in and out of the room and don't recall this discussion exactly.
> We have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that that idea here? Or could someone provide more details
> on why this is needed?
>
> Hi Colleen,
>
> I wasn't in the room for this conversation either, but I believe the
> "use case" wanted here is mostly a convenience one. If the edge
> deployment is composed of hundreds of small Keystone installations and
> you have a user (e.g. an NFV MANO user) which should have visibility
> across all of those Keystone installations, it becomes a hassle to need
> to remember (or in the case of headless users, store some lookup of) all
> the different tenant and user UUIDs for what is essentially the same
> user across all of those Keystone installations.
>
> I'd argue that as long as it's possible to create a Keystone tenant and
> user with a unique name within a deployment, and as long as it's
> possible to authenticate using the tenant and user *name* (i.e. not the
> UUID), then this isn't too big of a problem. However, I do know that a
> bunch of scripts and external tools rely on setting the tenant and/or
> user via the UUID values and not the names, so that might be where this
> feature request is coming from.
>
> Hope that makes sense?
>
> Best,
> -jay
>


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-15 Thread Morgan Fainberg
I am generally opposed to needlessly prefixing things with "os".

I would advocate to drop it.


On Fri, Sep 14, 2018, 20:17 Lance Bragstad  wrote:

> Ok - yeah, I'm not sure what the history behind that is either...
>
> I'm mainly curious if that's something we can/should keep or if we are
> opposed to dropping 'os' and 'api' from the convention (e.g.
> load-balancer:loadbalancer:post as opposed to
> os_load-balancer_api:loadbalancer:post) and just sticking with the
> service-type?
>
> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson 
> wrote:
>
>> I don't know for sure, but I assume it is short for "OpenStack" and
>> prefixing OpenStack policies vs. third party plugin policies for
>> documentation purposes.
>>
>> I am guilty of borrowing this from existing code examples[0].
>>
>> [0]
>> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>>
>> Michael
>> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
>> wrote:
>> >
>> >
>> >
>> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
>> wrote:
>> >>
>> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>> >> which maps to the "os-<service>-api:<resource>:<method>" format.
>> >
>> >
>> > Thanks for explaining the justification, Michael.
>> >
>> > I'm curious if anyone has context on the "os-" part of the format? I've
>> seen that pattern in a couple different projects. Does anyone know about
>> its origin? Was it something we converted to our policy names because of
>> API names/paths?
>> >
>> >>
>> >>
>> >> I selected it as it uses the service-type[1], references the API
>> >> resource, and then the method. So it maps well to the API reference[2]
>> >> for the service.
>> >>
>> >> [0]
>> https://docs.openstack.org/octavia/latest/configuration/policy.html
>> >> [1] https://service-types.openstack.org/
>> >> [2]
>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>> >>
>> >> Michael
>> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
>> >> >
>> >> > So +1
>> >> >
>> >> >
>> >> >
>> >> > Tim
>> >> >
>> >> >
>> >> >
>> >> > From: Lance Bragstad 
>> >> > Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)" 
>> >> > Date: Wednesday, 12 September 2018 at 20:43
>> >> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>, OpenStack Operators <
>> openstack-operat...@lists.openstack.org>
>> >> > Subject: [openstack-dev] [all] Consistent policy names
>> >> >
>> >> >
>> >> >
>> >> > The topic of having consistent policy names has popped up a few
>> times this week. Ultimately, if we are to move forward with this, we'll
>> need a convention. To help with that a little bit I started an etherpad [0]
>> that includes links to policy references, basic conventions *within* that
>> service, and some examples of each. I got through quite a few projects this
>> morning, but there are still a couple left.
>> >> >
>> >> >
>> >> >
>> >> > The idea is to look at what we do today and see what conventions we
>> can come up with to move towards, which should also help us determine how
>> much each convention is going to impact services (e.g. picking a convention
>> that will cause 70% of services to rename policies).
>> >> >
>> >> >
>> >> >
>> >> > Please have a look and we can discuss conventions in this thread. If
>> we come to agreement, I'll start working on some documentation in
>> oslo.policy so that it's somewhat official because starting to renaming
>> policies.
>> >> >
>> >> >
>> >> >
>> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names
>> >> >
>> >> > ___
>> >> > OpenStack-operators mailing list
>> >> > openstack-operat...@lists.openstack.org
>> >> >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >>
>> >>
>> >
>>


Re: [openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens

2018-05-11 Thread Morgan Fainberg
Typically speaking if we broke a behavior via a change in KeystoneAuth
(not some behavior change in openstackclient or the way osc processes
requests), we are in the wrong and we will need to go back through and
fix the previous behavior.

I'll spend some time going through this to verify if this really is a
KSA change bug or something else. If it is in-fact a KSA
(keystoneauth) bug, we'll work to restore the previous behavior(s) as
reasonably quickly as possible.

Cheers,
--Morgan

On Fri, May 11, 2018 at 1:37 PM, Vlad Gusev  wrote:
> Hello.
>
> We faced a bug in keystoneauth that didn't exist before Queens.
>
> In our OpenStack deployments we use urls like http://controller:5000/v3 for
> internal and admin endpoints and urls like
> https://api.example.org/identity/v3 for public endpoints. We set option
> public_endpoint in [default] section of the
> keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For example,
> for keystone it is 'public_endpoint=https://api.example.org/identity/'.
>
> Since keystoneauth 3.2.0 or commit
> https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c
> all internal client requests to the internal endpoints (for example,
> openstack server list from controller node) fail with 404 error, because it
> tries to do auto discovery at the http://controller:5000/v3. It gets
> {"href": "https://api.example.org/identity/v3/", "rel": "self"} because of
> the public_endpoint option, and then in function _combine_relative_url()
> (keystoneauth1/discover.py:405) keystoneauth combines
> http://controller:5000/ with the path from public href. So after auto
> discovery attempt it goes to the wrong path
> http://controller:5000/identity/v3/
>
> Before this commit openstackclient made auth request to the
> https://api.example.org/identity/v3/auth/tokens (and it worked, because in
> our deployment internal services and console clients can access this public
> url). At best, we expect openstackclient always go to the
> http://controller:5000/v3/
>
> This problem could partially be solved by explicitly passing the public
> --os-auth-url https://api.example.org/identity/identity/v3 to the console
> clients even if we want to use internal endpoints.
>
> I found a similar bug in launchpad, but it hasn't received any attention:
> https://bugs.launchpad.net/keystoneauth/+bug/1733052
>
> What could be done with this behavior of keystoneauth auto discovery?
>
> - Vlad
>
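The mis-combination Vlad describes in `_combine_relative_url()` can be reproduced with the stdlib: the client keeps the scheme/host of the catalog URL but adopts the path of the discovered (public) href. The URLs below are the hypothetical ones from the report, not a claim about keystoneauth's exact internals:

```python
# Stdlib illustration of combining the catalog URL's host with the
# discovered public href's path, reproducing the wrong discovery result.
from urllib.parse import urlsplit, urlunsplit

catalog_url = "http://controller:5000/v3"
discovered_href = "https://api.example.org/identity/v3/"  # from public_endpoint

cat = urlsplit(catalog_url)
pub = urlsplit(discovered_href)

# Catalog netloc + public path => the wrong URL the client ends up requesting:
combined = urlunsplit((cat.scheme, cat.netloc, pub.path, "", ""))
print(combined)  # http://controller:5000/identity/v3/
```

The path `/identity/v3/` does not exist on the internal endpoint, hence the 404 — which is consistent with the behavior described in the bug report below.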



[openstack-dev] Changes to keystone-stable-maint members

2018-04-24 Thread Morgan Fainberg
Hi,

I am proposing making some changes to the Keystone Stable Maint team.
A lot of this is cleanup for contributors that have moved on from
OpenStack. For the most part, I've been the only one responsible for
Keystone Stable Maint reviews, and I'm not comfortable being this
bottleneck.

Removals

Dolph Matthews
Steve Martinelli
Brant Knudson

Each of these members has left/moved on from OpenStack, or, in the
case of Brant, become less involved with Keystone (and, I believe,
OpenStack as a whole).

Additions
===
Lance Bragstad

Lance is the PTL and is also highly aware of the differences/stable
policy (and does reviews for stable keystone when I ask, so we have a
second pair of eyes on them).

This will bring us to a solid 2 contributors for Keystone that are
looking at the stable-maint reviews and ensuring we're not letting too
much sit in limbo (or dumping it all on the main stable-core team).

Long term I'd like to see a 3rd keystone stable maint, but I am unsure
who else should be nominated. Getting to a full 2 members actively
engaged is a big win for ensuring stable branches get appropriate love
within Keystone.

Cheers,
--Morgan



Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-20 Thread Morgan Fainberg
As an addendum, the final v2.0 (EC2-API) path will eventually be removed
(Mitaka+7, the "T" release). The v3 versions (where they exist) will
continue to be maintained and not removed.

On Fri, Oct 20, 2017 at 5:16 PM, Morgan Fainberg
<morgan.fainb...@gmail.com> wrote:
> Let me clarify a few things regarding the V2.0 removal:
>
> * This has been planned for years at this point. At one time (I am
> looking for the documentation, once I find it I'll include it on this
> thread) we worked with Nova and the TC to set forth a timeline on the
> removal. Part of that agreement was that this was a one-time deal. We
> would remove the V2.0 API in favor of the v3 API but would never
> remove another API going forward.
>
>A few reasons for removing the V2.0 API that were discussed and
> drove the decision:
>
>1) The V2.0 API had behavior that was explicitly breaking the security 
> model:
>
>    * A user could authenticate with a scope (not the default
> domain) which could lead to oddities in enforcement when using v2.0
> APIs and introduced a number of edge cases. This could not be fixed
> without breaking the V2.0 API contract, and every single change to V3
> and features required a lot of time to ensure V2.0 was not breaking
> and had appropriate translations to/from the different data formats.
>
>* The V2.0 AUTH API included the token (secure) data in the URL
> path, this means that all logs from apache (or other web servers and
> wsgi apps) had to be considered privileged and could not be exposed
> for debugging purposes (and in some environments may not be accessed
> without significant access-controls). This also could not be fixed
> without breaking the V2.0 API contract.
>
>    * The V2.0 policy was effectively hard coded to
> use "admin" and "member" roles. Retrofitting the APIs to support full
> policy was extremely difficult and could break default behaviors
> (easily) in many environments. This was also deemed to be mostly
> unfix-able without breaking the V2.0 API contract.
>
>
>  In short, maintenance of the V2.0 API was significant; it was a
> lot of work to maintain, especially since the API could not receive any
> active development due to lacking basic features introduced in v3.0.
> There were also a significant number of edge cases where v3 had some
> very hack-y support for features (required in openstack services) via
> auth to support the possibility of v2->v3 translations.
>
>
>2) V2.0 had been bit rotting. Many items had limited testing and
> were found to be broken. Adding tests that were both V3 and V2.0 aware
> added another layer of difficulty in maintaining the API, much of the
> time we had to spin many new patches to ensure that we didn't break
> v2.0 contracts with a non-breaking v3 change (or in fixing a v2 API
> call, we would be somewhat forced into breaking the API contract).
>
>
>    3) The Keystone team is acutely aware that this was a painful
> transition and made the choice to drop the API even in that light.
> Weighing "breaking the API contract" a number of times versus
> lightening the developer load (we are strapped for resources working
> on Keystone, as are many services; the overhead and added load make it
> mostly untenable), a single (large) change with the
> understanding that V3 APIs cannot be removed was preferable.
>
>
> The TC agreed to this removal. The service teams agreed to this
> removal. This was telegraphed as much as we could via deprecation and
> many, many, many discussions on this topic. There really was no good
> solution; we took the one that was best for OpenStack, in our
> opinion, given the place where Keystone sits.
>
> We can confidently commit to the following:
>   * v3 APIs (Even the ones we dislike) will not go away
>   * barring a massive security hole, we will not break the API
> contracts on V3 (we may add data, we will not remove/restructure
> data)
>   * If we implement microversions, you may see API changes (similar to
> how nova works), but as of today, we do not implement microversions
>
> We have worked with defcore/refstack, qa teams, all services (I think
> we missed one, it has since been fixed), clients, SDK(s), etc to
> ensure that as much support as possible is in place to make utilizing
> V3 easy.
>
>
>
>
> On Fri, Oct 20, 2017 at 3:50 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>> No, I'm not saying it's the TC team's job to bludgeon folks.
>>
>> I'm suggesting that some folks other than Keystone should look at the impact 
>> of the final removal of an API that a lot of external clients may be coded 
>> against and since it affects a

Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-20 Thread Morgan Fainberg
Let me clarify a few things regarding the V2.0 removal:

* This has been planned for years at this point. At one time (I am
looking for the documentation, once I find it I'll include it on this
thread) we worked with Nova and the TC to set forth a timeline on the
removal. Part of that agreement was that this was a one-time deal. We
would remove the V2.0 API in favor of the v3 API but would never
remove another API going forward.

   A few reasons for removing the V2.0 API that were discussed and
drove the decision:

   1) The V2.0 API had behavior that was explicitly breaking the security model:

    * A user could authenticate with a scope (not the default
domain) which could lead to oddities in enforcement when using v2.0
APIs and introduced a number of edge cases. This could not be fixed
without breaking the V2.0 API contract and every single change to V3
and features required a lot of time to ensure V2.0 was not breaking
and had appropriate translations to/from the different data formats.

   * The V2.0 AUTH API included the token (secure) data in the URL
path, this means that all logs from apache (or other web servers and
wsgi apps) had to be considered privileged and could not be exposed
for debugging purposes (and in some environments may not be accessed
without significant access-controls). This also could not be fixed
without breaking the V2.0 API contract.

    * The V2.0 policy was effectively hard-coded to
use "admin" and "member" roles. Retrofitting the APIs to fully support
policy was extremely difficult and could easily break default behaviors
in many environments. This was also deemed to be mostly
unfixable without breaking the V2.0 API contract.


 In short, maintaining the V2.0 API was a significant amount of
work, especially since the API could not receive any
active development due to lacking basic features introduced in v3.0.
There were also a significant number of edge cases where v3 had some
very hack-y support for features (required in openstack services) via
auth to support the possibility of v2->v3 translations.


   2) V2.0 had been bit rotting. Many items had limited testing and
were found to be broken. Adding tests that were both V3 and V2.0 aware
added another layer of difficulty in maintaining the API, much of the
time we had to spin many new patches to ensure that we didn't break
v2.0 contracts with a non-breaking v3 change (or in fixing a v2 API
call, we would be somewhat forced into breaking the API contract).


   3) The Keystone team is acutely aware that this was a painful
transition and made the choice to drop the API even in that light.
Rather than "breaking the API contract" a number of times, we
preferred to lighten the developer load (we are strapped for resources
working on Keystone, as are many services; the overhead and added load
makes it mostly untenable) and make a single (large) change, with the
understanding that the V3 APIs cannot be removed.


The TC agreed to this removal. The service teams agreed to this
removal. This was telegraphed as much as we could via deprecation and
many, many, many discussions on this topic. There really was no good
solution; we took the one that was best for OpenStack, in our
opinion, given the place where Keystone sits.

We can confidently commit to the following:
  * v3 APIs (Even the ones we dislike) will not go away
  * barring a massive security hole, we will not break the API
contracts on V3 (we may add data, we will not remove/restructure
data)
  * If we implement microversions, you may see API changes (similar to
how nova works), but as of today, we do not implement microversions

We have worked with defcore/refstack, qa teams, all services (I think
we missed one, it has since been fixed), clients, SDK(s), etc to
ensure that as much support as possible is in place to make utilizing
V3 easy.




On Fri, Oct 20, 2017 at 3:50 PM, Fox, Kevin M  wrote:
> No, I'm not saying it's the TC team's job to bludgeon folks.
>
> I'm suggesting that some folks other than Keystone should look at the impact
> of the final removal of an API that a lot of external clients may be coded
> against, since it affects all projects and not just Keystone, and have
> some say on delaying the final removal if appropriate.
>
> I personally would like to see v2 go away. But I get that the impact could be
> far wider ranging, affecting many other teams than just Keystone, due to
> the unique position Keystone occupies in the architecture. As others have raised.
>
> Ideally, there should be an overarching OpenStack architecture team of some
> sort to handle this kind of thing, I think. Without such an entity, though,
> the TC is probably currently the best place to discuss it?
>
> Thanks,
> Kevin
> 
> From: Jeremy Stanley [fu...@yuggoth.org]
> Sent: Friday, October 20, 2017 10:53 AM
> To: 

Re: [openstack-dev] [nova][keystone] keystoneauth1 and keystonemiddle setting

2017-08-16 Thread Morgan Fainberg
On Aug 16, 2017 11:31, "Brant Knudson"  wrote:



On Mon, Aug 14, 2017 at 2:48 AM, Chen CH Ji  wrote:

> In fixing bug 1704798, there's a proposed patch
> https://review.openstack.org/#/c/485121/7
> but we stuck at http_connection_timeout and timeout value in keystoneauth1
> and keystonemiddle repo
>
> basically we want to reuse the keystone_auth section in nova.conf to avoid
> create another section so we can
> use following to create a session
>
> sess = ks_loading.load_session_from_conf_options(CONF,
> 'keystone_authtoken', auth=context.get_auth_plugin())
>
> any comments or we have to create another section and configure it anyway?
> thanks
>
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493 <+86%2010%208245%201493>
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I think reusing the keystone_authtoken config is a bad idea.
keystone_authtoken contains the configuration for the auth_token middleware,
so that is what we keystone developers expect it to be used for. A
deployment may have different security needs for the auth_token middleware
vs. checking quotas, in which case they'll need different users or projects
for the auth_token middleware and quota checking. And even if we don't need
that flexibility now, we might in the future, and it would create a lot of
rearchitecting work going forward.

If a deployer wants to use the same authentication for both auth_token
middleware and the proxy, they can create a new section with the config and
point both keystone_authtoken and quota checking to it (by setting the
auth_section).
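
Brant's suggestion can be sketched as a nova.conf fragment (a rough illustration only: the `[service_credentials]` and `[quota_auth]` section names and the credential values are hypothetical; `auth_section` is the standard keystoneauth mechanism for pointing one config group at another group's auth options):

```ini
# Hypothetical shared credentials section (name is illustrative)
[service_credentials]
auth_type = password
auth_url = http://keystone.example.com/identity/v3
username = nova
password = s3cret
project_name = service
user_domain_id = default
project_domain_id = default

# auth_token middleware pulls its auth options from the shared section
[keystone_authtoken]
auth_section = service_credentials

# Hypothetical separate group for quota checking; a deployer who needs
# different credentials simply points this at a different section.
[quota_auth]
auth_section = service_credentials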

-- 
- Brant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What Brant said. Please do not lean on the options from keystone middleware
for anything outside of keystone middleware. We have had to change these
options before and those changes should only ever impact the keystone
middleware code. If you re-use those options for something in Nova, it will
likely break and need to be split into its own option block in the future.

Please create a new option block (even if a deployer uses the same
user/password) rather than using the authtoken config section for anything
outside of authtoken.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] using only sql for resource backends

2017-08-15 Thread Morgan Fainberg
On Tue, Aug 15, 2017 at 7:36 AM, Lance Bragstad  wrote:
> During RC, Morgan's made quite a bit of progress on a bug found by the
> gate [0]. Part of the solution led to another patch that removes the
> ability to configure anything but sql for keystone's resource backend
> (`keystone.conf [resource] driver`). The reasoning behind this is that
> there were FK constraints introduced between the identity and resource
> tables [1] during the Ocata development cycle. This leaves us with two
> options moving forward:
>
> 1.) Drop the FK constraints entirely and backport those
> migrations/models to Ocata
> 2.) Ensure the resource backend is always configured as SQL - and keep
> the FKs setup between the resource and identity tables (note; this
> doesn't prevent the usage of non-sql identity backends, but just ensures
> that when sql is used for identity, resource is also used).
>
> Sending this out as a heads up for those deployments that might fall
> into this category. Let me know if you have any questions.
>
> Thanks,
>
> Lance
>
>
> [0] https://launchpad.net/bugs/1702211
> [1]
> https://github.com/openstack/keystone/commit/2bd88d30e1d2873470af7f40db45a99e07e12ce6
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The removal of the FKs also requires backporting schema updates (which
is both painful and much higher risk). As it stands it is highly
unlikely anyone is using non-SQL resource backends as any use of the
SQL Identity backend requires resource to be SQL. The Resource data is
highly relational and very keystone/openstack specific. The lowest
impact choice is to make [resource] always SQL.
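
The FK argument above can be sketched in miniature. This is a hedged illustration, not keystone's actual schema (the `project` and `local_user` table and column names are hypothetical): once an identity table carries a foreign key into a resource table, the database itself must hold the resource rows, so a non-SQL resource backend cannot satisfy the constraint.

```python
# Sketch of why an FK from an identity table to a resource table forces
# the resource backend to be SQL whenever identity is SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

# "Resource" data: projects/domains
conn.execute("CREATE TABLE project (id TEXT PRIMARY KEY, name TEXT)")
# "Identity" data: users, with an FK into the resource table
conn.execute(
    "CREATE TABLE local_user ("
    "id TEXT PRIMARY KEY, name TEXT, domain_id TEXT NOT NULL, "
    "FOREIGN KEY (domain_id) REFERENCES project (id))"
)

conn.execute("INSERT INTO project VALUES ('default', 'Default')")
conn.execute("INSERT INTO local_user VALUES ('u1', 'alice', 'default')")

# If the resource rows lived in a non-SQL backend, the database could
# never validate this reference -- the FK makes the pairing mandatory.
fk_error = None
try:
    conn.execute("INSERT INTO local_user VALUES ('u2', 'bob', 'nonexistent')")
except sqlite3.IntegrityError as exc:
    fk_error = exc
print("FK rejected dangling reference:", fk_error is not None)
```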

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] rc2 updates

2017-08-11 Thread Morgan Fainberg
On Fri, Aug 11, 2017 at 11:10 AM, Lance Bragstad  wrote:
> Thanks for the update.
>
> Outside of the docs patches, we made some good progress on a bug 1702211
> (reported as https://bugs.launchpad.net/keystone/+bug/1703917 and
> https://bugs.launchpad.net/keystone/+bug/1702211 but we have reason to
> believe they are caused by the same issue).
>
> Morgan is currently working on a fix and we've targeted bug 1702211 to rc2.
> I'll keep an eye out for the translations patch and make sure that lands
> before we cut the next release candidate.
>
> On 08/11/2017 12:02 PM, Thierry Carrez wrote:
>
> Lance Bragstad wrote:
>
> We rolled out rc1 last night [0], but missed a couple important
> documentation patches and release notes [1]. I'll propose rc2 as soon as
> those merge.
>
> Note that we'll have to refresh the RC with translations updates closer
> to the release anyway (start of R-1 week), so if it's just
> doc/releasenotes updates, I'd advise you to wait a bit before cutting RC2.
>
> Regards,
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Initial patch for the password created_at/expires_at change is here:
https://review.openstack.org/493259 now waiting for review. It has
been cherry-picked to stable/pike as well. This likely will need a
couple more passes. Due to the complexity it only addresses the
password created_at/expires_at columns. I will look into other cases
of keystone storing datetime objects as a possible RC bug (in revoke
events specifically) once I get a little time to see how this change
works as is.

--Morgan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [keystone] Random Patrole failures related to Identity v3 Extensions API

2017-08-11 Thread Morgan Fainberg
On Fri, Aug 11, 2017 at 9:25 AM, Morgan Fainberg
<morgan.fainb...@gmail.com> wrote:
> On Fri, Aug 11, 2017 at 8:44 AM, Felipe Monteiro
> <felipe.carneiro.monte...@gmail.com> wrote:
>> Patrole tests occasionally fail while executing tests that test the
>> Identity v3 Extensions API [0]. Previously, this was not the case when
>> we used Fernet tokens and used a time.sleep(1) to allow for
>> role-switching to work correctly. However, we recently changed over to
>> UUID tokens in the Patrole gates to avoid doing a time.sleep(1), as a
>> time efficiency change. Ordinarily -- for well over 500 or so tests --
>> this approach works successfully, with the exception of what appears
>> to be *random* v3 API extension tests [1][2] (random means different
>> tests pass or fail randomly across separate test runs).
>>
>> While there are a few solutions that come to mind on how to solve this
>> Patrole-side (like re-introducing a time.sleep() for specific APIs or
>> even avoiding role-switching altogether which is not as
>> straightforward as it sounds), we would still not understand *why* the
>> issue is happening in the first place. Is it a data-race condition?
>> Something specific to the identity v3 extensions API? A potential bug
>> or intended behavior somewhere?
>>
>> [0] https://developer.openstack.org/api-ref/identity/v3-ext/
>> [1] 
>> http://status.openstack.org/openstack-health/#/test/patrole_tempest_plugin.tests.api.identity.v3.test_ep_filter_groups_rbac.EndpointFilterGroupsV3RbacTest.test_create_endpoint_group
>> [2] 
>> http://logs.openstack.org/41/490641/3/gate/gate-tempest-dsvm-patrole-py35-member-ubuntu-xenial/be95da4/console.html#_2017-08-11_14_47_57_515906
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Are your tests causing token revocations? If so, there is a case where
> a revocation event is issued in the same second as a token (we've seen
> similar cases even with Fernet), meaning the token is invalid when it is
> issued, according to keystone. It's a long-running bug.
>
> For the record, UUID tokens are deprecated and slated for removal in
> the R release. I recommend reverting to using Fernet tokens sooner
> rather than later.
>
> Last of all, the endpoint-filtering is generally not a great tool to
> use. I highly recommend not using it (or encouraging the use of it),
> it makes the catalog different depending on scope and provides zero
> added security benefit (anyone who knows the endpoint can use it
> anyway).

I am spinning up a change for Pike RC (right now) to address the
revocations occurring in the same second as the issuance of the token
(similar to a bug with password updates, which will also be fixed).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [keystone] Random Patrole failures related to Identity v3 Extensions API

2017-08-11 Thread Morgan Fainberg
On Fri, Aug 11, 2017 at 8:44 AM, Felipe Monteiro
 wrote:
> Patrole tests occasionally fail while executing tests that test the
> Identity v3 Extensions API [0]. Previously, this was not the case when
> we used Fernet tokens and used a time.sleep(1) to allow for
> role-switching to work correctly. However, we recently changed over to
> UUID tokens in the Patrole gates to avoid doing a time.sleep(1), as a
> time efficiency change. Ordinarily -- for well over 500 or so tests --
> this approach works successfully, with the exception of what appears
> to be *random* v3 API extension tests [1][2] (random means different
> tests pass or fail randomly across separate test runs).
>
> While there are a few solutions that come to mind on how to solve this
> Patrole-side (like re-introducing a time.sleep() for specific APIs or
> even avoiding role-switching altogether which is not as
> straightforward as it sounds), we would still not understand *why* the
> issue is happening in the first place. Is it a data-race condition?
> Something specific to the identity v3 extensions API? A potential bug
> or intended behavior somewhere?
>
> [0] https://developer.openstack.org/api-ref/identity/v3-ext/
> [1] 
> http://status.openstack.org/openstack-health/#/test/patrole_tempest_plugin.tests.api.identity.v3.test_ep_filter_groups_rbac.EndpointFilterGroupsV3RbacTest.test_create_endpoint_group
> [2] 
> http://logs.openstack.org/41/490641/3/gate/gate-tempest-dsvm-patrole-py35-member-ubuntu-xenial/be95da4/console.html#_2017-08-11_14_47_57_515906
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Are your tests causing token revocations? If so, there is a case where
a revocation event is issued in the same second as a token (we've seen
similar cases even with Fernet), meaning the token is invalid when it is
issued, according to keystone. It's a long-running bug.
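
The race described above can be sketched with plain datetimes. This is a hedged illustration, not keystone's actual code: timestamps stored at whole-second precision make a token issued in the same second as a revocation event look like it was issued at or before the revocation, so it is treated as already revoked.

```python
# Same-second revocation race: sub-second precision is lost on storage,
# so issuance order within one second cannot be distinguished.
from datetime import datetime

def truncate(dt):
    """Drop sub-second precision, as a DATETIME column without
    fractional seconds would."""
    return dt.replace(microsecond=0)

def is_revoked(token_issued_at, event_revoked_at):
    # Tokens issued at or before the revocation timestamp are invalid.
    return truncate(token_issued_at) <= truncate(event_revoked_at)

revoked_at = datetime(2017, 8, 11, 12, 0, 0, 100000)  # revocation event
issued_at = datetime(2017, 8, 11, 12, 0, 0, 900000)   # token issued *after* it

# Truncation collapses both timestamps to 12:00:00, so the brand-new
# token is (wrongly) considered revoked.
print(is_revoked(issued_at, revoked_at))  # True

# A token issued in the next second is unaffected.
print(is_revoked(datetime(2017, 8, 11, 12, 0, 1), revoked_at))  # False
```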

For the record, UUID tokens are deprecated and slated for removal in
the R release. I recommend reverting to using Fernet tokens sooner
rather than later.

Last of all, the endpoint-filtering is generally not a great tool to
use. I highly recommend not using it (or encouraging the use of it),
it makes the catalog different depending on scope and provides zero
added security benefit (anyone who knows the endpoint can use it
anyway).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] PTL nomination period is now over

2017-08-09 Thread Morgan Fainberg
On Aug 9, 2017 16:48, "Kendall Nelson"  wrote:

Hello Everyone!

The PTL Nomination period is now over. The official candidate list is
available on the election website[0].

There are 2 projects without candidates, so according to this
resolution[1], the TC will have to appoint a new PTL for Storlets, and
Packaging-Deb.

There are 2 projects that will have elections: Ironic and Documentation. The
details for those will be posted shortly after we setup the CIVS system.

Thank you,

-Kendall Nelson (diablo_rojo)

[0]: http://governance.openstack.org/election/#queens-ptl-candidates

[1]: http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I believe packaging deb is being retired[0].

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120581.html

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone based Authentication and Authorization for Kubernetes

2017-08-08 Thread Morgan Fainberg
I shall take a look at the webhooks and see if I can help on this front.
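
For context, the webhook mechanism works by having the Kubernetes API server POST a TokenReview object containing the bearer (keystone) token; the webhook validates it against keystone and returns the TokenReview with its status filled in. A sketch of the reply for a valid token follows, based on the `authentication.k8s.io/v1beta1` TokenReview API Kubernetes used at the time; the username, uid, and group values are illustrative:

```json
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      "username": "alice",
      "uid": "6332f39dca074ddfa99b7da22a1f9321",
      "groups": ["openstack-project-demo"]
    }
  }
}
```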

--Morgan

On Tue, Aug 8, 2017 at 6:34 PM, joehuang  wrote:
> Dims,
>
> Integration of keystone and kubernetes is very cool and in high demand. Thank 
> you very much.
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> 
> From: Davanum Srinivas [dava...@gmail.com]
> Sent: 01 August 2017 18:03
> To: kubernetes-sig-openst...@googlegroups.com; OpenStack Development Mailing 
> List (not for usage questions)
> Subject: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone 
> based Authentication and Authorization for Kubernetes
>
> Team,
>
> Having waded through the last 4 attempts as seen in kubernetes PR(s)
> and Issues and talked to a few people on SIG-OpenStack slack channel,
> the consensus was that we should use the Webhook mechanism to
> integrate Keystone and Kubernetes.
>
> Here's the experiment : https://github.com/dims/k8s-keystone-auth
>
> Anyone interested in working on / helping with this? Do we want to
> create a repo somewhere official?
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Morgan Fainberg
On Fri, Aug 4, 2017 at 3:09 PM, Kevin L. Mitchell <klmi...@mit.edu> wrote:
> On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:
>> > Maybe not, but please do recall that there are many deployers out
>> > there
>> > that track master, not fixed releases, so we need to take that
>> > level of
>> > compatibility into account.
>> >
>>
>> I am going to go out on a limb and say that we should not assume that
>> if code merges ever it is a contract because someone might be
>> following master. The contract should be for releases. We should do
>> everything we can to avoid breaking people, but in the case of an API
>> contract (behavior) that was never part of a final release, it should
>> be understood this may change if needed until it is released.
>>
>> This is just my $0.002 as this leads rapidly to "why bother having
>> real releases" if everything is a contract, let someone take a
>> snapshot where they're happy with the code to run. You're devaluing
>> the actual releases.
>
> In my view, following master imposes risks that deployers should
> understand and be prepared to mitigate; but I believe that it is our
> responsibility to acknowledge that they're doing it, and make a
> reasonable effort to not break them.  There are, of course, times when
> no reasonable effort will avoid breaking them, and in those cases, I
> feel that the reasonable course of action is to try to notify them of
> the upcoming breakage.  That's why then I went on to suggest that
> fixing this problem in keystone shouldn't require a version bump in
> this case: it _is_ a breakage that's being fixed.

I appreciate that you view this specific case as being in that
category; I was commenting more on the general case. I would go so far
as to outline exactly what we won't break between
releases, rather than what you've implied. I can come up with exactly
one thing that should never be broken between releases (fixes that
change no behavior but address edge cases are fine): DB schemas.

I am going to continue to say we cannot and should not commit to
treating everything that lands as a contract; it devalues the release
and the developers' ability to make shifts while working on a
release.

You and I may not agree here, but tracking master has risks and we
should allow for projects to make API changes for un-released APIs as
they see fit without version bumps.

Thanks for the feedback for this fix in specific, I think we came to
much the same conclusion in IRC but wanted some outside eyes on it.

Cheers,
--Morgan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Morgan Fainberg
On Fri, Aug 4, 2017 at 2:43 PM, Kevin L. Mitchell  wrote:
> On Fri, 2017-08-04 at 16:45 -0400, Kristi Nikolla wrote:
>> Is this the case even if we haven’t made any final release with the change
>> that introduced this issue? [0]
>>
>> It was only included in the Pike milestones and betas so far, and was not
>> part of the Ocata release.
>
> Maybe not, but please do recall that there are many deployers out there
> that track master, not fixed releases, so we need to take that level of
> compatibility into account.
>

I am going to go out on a limb and say that we should not assume that
if code merges ever it is a contract because someone might be
following master. The contract should be for releases. We should do
everything we can to avoid breaking people, but in the case of an API
contract (behavior) that was never part of a final release, it should
be understood this may change if needed until it is released.

This is just my $0.002 as this leads rapidly to "why bother having
real releases" if everything is a contract, let someone take a
snapshot where they're happy with the code to run. You're devaluing
the actual releases.

>> Therefore the call which now returns a 403 in master, returned a 2xx in
>> Ocata. So we would be fixing something which is broken on master rather
>> than changing a ‘contract’.
>>
>> 0. 
>> https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8
>
> I would be inclined to accept this specific change as a bug fix not
> requiring a version bump, because it is a corner case that I believe a
> deployer would view as a bug, if they encountered it, and because it
> was introduced prior to a named final release.
> --
> Kevin L. Mitchell 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][no-admin] Finally Rally can be run without admin user

2017-06-13 Thread Morgan Fainberg
On Tue, Jun 13, 2017 at 1:04 PM, Boris Pavlovic  wrote:
> Hi stackers,
>
> Intro
>
> Initially Rally was targeted at developers, which meant running it as
> admin was OK.
> Admin was basically used to simplify preparing the environment for testing:
> creating and setting up users/tenants, networks, quotas, and other resources
> that require an admin role.
> It was also used to clean up all resources after a test was executed.
>
> Problem
>
> More and more operators were running Rally against their production
> environments, and they were not happy that they had to
> provide admin credentials; they would rather prepare the environment by hand
> and provide already existing users than let Rally mess around with admin rights =)
>
> Solution
>
> After years of refactoring we changed almost everything;) and we managed to
> keep Rally as simple as it was and support Operators and Developers needs.
>
> Now Rally supports 3 different modes:
>
> admin mode -> Rally manages users that are used for testing
> admin + existing users mode -> Rally uses existing users for testing (if no
> user context)
> [new one] existing users mode -> Rally uses existing users for testing
>
> In every mode the input task will look the same; however, in the
> existing-users-only mode you won't be able to use plugins that require an
> admin role.
>
> This patch finishes the work: https://review.openstack.org/#/c/465495/
>
> Thanks to everybody that was involved in this huge effort!
>
>
> Best regards,
> Boris Pavlovic
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Good work, and fantastic news. This will make rally a more interesting
tool to use against real-world deployments.

Congrats on a job well done.
--Morgan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Morgan Fainberg
It would be nice if there was a bit more transparency on the "legal
risk" (conflicts with another project, etc), but thanks for passing on
the information nonetheless. I, for one, welcome our new "Rocky"
overlord project name :)

Cheers,
--Morgan

On Fri, Apr 28, 2017 at 2:54 PM, Monty Taylor  wrote:
> Hey everybody!
>
> There isn't a ton more to say past the subject. The "R" release of OpenStack
> shall henceforth be known as "Rocky".
>
> I believe it's the first time we've managed to name a release after a
> community member - so please everyone buy RockyG a drink if you see her in
> Boston.
>
> For those of you who remember the actual election results, you may recall
> that "Radium" was the top choice. Radium was judged to have legal risk, so
> as per our name selection process, we moved to the next name on the list.
>
> Monty
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-12 Thread Morgan Fainberg
I also have not received a poll email.

On Apr 12, 2017 6:13 AM, "Neil Jerram"  wrote:

> Nor me.
>
> On Wed, Apr 12, 2017 at 1:55 PM Doug Hellmann 
> wrote:
>
>> Excerpts from Dulko, Michal's message of 2017-04-12 12:09:30 +:
>> > On Wed, 2017-04-12 at 06:57 -0500, Monty Taylor wrote:
>> > > On 04/06/2017 07:34 AM, Monty Taylor wrote:
>> > > >
>> > > > Hey all!
>> > > >
>> > > > I've started the R Release Name poll and currently am submitting
>> > > > everyone's email address to the system. In order to not make our fine
>> > > > friends at Carnegie Mellon (the folks who run the CIVS voting service)
>> > > > upset, I have a script that submits the emails one at a time with a
>> > > > half-second delay between each email. That means at best, since there
>> > > > are 40k people to process it'll take ~6 hours for them all to go out.
>> > > >
>> > > > Which is to say - emails are on their way - but if you haven't gotten
>> > > > yours yet, that's fine. I'll send another email when they've all gone
>> > > > out, so don't worry about not receiving one until I've sent that mail.
>> > > Well, that took longer than I expected. Because of some rate limiting,
>> > > a half-second delay was not long enough...
>> > >
>> > > Anyway - all of the emails should have gone out now. Because that
>> took
>> > > so long, I'm going to hold the poll open until next Wednesday.
>> > >
>> > > Monty
>> >
>> > Not sure why, but I haven't received an email yet.
>> >
>> > Thanks,
>> > Michal
>>
>> Nor I.
>>
>> Doug
>>


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-20 Thread Morgan Fainberg
On Mon, Mar 20, 2017 at 12:23 PM, Dave McCowan (dmccowan)
 wrote:
> +1 from me.  That looks easy to implement and maintain.
>
> On 3/20/17, 2:49 PM, "Davanum Srinivas"  wrote:
>
>>Dave,
>>
>>Here's the precedent from oslo.policy:
>>https://review.openstack.org/#/admin/groups/556,members
>>
>>The reason for setting it up this way with individuals + oslo core +
>>keystone core is to make sure both core teams are involved in the
>>review process and any future contributors who are not part of either
>>team can be given core rights in oslo.policy.
>>
>>Is it ok to continue this model?
>>
>>Thanks,
>>Dims
>>
>>On Mon, Mar 20, 2017 at 9:20 AM, Dave McCowan (dmccowan)
>> wrote:
>>> This sounds good to me.  I see it as a "promotion" for Castellan into
>>>the
>>> core of OpenStack.  I think a good first step in this direction is to
>>> create a castellan-drivers team in Launchpad and a castellan-core team
>>>in
>>> Gerrit.  We can seed the list with Barbican core reviewers and any Oslo
>>> volunteers.
>>>
>>> The Barbican/Castellan weekly IRC meeting is today at 2000UTC in
>>> #openstack-meeting-alt, if anyone want to join to discuss.
>>>
>>> Thanks!
>>> dave-mccowan
>>>
>>> On 3/16/17, 12:43 PM, "Davanum Srinivas"  wrote:
>>>
+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of castellan

Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
 wrote:
> This thread has generated quite the discussion, so I will try to
> address a few points in this email, echoing a lot of what Dave said.
>
> Clint originally explained what we are trying to solve very well. The hope
> was that the rename would emphasize that Castellan is just a basic
> interface that supports operations common between key managers
> (the existing Barbican back end and other back ends that may exist
> in the future), much like oslo.db supports the common operations
> between PostgreSQL and MySQL. The thought was that renaming to have
> oslo part of the name would help reinforce that it's just an interface,
> rather than a standalone key manager. Right now, the only Castellan
> back end that would work in DevStack is Barbican. There has been talk
> in the past for creating other Castellan back ends (Vault or Tang), but
> no one has committed to writing the code for those yet.
>
> The intended proposal was to rename the project, maintain the current
> review team (which is only a handful of Barbican people), and bring on
> a few Oslo folks, if any were available and interested, to give advice
> about (and +2s for) OpenStack library best practices. However, perhaps
> pulling it under oslo's umbrella without a rename is blessing it enough.
>
> In response to Julien's proposal to make Castellan "the way you can do
> key management in Python" -- it would be great if Castellan were that
> abstract, but in practice it is pretty OpenStack-specific. Currently,
> the Barbican team is great at working on key management projects
> (including both Barbican and Castellan), but a lot of our focus now is
> how we can maintain and grow integration with the rest of the OpenStack
> projects, for which having the name and expertise of oslo would be a
> great help.
>
> Thanks,
>
> Kaitlin
>



--
Davanum Srinivas :: https://twitter.com/dims


>>>
>>>
>>>
>>
>>
>>
>>--
>>Davanum Srinivas :: https://twitter.com/dims
>>
>
>
> 

Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Morgan Fainberg
On Mar 16, 2017 07:28, "Jeremy Stanley"  wrote:

On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
[...]
> These security-related corner cases have always come up in the past when
> we've talked about implementing reseller. Another good example that I
> struggle with is what happens when you flip the reseller bit for a project
> admin who goes off and creates their own entities but then wants support?
> What does the support model look like for the project admin that needs help
> in a way that maintains data integrity?

It's still entirely unclear to me how giving someone the ability to
hide resources you've delegated them access to create in any way
enables "reseller" use cases. I can understand the global admins
wanting to have optional views where they don't see all the resold
hierarchy (for the sake of their own sanity), but why would a
down-tree admin have any expectation they could reasonably hide
resources they create from those who maintain the overall system?

In other multi-tenant software I've used where reseller
functionality is present, top-level admins have some means of
examining delegated resources and usually even of impersonating
their down-tree owners for improved supportability.
--
Jeremy Stanley



Hiding projects is a lot like implementing Mandatory Access Control within
OpenStack. I would like to go on record and say we should squash the hidden
projects concept (within a single hierarchy). If we want to implement MAC
(SELinux equivalent) in OpenStack, we have a much, much bigger scope to
cover than just Keystone, and this feels outside the scope of any
hierarchical multi-tenancy work that has been done or will be done.

TL;DR: let's not try to hide projects from users with rights in the same
(peer or above) hierarchy.


Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-02-27 Thread Morgan Fainberg
On Mon, Feb 27, 2017 at 9:18 AM, Thierry Carrez 
wrote:

> Dean Troyer wrote:
> > On Mon, Feb 27, 2017 at 3:31 AM, Clint Byrum  wrote:
> >> This is not for users who only want to see some projects. That is a well
> >> understood space and the mailman filtering does handle it. This is for
> >> those who want to monitor the overall health of the community, address
> >> issues with cross-project specs, or participate in so many projects it
> >> makes little sense to spend time filtering.
> >
> > Monday morning and the caffeine is just beginning to reach my brain,
> > but this seems counter-intuitive to me.  I consider myself someone who
> > _does_ want to keep in touch with the majority of the community, and
> > breaking things into N additional mailing lists makes that harder, not
> > easier.  I _do_ include core team updates, mascots, social meetings in
> > that set of things to pay a little attention to here, especially
> > around summit/PTG/Forum/etc times.
> >
> > I've seen a couple of descriptions of who this proposal is not
> > intended to address, who exactly is expected to benefit from more
> > mailing lists?
>
> I'm not (yet) convinced that getting rid of 10% of ML messages (the ones
> that would go to the -business lists) is worth the hassle of setting up
> 50 new lists, have people subscribe to them, and have overworked PTL
> moderate them...
>
> Also from my experience moderating such a -business list (the
> openstack-tc list) I can say that it takes significant effort to avoid
> having general-interest discussions there (or to close them when they
> start from an innocent thread). Over those 50+ -business mailing-lists
> I'm pretty sure a few would diverge and use the convenience of isolated
> discussions without "outsiders" potentially chiming in. And they would
> be pretty hard to detect...
>
>
FWIW, if I were a PTL and had a list like that to moderate on top of all
the other things, I'd just send a message that the list was effectively
being turned off (no messages passed through).

Moderated lists are important for some tasks. This really doesn't seem like
a good use of someone's time. I agree with Thierry, this seems like a lot
of hassle for a very small benefit.

With all that said, I am not a PTL and would not be moderating these new
lists.

--Morgan


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Morgan Fainberg
On Mon, Feb 27, 2017 at 7:26 AM, Sean Dague <s...@dague.net> wrote:

> On 02/27/2017 10:22 AM, Morgan Fainberg wrote:
> 
> > I agree we should kill the discovery hack, however that is a break in
> > the keystoneauth contract. Simply put, we cannot. Keystoneauth is one of
> > the few things (similar to how shade works) where behavior, exposed
> > elements, etc are considered a strict contract that will not change. If
> > we could have avoided stevedore and PBR we would have.
> >
> > The best we can provide is a way to build the instances from
> > keystoneauth that does not include that hack.
> >
> > The short is, we can't remove it. Similar to how we cannot change the
> > raise of exceptions for non-200 responses (the behavior is already
> encoded).
>
> Ok, I'm going to go back to not using the version= parameter then.
> Because it's not actually doing the right thing.
>
> I also am a bit concerned that basically through some client changes
> that people didn't understand, we've missed a break in the upstream
> transition that will impact real clouds. :(
>
>
We can make an adapter that does what you want (requests adapters are
cool). I was just chatting with Monty about this, and we can help you out
on this front.

The adapter should make things a lot easier once the lifting is done.


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Morgan Fainberg
On Mon, Feb 27, 2017 at 5:56 AM, Sean Dague  wrote:

> We recently implemented a Nova feature around validating that project_ids
> for quotas were real in keystone. After that merged, TripleO builds
> started to fail because their undercloud did not specify the 'identity'
> service as the unversioned endpoint.
>
> https://github.com/openstack/nova/blob/8b498ce199ac4acd94eb33a7f47c05ee0c743c34/nova/api/openstack/identity.py#L34-L36
> - (code merged in Nova).
>
> After some debug, it was clear that '/v2.0/v3/projects/...' was what was
> being called. And after lots of conferring in the Keystone room, we
> definitely made sure that the code in question was correct. The thing I
> wanted to do was make the failure more clear.
>
> The suggestion was made to use the following code approach:
>
> https://review.openstack.org/#/c/438049/6/nova/api/openstack/identity.py
>
> resp = sess.get('/projects/%s' % project_id,
> endpoint_filter={
> 'service_type': 'identity',
> 'version': (3, 0)
> },
> raise_exc=False)
>
>
> However, I tested that manually with an identity =>
> http:///v2.0 endpoint, and it passes. Which confused me.
>
> Until I found this -
> https://github.com/openstack/keystoneauth/blob/3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/discover.py#L313
>
> keystonauth is specifically coding around the keystone transition from a
> versioned /v2.0 endpoint to an unversioned one.
>
>
> While that is good for the python ecosystem using it, it's actually
> *quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
> js, php), because it means that all other facilities need the same work
> around. I actually wonder if this is one of the in the field reasons for
> why the transition from v2 -> v3 is going slow. That's actually going to
> potentially break a lot of software.
>
> It feels like this whole discovery version hack bit should be removed -
> https://review.openstack.org/#/c/438483/. It also feels like a migration
> path for non python software in changing the catalog entries needs to be
> figured out as well.
>
> I think on the Nova side we need to go back to looking for bogus
> endpoint because we don't want issues like this hidden from us.
>
>
I agree we should kill the discovery hack, however that is a break in the
keystoneauth contract. Simply put, we cannot. Keystoneauth is one of the
few things (similar to how shade works) where behavior, exposed elements,
etc are considered a strict contract that will not change. If we could have
avoided stevedore and PBR we would have.

The best we can provide is a way to build the instances from keystoneauth
that does not include that hack.

The short is, we can't remove it. Similar to how we cannot change the raise
of exceptions for non-200 responses (the behavior is already encoded).
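For anyone implementing a client outside Python and wondering what the hack actually does: roughly, when the catalog hands back a versioned identity URL, keystoneauth strips the version suffix before running discovery. A simplified stdlib sketch of that rewrite (illustrative only, not keystoneauth's actual code):

```python
# Rough illustration of the catalog workaround keystoneauth applies
# (assumption: simplified; the real logic in keystoneauth1/discover.py
# handles more cases).
from urllib.parse import urlsplit, urlunsplit

def unversioned_identity_endpoint(url):
    """Strip a trailing /v2.0 or /v3 so discovery runs on the root."""
    parts = urlsplit(url)
    path = parts.path.rstrip('/')
    last = path.rsplit('/', 1)[-1]
    if last in ('v2.0', 'v3'):
        path = path[:-(len(last) + 1)]
    return urlunsplit((parts.scheme, parts.netloc, path or '/', '', ''))

print(unversioned_identity_endpoint('http://keystone:5000/v2.0'))
# -> http://keystone:5000/
```

Every non-Python consumer (Java, Go, raw REST, etc.) has to bake in something equivalent until catalogs carry unversioned identity endpoints, which is exactly the transition problem described above.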

--Morgan


Re: [openstack-dev] [octavia][sdk] service name for octavia

2017-02-15 Thread Morgan Fainberg
On Wed, Feb 15, 2017 at 7:25 AM, Monty Taylor  wrote:

> On 02/15/2017 09:12 AM, Hayes, Graham wrote:
> > On 15/02/2017 15:00, Monty Taylor wrote:
> >> On 02/14/2017 07:08 PM, Qiming Teng wrote:
> >>> When reviewing a recent patch that adds openstacksdk support to
> octavia,
> >>> I found that octavia is using 'octavia' as its service name instead of
> >>> 'loadbalancing' or 'loadbalancingv2' or something similar.
> >>
> >> Please not loadbalancingv2. As dean says in his email, we should be
> >> using service_type not service_name for this. And service type should
> >> not contain a version (please ignore what cinder did for v2 and v3
> >> entries in the service catalog, it is a pattern that should not happen)
> >
> > +1000
> >
> >> All the services should have a version discovery endpoint on their
> >> unversioned endpoint. If there is a v1 and a v2, then a user looking for
> >> the loadbalancing service, if they want v2, should be able to get there
> >> through version discovery.
> >>
> >> Also, if you haven't used loadbalancing anywhere yet, can I suggest
> >> load-balancing instead to match object-store and key-manager?
> >>
> >>> The overall suggestion is to use a word/phrase that indicates what a
> >>> service do instead of the name of the project providing that service.
> >>>
> >>> Below is the list of the service types currently supported by
> >>> openstacksdk:
> >>>
> >>> 'alarming',# aodh
> >>> 'baremetal',   # ironic
> >>> 'clustering',  # senlin
> >>> 'compute', # nova
> >>> 'database',# trove
> >>> 'identity',# keystone
> >>> 'image',   # glance
> >>> 'key-manager', # barbican
> >>> 'messaging',   # zaqar
> >>> 'metering',# ceilometer
> >>> 'network', # neutron
> >>> 'object-store',   # swift
> >>> 'octavia',# <--- this is an exception
> >>> 'orchestration',  # heat
> >>> 'volume', # cinder
> >>> 'workflowv2', # mistral
>
> Also - while we're on the topic - can we fix that to just be workflow ^^ ?
>
>
> ++
Please change that to workflow if possible.
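On Monty's version-discovery point above: with a single unversioned service type in the catalog, a client picks its major version from the root discovery document rather than relying on a versioned type like "loadbalancingv2". A minimal sketch (the document shape mimics the common OpenStack discovery format; the URLs and status values here are made up):

```python
# Hypothetical discovery document for an unversioned 'load-balancing'
# endpoint; a client wanting v2 selects it from the root document.
def pick_version(doc, want_major):
    for v in doc['versions']['values']:
        major = int(v['id'].lstrip('v').split('.')[0])
        if major == want_major and v['status'] in ('stable', 'CURRENT'):
            return next(l['href'] for l in v['links'] if l['rel'] == 'self')
    raise LookupError('no v%d endpoint published' % want_major)

doc = {'versions': {'values': [
    {'id': 'v2.0', 'status': 'CURRENT',
     'links': [{'rel': 'self', 'href': 'http://lb.example:9876/v2.0/'}]},
    {'id': 'v1.0', 'status': 'deprecated',
     'links': [{'rel': 'self', 'href': 'http://lb.example:9876/v1.0/'}]},
]}}
print(pick_version(doc, 2))  # -> http://lb.example:9876/v2.0/
```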


Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-06 Thread Morgan Fainberg
On Thu, Feb 2, 2017 at 2:28 PM, Octave J. Orgeron  wrote:

> That refers to the total length of the row. InnoDB has a limit of 65k and
> NDB is limited to 14k.
>
> A simple example would be the volumes table in Cinder where the row length
> goes beyond 14k. So in the IF logic block, I change columns types that are
> vastly oversized such as status and attach_status, which by default are 255
> chars. So to determine a more appropriate size, I look through the Cinder
> code to find where the possible options/states are for those columns. Then
> I cut it down to a more reasonable size. I'm very careful when I cut the
> size of a string column to ensure that all of the possible values can be
> contained.
>
> In cases where a column is extremely large for capturing the outputs of a
> command, I will change the type to Text or TinyText depending on the length
> required. A good example of this is in the agents table of Neutron where
> there is a column for configurations that has a string length of 4096
> characters, which I change to Text. Text blobs are stored differently and
> do not count against the row length.
>

So
https://github.com/openstack/keystone/blob/master/keystone/common/sql/core.py#L117
would not be an issue with the 14k limit; only limits on things such as
VARCHAR would be affected (in other words, we wouldn't need to change
keystone's implementation, since we already use sql.Text here)?

>
> I've also observed differences between Kilo, Mitaka, and tip where even
> for InnoDB some of these tables are getting wider than can be supported. So
> in the case of Cinder, some of the columns have been shifted to separate
> tables to fit within 65k. I've seen the same thing in Neutron. So I fully
> expect that some of the services that have table bloat will have to cut the
> lengths or break the tables up over time anyways. As that happens, it
> reduces the amount of work for me, which is a good thing.
>
> The most complicated database schemas to patch up are cinder, glance,
> neutron, and nova due to the size and complexity of their tables. Those
> also have a lot of churn between releases where the schema changes more
> often. Other services like keystone, heat, and ironic are considerably
> easier to work with and have well laid out tables that don't change much.
>
>
FTR: Keystone also supports "no-downtime-upgrades" (just pending some
functional tests before we apply for the tag), and we will be looking to
move towards Alembic, so make sure that the code supplied can easily be
swapped out between SQLAlchemy-Migrate and Alembic (IIRC most projects want
to move to Alembic, but the difficulty varies, and so do the priorities).

I look forward to solid NDB support; having used NDB in the past to
support another project, I always thought it could be an interesting choice
to back OpenStack (++ to what Monty said earlier).
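The conditional typing Octave describes might look roughly like the helper below (names and thresholds are purely illustrative, not the actual patch's or oslo.db's API):

```python
# Illustrative decision helper: narrow oversized VARCHARs to their real
# value domain, or convert to TEXT, which is stored out-of-row and so
# does not count against NDB's ~14k in-row limit.
def ndb_safe_type(declared_length, known_max=None, ndb=True):
    if not ndb:
        return ('VARCHAR', declared_length)
    if known_max is not None and known_max < declared_length:
        # e.g. a status column declared 255 wide with a handful of values
        return ('VARCHAR', known_max)
    if declared_length > 255:
        # e.g. neutron agents.configurations, declared 4096 wide
        return ('TEXT', None)
    return ('VARCHAR', declared_length)

print(ndb_safe_type(255, known_max=36))  # -> ('VARCHAR', 36)
print(ndb_safe_type(4096))               # -> ('TEXT', None)
```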



> Thanks,
> Octave
>
>
> On 2/2/2017 1:25 PM, Mike Bayer wrote:
>
>
>
> On 02/02/2017 02:52 PM, Mike Bayer wrote:
>
>
> But more critically I noticed you referred to altering the names of
> columns to suit NDB.  How will this be accomplished?   Changing a column
> name in an openstack application is no longer trivial, because online
> upgrades must be supported for applications like Nova and Neutron.  A
> column name can't just change to a new name, both columns have to exist
> and logic must be added to keep these columns synchronized.
>
>
> correction, the phrase was "Row character length limits 65k -> 14k" - does
> this refer to the total size of a row?  I guess rows that store JSON or
> tables like keystone tokens are what you had in mind here, can you give
> specifics ?
>
>
>
>
>
> --
>
> Octave J. Orgeron | Sr. Principal Architect and Software Engineer
> Oracle Linux OpenStack
> Mobile: +1-720-616-1550
> 500 Eldorado Blvd. | Broomfield, CO 80021
>
>
>

Re: [openstack-dev] gate jobs - papercuts

2017-01-31 Thread Morgan Fainberg
On Tue, Jan 31, 2017 at 1:55 PM, Morgan Fainberg <morgan.fainb...@gmail.com>
wrote:

>
>
> On Tue, Jan 31, 2017 at 10:37 AM, Matthew Treinish <mtrein...@kortar.org>
> wrote:
>
>> On Tue, Jan 31, 2017 at 01:19:41PM -0500, Steve Martinelli wrote:
>> > On Tue, Jan 31, 2017 at 12:49 PM, Davanum Srinivas <dava...@gmail.com>
>> > wrote:
>> >
>> > > Folks,
>> > >
>> > > Here's the list of job failures that failed in the gate queue.
>> > > captured with my script[1][2] since around 10:00 AM today. All jobs
>> > > failed with just one bad test.
>> > >
>> > > http://logs.openstack.org/48/423548/11/gate/gate-keystone-python27-db-ubuntu-xenial/a1f55ca/
>> > >- keystone.tests.unit.test_v3_auth.TestMFARules
>> > >
>> > > <http://logs.openstack.org/61/424961/1/gate/gate-tempest-dsvm-cells-ubuntu-xenial/8a1f9e7/>
>> >
>> >
>> > This was due to a race condition between token issuance and validation,
>> > should be fixed.
>>
>> Is there a bug open for this? If so, let's get an elastic-recheck query up
>> for it
>> so we can track it and get it off the uncategorized page:
>>
>>
> No bug. Also, this is not really fixable because the time resolution for
> tokens and revocations is 1 second. The answer is to use freezegun and
> freeze time when doing things that can cause revocations at the same time
> as issuance (usually this can only really be hit within keystone's unit
> tests). It is also unlikely to be something that can easily be searched for
> in Elasticsearch, as it revolves around a "token cannot be validated"
> message (token not found/revoked/etc.), which is used in many cases where
> tokens cannot be validated (both correctly and in cases like this).
>
> The other case(s) that hit this were actually so bad they only passed at
> a ~5% rate.
>

Meaning it didn't get to the point where it could gate (the pass rate was
less than 5%), and it was hit in multiple tests at once.

>
> So in short, an elastic-recheck-query would be pointless here short of
> looking specifically for the test name as a failure.
>
>
>> http://status.openstack.org/elastic-recheck/data/integrated_gate.html
>>
>> Our categorization rate is quite low right now and it'll only make things
>> harder
>> to debug other failures if we've got a bunch of unknown races going on.
>>
>> We have a lot of tools to make debugging the gate easier and making
>> everyone more
>> productive. But, it feels like we haven't been utilizing them fully
>> lately which
>> makes gate backups more likely and digging out of the hole harder.
>>
>> Thanks,
>>
>> Matt Treinish
>>
>>
>>
>


Re: [openstack-dev] gate jobs - papercuts

2017-01-31 Thread Morgan Fainberg
On Tue, Jan 31, 2017 at 10:37 AM, Matthew Treinish 
wrote:

> On Tue, Jan 31, 2017 at 01:19:41PM -0500, Steve Martinelli wrote:
> > On Tue, Jan 31, 2017 at 12:49 PM, Davanum Srinivas 
> > wrote:
> >
> > > Folks,
> > >
> > > Here's the list of job failures that failed in the gate queue.
> > > captured with my script[1][2] since around 10:00 AM today. All jobs
> > > failed with just one bad test.
> > >
> > > http://logs.openstack.org/48/423548/11/gate/gate-keystone-python27-db-ubuntu-xenial/a1f55ca/
> > >- keystone.tests.unit.test_v3_auth.TestMFARules
> > >
> > > <http://logs.openstack.org/61/424961/1/gate/gate-tempest-dsvm-cells-ubuntu-xenial/8a1f9e7/>
> >
> >
> > This was due to a race condition between token issuance and validation,
> > should be fixed.
>
> Is there a bug open for this? If so, let's get an elastic-recheck query up
> for it
> so we can track it and get it off the uncategorized page:
>
>
No bug. Also, this is not really fixable because the time resolution for
tokens and revocations is 1 second. The answer is to use freezegun and
freeze time when doing things that can cause revocations at the same time
as issuance (usually this can only really be hit within keystone's unit
tests). It is also unlikely to be something that can easily be searched for
in Elasticsearch, as it revolves around a "token cannot be validated"
message (token not found/revoked/etc.), which is used in many cases where
tokens cannot be validated (both correctly and in cases like this).
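To make the race concrete, here is a toy model of the 1-second resolution problem (illustrative only, not keystone's actual token code; freezegun is what freezes time in the real unit tests, the sketch just pins timestamps by hand):

```python
import datetime

class TokenStore:
    """Toy store with keystone-like 1-second timestamp resolution."""
    def __init__(self):
        self.revoked_after = None
    def issue(self, now):
        return {'issued_at': now.replace(microsecond=0)}
    def revoke_all(self, now):
        self.revoked_after = now.replace(microsecond=0)
    def validate(self, token):
        if self.revoked_after and token['issued_at'] <= self.revoked_after:
            raise ValueError('token not found/revoked')
        return True

store = TokenStore()
t0 = datetime.datetime(2017, 1, 31, 12, 0, 0, 500000)
token = store.issue(t0)
store.revoke_all(t0)       # same wall-clock second as issuance
try:
    store.validate(token)  # spuriously revoked: the race
except ValueError as exc:
    print(exc)             # -> token not found/revoked
# With time controlled, issuance lands past the revocation second:
print(store.validate(store.issue(t0 + datetime.timedelta(seconds=1))))  # -> True
```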

The other case(s) that hit this were actually so bad they only passed at
a ~5% rate.

So in short, an elastic-recheck-query would be pointless here short of
looking specifically for the test name as a failure.


> http://status.openstack.org/elastic-recheck/data/integrated_gate.html
>
> Our categorization rate is quite low right now and it'll only make things
> harder
> to debug other failures if we've got a bunch of unknown races going on.
>
> We have a lot of tools to make debugging the gate easier and making
> everyone more
> productive. But, it feels like we haven't been utilizing them fully lately
> which
> makes gate backups more likely and digging out of the hole harder.
>
> Thanks,
>
> Matt Treinish
>
>
>


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Morgan Fainberg
On Wed, Jan 18, 2017 at 5:18 PM, Clint Byrum  wrote:

> Excerpts from Morgan Fainberg's message of 2017-01-18 15:33:00 -0800:
> > On Wed, Jan 18, 2017 at 11:23 AM, Brant Knudson  wrote:
> >
> > >
> > >
> > > On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan) <
> > > dmcco...@cisco.com> wrote:
> > >
> > >>
> > >> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco  >
> > >> wrote:
> > >>
> > >>> Hi everyone,
> > >>>
> > >>> I've seen a few nascent projects wanting to implement their own secret
> > >>> storage to either replace Barbican or avoid adding a dependency on it.
> > >>> When I've pressed the developers on this point, the only answer I've
> > >>> received is to make the operator's lives simpler.
> > >>>
> > >>>
> > >> This is my opinion, but I'd like to see Keystone use Barbican for storing
> > >> credentials. It hasn't happened yet because nobody's had the time or
> > >> inclination (what we have works). If this happened, we could deprecate the
> > >> current way of storing credentials and require Barbican in a couple of
> > >> releases. Then Barbican would be a required service. The Barbican team
> > >> might find this to be the easiest route towards convincing other projects
> > >> to also use Barbican.
> > >>
> > >> - Brant
> > >>
> > >>
> > >> Can you provides some details on how you'd see this work?
> > >> Since Barbican typically uses Keystone to authenticate users before
> > >> determining which secrets they have access to, this leads to a circular
> > >> logic.
> > >>
> > >> Barbican's main purpose is a secret manager.  It supports a variety of
> > >> RBAC and ACL access control methods to determine if a request to
> > >> read/write/delete a secret should be allowed or not.  For secret
> storage,
> > >> Barbican itself needs a secure backend for storage.  There is a
> > >> customizable plugin interface to access secure storage.  The current
> > >> implementations can support a database with encryption, an HSM via KMIP,
> > >> and Dogtag.
> > >>
> > >>
> > > I haven't thought about it much so don't have details figured out.
> > > Keystone stores many types of secrets for users, and maybe you're thinking
> > > about the user password being tricky. I'm thinking about the users' EC2
> > > credentials (for example). I don't think this would be difficult and would
> > > involve creating a credentials backend for keystone that supports barbican.
> > > Maybe have a 'keystone' project for credentials keystone is storing? If
> > > you're familiar with the Barbican interface, compare with keystone's
> > > credential interface[0].
> > >
> > > [0] http://git.openstack.org/cgit/openstack/keystone/tree/keystone/credential/backends/base.py#n26
> > >
> > > - Brant
> > >
> > >
> > The user passwords and the MFA tokens would be particularly difficult as
> > they are to be used for authentication purposes. Anything tied to the main
> > AuthN path would require something akin to a "service wide" secret store
> > that could be accessed/controlled by keystone itself and not "on behalf of
> > user" where the user still owns the data stored in barbican.
> >
> > I can noodle over this a bit more and see if I can come up with a
> mechanism
> > that (without too much pain) utilizes barbican for the AuthN paths in the
> > current architecture.
> >
> > I think it is doable, but I hesitate to make Keystone's AuthN path rely
> on
> > any external service so we don't run into a circular dependency of
> services
> > causing headaches for users. Keystone has provided a fairly stable base
> for
> > other projects including Barbican to be built on.
> >
> > Now... If the underlying tech behind Barbican could be pushed into
> keystone
> > as the credential driver (and possibly store for passwords?) without
> > needing to lean on Barbican's Server APIs (restful), I think that is
> quite
> > viable and could be of value since we could offload the credentials to a
> > more secure store without needing a "restful service" that uses keystone
> as
> > an AuthN/AuthZ source to determine who has access to what secret.
>
> Things like Barbican are there for the times where it's worth it to
> try and minimize exposure for something _ever_ leaking, so you can't do
> something like record all encrypted traffic and then compromise a key
> later, decrypt the traffic, and gain access to still-secret data.
>
> I'm not sure passwords would fall into that category. You'd be adding
> quite a bit of overhead for something that can be mitigated simply by
> rotating accounts and/or passwords.


I totally agree. Most everything in Keystone falls into this category. We
could use the same tech Barbican uses to be smarter about storing the data,
but I don't think we can use the REST APIs for the reasons you outlined.

__
> OpenStack Development Mailing List (not for usage questions)
> 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-18 Thread Morgan Fainberg
On Wed, Jan 18, 2017 at 11:23 AM, Brant Knudson  wrote:

>
>
> On Wed, Jan 18, 2017 at 9:58 AM, Dave McCowan (dmccowan) <
> dmcco...@cisco.com> wrote:
>
>>
>> On Mon, Jan 16, 2017 at 7:35 AM, Ian Cordasco 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> I've seen a few nascent projects wanting to implement their own secret
>>> storage to either replace Barbican or avoid adding a dependency on it.
>>> When I've pressed the developers on this point, the only answer I've
>>> received is to make the operator's lives simpler.
>>>
>>>
>> This is my opinion, but I'd like to see Keystone use Barbican for storing
>> credentials. It hasn't happened yet because nobody's had the time or
>> inclination (what we have works). If this happened, we could deprecate the
>> current way of storing credentials and require Barbican in a couple of
>> releases. Then Barbican would be a required service. The Barbican team
>> might find this to be the easiest route towards convincing other projects
>> to also use Barbican.
>>
>> - Brant
>>
>>
>> Can you provide some details on how you'd see this work?
>> Since Barbican typically uses Keystone to authenticate users before
>> determining which secrets they have access to, this leads to a circular
>> logic.
>>
>> Barbican's main purpose is a secret manager.  It supports a variety of
>> RBAC and ACL access control methods to determine if a request to
>> read/write/delete a secret should be allowed or not.  For secret storage,
>> Barbican itself needs a secure backend for storage.  There is a
>> customizable plugin interface to access secure storage.  The current
>> implementations can support a database with encryption, an HSM via KMIP,
>> and Dogtag.
>>
>>
> I haven't thought about it much so don't have details figured out.
> Keystone stores many types of secrets for users, and maybe you're thinking
> about the user password being tricky. I'm thinking about the users' EC2
> credentials (for example). I don't think this would be difficult and would
> involve creating a credentials backend for keystone that supports barbican.
> Maybe have a 'keystone' project for credentials keystone is storing? If
> you're familiar with the Barbican interface, compare with keystone's
> credential interface[0].
>
> [0] http://git.openstack.org/cgit/openstack/keystone/tree/
> keystone/credential/backends/base.py#n26
>
> - Brant
>
>
The user passwords and the MFA tokens would be particularly difficult as
they are to be used for authentication purposes. Anything tied to the main
AuthN path would require something akin to a "service wide" secret store
that could be accessed/controlled by keystone itself and not "on behalf of
user" where the user still owns the data stored in barbican.

I can noodle over this a bit more and see if I can come up with a mechanism
that (without too much pain) utilizes barbican for the AuthN paths in the
current architecture.

I think it is doable, but I hesitate to make Keystone's AuthN path rely on
any external service so we don't run into a circular dependency of services
causing headaches for users. Keystone has provided a fairly stable base for
other projects including Barbican to be built on.

Now... If the underlying tech behind Barbican could be pushed into keystone
as the credential driver (and possibly store for passwords?) without
needing to lean on Barbican's Server APIs (restful), I think that is quite
viable and could be of value since we could offload the credentials to a
more secure store without needing a "restful service" that uses keystone as
an AuthN/AuthZ source to determine who has access to what secret.
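An illustrative sketch of that idea — keystone keeping only credential metadata locally while the sensitive blob lives in a secret store. The method names are modeled loosely on keystone's credential backend interface linked earlier in the thread, and the client is a dict-backed stand-in for barbicanclient, so treat this as a shape, not the real driver:

```python
class InMemorySecretClient:
    """Stand-in for a real secret-manager client (e.g. barbicanclient)."""

    def __init__(self):
        self._secrets = {}

    def store(self, payload):
        ref = "secret-%d" % len(self._secrets)
        self._secrets[ref] = payload
        return ref

    def retrieve(self, ref):
        return self._secrets[ref]

    def delete(self, ref):
        del self._secrets[ref]


class SecretStoreCredentialDriver:
    """Credential backend that pushes the sensitive blob into a secret
    store and keeps only metadata plus a reference locally.  A real
    driver would keep the id-to-ref mapping in SQL, not a dict."""

    def __init__(self, secret_client):
        self._client = secret_client
        self._index = {}  # credential_id -> metadata row

    def create_credential(self, credential_id, credential):
        # Only the blob goes to the secret store; metadata stays local.
        ref = self._client.store(credential["blob"])
        self._index[credential_id] = dict(credential, blob=None,
                                          secret_ref=ref)
        return self.get_credential(credential_id)

    def get_credential(self, credential_id):
        row = dict(self._index[credential_id])
        row["blob"] = self._client.retrieve(row.pop("secret_ref"))
        return row

    def delete_credential(self, credential_id):
        row = self._index.pop(credential_id)
        self._client.delete(row["secret_ref"])


driver = SecretStoreCredentialDriver(InMemorySecretClient())
driver.create_credential("abc123", {"user_id": "u1", "type": "ec2",
                                    "blob": "access:secret"})
cred = driver.get_credential("abc123")
```

The point is only that the blob never touches keystone's own database; error handling, retries, and the SQL-backed index are left out.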

--Morgan


Re: [openstack-dev] [Nova] python 3 tests hate my exception handling

2017-01-03 Thread Morgan Fainberg
On Jan 3, 2017 19:29, "Matt Riedemann"  wrote:

On 1/3/2017 8:48 PM, Michael Still wrote:

> So...
>
> Our python3 tests hate [1] my exception handling for continued
> vendordata implementation [2].
>
> Basically, it goes a bit like this -- I need to move from using requests
> to keystoneauth1 for external vendordata requests. This is because we're
> adding support for sending keystone headers with the request so that the
> external service can verify that it is nova talking. That bit isn't too
> hard.
>
> However, keystoneauth1 uses different exceptions to report errors.
> Conveniently, it has variables which list all of the connection and http
> exceptions which it might raise. Inconveniently, they're listed as
> strings, so I have to construct a list of them like this:
>
> # NOTE(mikal): keystoneauth makes me jump through hoops to get these
> # exceptions, which are listed as strings. Mutter.
> KEYSTONEAUTH_EXCEPTIONS = [TypeError, ValueError]
> for excname in ks_exceptions.connection.__all__ +
> ks_exceptions.http.__all__:
> KEYSTONEAUTH_EXCEPTIONS.append(getattr(ks_exceptions, excname))
>
> Then when it comes time to catch exceptions from keystoneauth1, we can
> just do this thing:
>
> except tuple(KEYSTONEAUTH_EXCEPTIONS) as e:
> LOG.warning(_LW('Error from dynamic vendordata service '
> '%(service_name)s at %(url)s: %(error)s'),
> {'service_name': service_name,
>  'url': url,
>  'error': e},
> instance=self.instance)
> return {}
>
> Which might be a bit horrible, but is nice in that if keystoneauth1 adds
> new connection or http exceptions, we get to catch them for free.
>
> This all works and is tested. However, it causes the py3 tests to fail
> with this exception:
>
> 'TypeError: catching classes that do not inherit from BaseException is
> not allowed'
>
> Which is bemusing to me because I'm not very smart.
>
> So, could someone smarter than me please look at [1] and tell me why I
> get [2] and how to not get that thing? Answers involving manually
> listing many exceptions will result in me making a sad face and
> sarcastic comment in the code, so something more elegant than that would
> be nice.
>
> Discuss.
>
> Thanks,
> Michael
>
>
> 1: http://logs.openstack.org/91/416391/1/check/gate-nova-python
> 35-db/7835df3/console.html#_2017-01-04_01_10_35_520409
> 2: https://review.openstack.org/#/c/415597/3/nova/api/metadata/
> vendordata_dynamic.py
>
> --
> Rackspace Australia
>
>
>
>
My first question is, does the KSA team consider the 'connection' and
'http' variables as public / contractual to the KSA API in the library? If
not, they could change/remove those and break nova which wouldn't be cool.


For what it is worth, Keystoneauth has been built very carefully so that
everything that is public is meant to be public (not prefixed with "_").
Short of a massive security issue, we will not change/break an interface
that is public (even if it was not intentionally public); we may deprecate
and warn if we don't want you to use the interface, but it will remain.

The only time a public interface will be removed from KSA will be if we
move to "keystoneauth2". In short, connection and HTTP variables are public
today and will remain so (even if it was unintentional).


For what it's worth, this is what we handle when making requests to the
placement service using KSA:

https://github.com/openstack/nova/blob/34f4b1bd68d6011da76e6
8c4ddae9f28e37eed9a/nova/scheduler/client/report.py#L37

If nothing else, maybe that's all you'd need?

Another alternative is building whatever you need into KSA itself with the
types you need, get that released before 1/19 (non-client library release
freeze), and then use that in nova with a minimum required version bump in
global-requirements.

Or try to hack this out some other magical way.


I am not opposed to seeing an enhancement to KSA to make your job easier
when handling exceptions.
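In the meantime, one stdlib-only way to build that catch tuple defensively is to keep only the names that actually resolve to exception classes — which is precisely what the py3 "catching classes that do not inherit from BaseException" TypeError is complaining about. A sketch against a stand-in module (not keystoneauth1's real layout):

```python
import inspect
import types

# Stand-in module: its __all__ mixes real exception names with a
# non-exception member, which is the py3 trap -- putting a
# non-BaseException into the catch tuple raises the TypeError above.
ks_like = types.ModuleType("ks_like")


class ConnectFailure(Exception):
    pass


class SSLError(ConnectFailure):
    pass


ks_like.ConnectFailure = ConnectFailure
ks_like.SSLError = SSLError
ks_like.retriable_errors = ("ConnectFailure",)  # not an exception class
ks_like.__all__ = ["ConnectFailure", "SSLError", "retriable_errors"]


def exception_classes(module, names):
    """Resolve each name on the module, keeping only exception classes."""
    candidates = (getattr(module, name, None) for name in names)
    return tuple(cls for cls in candidates
                 if inspect.isclass(cls) and issubclass(cls, BaseException))


CAUGHT = exception_classes(ks_like, ks_like.__all__)

try:
    raise SSLError("handshake failed")
except CAUGHT as exc:
    handled = str(exc)
```

If KSA grew a public base class or an explicit tuple of retriable exceptions, the filtering would disappear entirely — which is roughly the enhancement being discussed.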


-- 

Thanks,

Matt Riedemann





Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Morgan Fainberg
On Mon, Dec 5, 2016 at 3:21 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

>
>>
>> On Mon, Dec 5, 2016 at 2:31 PM, Andrey Grebennikov <
>> agrebenni...@mirantis.com> wrote:
>>
>>> -Original Message-
 From: Andrey Grebennikov 
 Reply: OpenStack Development Mailing List (not for usage questions)
 
 Date: December 5, 2016 at 12:22:09
 To: openstack-dev@lists.openstack.org 
 Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation

 > Hi keystoners,

I'm not a keystoner, but I hope you don't mind my replying.
>>>
>>>
 > I'd like to open the discussion about the little feature which I'm
 trying
 > to push forward for a while but I need some
 feedbacks/opinions/concerns
 > regarding this.
 > Here is the review I'm talking about https://review.
 > openstack.org/#/c/403866/
 >
 > What I'm trying to cover is multi-region deployment, which includes
 > geo-distributed cloud with independent Keystone in every region.
 >
 > There is a number of use cases for the change:
 > 1. Allow users to re-use their tokens in all regions across the
 distributed
 > cloud. With global authentication (LDAP backed) and same roles names
 this
> is only one missing piece which prevents the user from switching between
 regions
> even within a single Horizon session.

 So this just doesn't sound right to me. You say above that there are
 independent Keystone deployments in each region. What token type are
 you using that each region could validate a token (assuming project
 IDs that are identical across regions) that would do this for
 "independent Keystone" deployments?

 Specifically, let's presume you use Keystone's new default of fernet
 tokens and you have independently deployed Keystone in each region.
 Without synchronizing the keys Keystone uses to generate and validate
 fernet tokens, I can't imagine how one token would work across all
 regions. This sounds like a lofty goal.

 Further, if Keystone is backed by LDAP, why are there projects being
 created in the Keystone database at all? I thought using LDAP as a
 backend would avoid that necessity. (Again, I'm not a keystone
 developer ;))

 Sorry that I didn't mention this in the beginning.
>>> Yes, it is supposed to be fernet tokens installation for sure, UUID will
>>> not work by default, PKI is deprecated. Keys are supposed to be
>>> synchronized. Without it multi-site will never work even if I replicate the
>>> database.
>>> This is what I started from about half a year ago immediately after
>>> receiving the usecase. I created 2 clouds, replicated the key, set up each
>>> Keystone to know about both sites as Regions, made project IDs same, and
>>> voila - having global LDAP for authentication in place I could even switch
>>> between these regions within one Horizon session. So that one works.
>>>
>>> Next, the ability to store projects in LDAP was removed 2 releases ago.
>>> From my personal opinion (and in fact not just mine but hundreds of other
>>> users as well) this was one of the biggest mistakes.
>>> This is one of the major questions from my side to the community - if it
>>> was always possible to store project IDs in the external provider, and if
>>> it is still possible to do it for the userIDs - what is the point of
>>> preventing it now?
>>>
>>>
>> I want to go on record that we (the maintainers of Keystone) and those of
>> us who have spent a significant amount of time working through the LDAP
>> code came to the community and asked who used this feature (Through many
>> channels). There were exactly 2 responses of deployers using LDAP to store
>> project IDs. Both of them were open or actively working towards moving
>> projects into SQL (or alternatively, developing their own "resource"
>> store). The LDAP backend for resources (projects/domains) was poorly
>> supported, had limited interest from the community (for improvements) and
>> generally was a very large volume of work to bring up to being on-par with
>> the SQL implementation.
>>
>> Without the interest of stakeholders (and with few/no active users), it
>> wasn't feasible to continue to maintain it. There is nothing stopping you
>> from storing projects externally. You can develop a driver to communicate
>> with the backend of your choice. The main reason projects were stored in
>> LDAP was due to the "all or nothing" original design of "KeystoneV2 "; you
>> could store users in LDAP but you also had to store projects, role
>> assignments, etc. Most deployments only wanted Users in LDAP but suffered
>> through the rest because it was required (there was no split of User,
>> Resource, and Assignment like there is today).
>>
>> Maybe it was the time when I haven't yet actively 

Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Morgan Fainberg
On Mon, Dec 5, 2016 at 2:31 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

> -Original Message-
>> From: Andrey Grebennikov 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: December 5, 2016 at 12:22:09
>> To: openstack-dev@lists.openstack.org 
>> Subject:  [openstack-dev] [keystone] Custom ProjectID upon creation
>>
>> > Hi keystoners,
>>
>> I'm not a keystoner, but I hope you don't mind my replying.
>
>
>> > I'd like to open the discussion about the little feature which I'm
>> trying
>> > to push forward for a while but I need some feedbacks/opinions/concerns
>> > regarding this.
>> > Here is the review I'm talking about https://review.
>> > openstack.org/#/c/403866/
>> >
>> > What I'm trying to cover is multi-region deployment, which includes
>> > geo-distributed cloud with independent Keystone in every region.
>> >
>> > There is a number of use cases for the change:
>> > 1. Allow users to re-use their tokens in all regions across the
>> distributed
>> > cloud. With global authentication (LDAP backed) and same roles names
>> this
>> > is only one missing piece which prevents the user from switching between
>> regions
>> > even within a single Horizon session.
>>
>> So this just doesn't sound right to me. You say above that there are
>> independent Keystone deployments in each region. What token type are
>> you using that each region could validate a token (assuming project
>> IDs that are identical across regions) that would do this for
>> "independent Keystone" deployments?
>>
>> Specifically, let's presume you use Keystone's new default of fernet
>> tokens and you have independently deployed Keystone in each region.
>> Without synchronizing the keys Keystone uses to generate and validate
>> fernet tokens, I can't imagine how one token would work across all
>> regions. This sounds like a lofty goal.
>>
>> Further, if Keystone is backed by LDAP, why are there projects being
>> created in the Keystone database at all? I thought using LDAP as a
>> backend would avoid that necessity. (Again, I'm not a keystone
>> developer ;))
>>
>> Sorry that I didn't mention this in the beginning.
> Yes, it is supposed to be fernet tokens installation for sure, UUID will
> not work by default, PKI is deprecated. Keys are supposed to be
> synchronized. Without it multi-site will never work even if I replicate the
> database.
> This is what I started from about half a year ago immediately after
> receiving the usecase. I created 2 clouds, replicated the key, set up each
> Keystone to know about both sites as Regions, made project IDs same, and
> voila - having global LDAP for authentication in place I could even switch
> between these regions within one Horizon session. So that one works.
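The "keys are supposed to be synchronized" point above is the crux: fernet validation is a symmetric-key operation, so a token minted in one region is only accepted elsewhere if the signing keys were replicated. A toy stdlib illustration of that property (HMAC standing in for fernet; this is not keystone's actual token format):

```python
import hashlib
import hmac
import os

TAG_LEN = hashlib.sha256().digest_size  # 32-byte MAC appended to payload


def issue_token(key, payload):
    """Toy token: payload plus a keyed MAC over it."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()


def validate(key, token):
    """Only a holder of the same key can recompute a matching MAC."""
    payload, tag = token[:-TAG_LEN], token[-TAG_LEN:]
    return hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).digest(), tag)


region_a_key = os.urandom(32)
region_b_key = os.urandom(32)          # regions NOT synchronized

token = issue_token(region_a_key, b"user:demo project:p1")

accepted_locally = validate(region_a_key, token)   # issuing region
accepted_remotely = validate(region_b_key, token)  # other region

region_b_key = region_a_key                        # replicate the keys
accepted_after_sync = validate(region_b_key, token)
```

Real fernet additionally encrypts the payload and supports rotation across multiple acceptable keys, which is what keystone-manage's fernet key rotation manages.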
>
> Next, the ability to store projects in LDAP was removed 2 releases ago.
> From my personal opinion (and in fact not just mine but hundreds of other
> users as well) this was one of the biggest mistakes.
> This is one of the major questions from my side to the community - if it
> was always possible to store project IDs in the external provider, and if
> it is still possible to do it for the userIDs - what is the point of
> preventing it now?
>
>
I want to go on record that we (the maintainers of Keystone) and those of
us who have spent a significant amount of time working through the LDAP
code came to the community and asked who used this feature (Through many
channels). There were exactly 2 responses of deployers using LDAP to store
project IDs. Both of them were open or actively working towards moving
projects into SQL (or alternatively, developing their own "resource"
store). The LDAP backend for resources (projects/domains) was poorly
supported, had limited interest from the community (for improvements) and
generally was a very large volume of work to bring up to being on-par with
the SQL implementation.

Without the interest of stakeholders (and with few/no active users), it
wasn't feasible to continue to maintain it. There is nothing stopping you
from storing projects externally. You can develop a driver to communicate
with the backend of your choice. The main reason projects were stored in
LDAP was due to the "all or nothing" original design of "KeystoneV2"; you
could store users in LDAP but you also had to store projects, role
assignments, etc. Most deployments only wanted Users in LDAP but suffered
through the rest because it was required (there was no split of User,
Resource, and Assignment like there is today).

> 2. Automated tools responsible for statistics collection may access all
>> > regions using one token (real customer's usecase)
>>
>> Why can't the automated tools be updated to talk to each Keystone and
>> get a token while talking to that region?
>>
>>
> They may. Depending on what is currently being used in production. It is
> not always so easy to completely 
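For what it's worth, the per-region approach Ian suggests above is small to sketch; get_token and get_stats below are stand-ins for real keystoneauth sessions and service calls:

```python
def collect_region_stats(regions, get_token, get_stats):
    """Authenticate against each region's own keystone, then query that
    region with its own token -- no cross-region token reuse needed."""
    results = {}
    for region, auth_url in regions.items():
        token = get_token(auth_url)          # one token per region
        results[region] = get_stats(region, token)
    return results


# Stand-in callables; a real tool would use keystoneauth1 sessions.
tokens = {"https://region-one:5000": "tok-1",
          "https://region-two:5000": "tok-2"}

stats = collect_region_stats(
    {"region-one": "https://region-one:5000",
     "region-two": "https://region-two:5000"},
    get_token=tokens.get,
    get_stats=lambda region, token: {"region": region, "token": token})
```

Each region validates only its own token, so nothing here depends on the cross-region fernet key replication discussed elsewhere in the thread.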

Re: [openstack-dev] [keystone] team logo (initial draft)

2016-12-01 Thread Morgan Fainberg
Looks good! Commented on the form, but the "grey section" might be even
better with a little color to it. As it is, it might be too "stark" a
contrast against a black laptop/background if the white sections are
opaque, and it might fade into a white or silver background (aka MacBook
(Pro) style).

It might even be cooler if the grey sections were provided with a variety
of color differences.

Overall the turtle looks great stylistically, I like that the shell almost
has a "keystone" (as in what goes in an arch) shape to it.

Cheers,
--Morgan

On Thu, Dec 1, 2016 at 2:09 PM, Steve Martinelli 
wrote:

> keystoners, we finally have a logo! well a draft version of it :)
>
> Please provide feedback by Tuesday, Dec. 13 (good or bad) at:
> www.tinyurl.com/OSmascot
> Heidi (cc'ed) will be out of the office Dec. 2-12 but promises to respond
> to questions as swiftly as possible when she returns.
>
> All hail the turtle!
>
> stevemar
>


Re: [openstack-dev] [keystone][devstack][rally][python-novaclient][magnum] switching to keystone v3 by default

2016-12-01 Thread Morgan Fainberg
On Dec 1, 2016 8:25 AM, "Andrey Kurilin"  wrote:
>
> As I replied at IRC, please do not mix two separate issues!
> Yes, we have several scenarios which do not support keystone v3 yet. It
is an issue, but it is unrelated to the one described in the first mail.
> We have a job which is configured with proper IDENTITY_API_VERSION flag
and should be launched against Keystone v2, but there is only keystone v3
and it is a real issue.
>
> On Thu, 1 Dec 2016 at 17:48, Lance Bragstad  wrote:
>>
>> FWIW - i'm seeing a common error in several of the rally failures [0]
[1] [2] [3]. Dims also pointed out a few bugs in rally for keystone v3
support [4].
>>
>> I checked with the folks in #openstack-containers to see if they were
experiencing any more fallout, but it looks like the magnum gate is under
control [5]. We're currently in #openstack-keystone talking through options
for the rally situation in case anyone feels like joining.
>>
>>
>> [0]
http://logs.openstack.org/87/404887/4/check/gate-rally-dsvm-neutron-existing-users-rally/ff60a83/console.html#_2016-12-01_08_05_55_268772
>> [1]
http://logs.openstack.org/43/405143/3/check/gate-rally-dsvm-neutron-existing-users-rally/3ee975b/console.html#_2016-12-01_08_39_02_618302
>> [2]
http://logs.openstack.org/83/394583/26/check/gate-rally-dsvm-cli/af28c0f/console.html#_2016-12-01_14_09_19_584427
>> [3]
http://logs.openstack.org/83/394583/26/check/gate-rally-dsvm-neutron-existing-users-rally/26cd009/console.html#_2016-12-01_14_15_17_147016
>> [4] https://bugs.launchpad.net/rally?field.searchtext=keystone+v3
>> [5]
http://eavesdrop.openstack.org/irclogs/%23openstack-containers/%23openstack-containers.2016-12-01.log.html#t2016-12-01T14:57:00
>>
>> On Thu, Dec 1, 2016 at 6:39 AM, Spyros Trigazis 
wrote:
>>>
>>> I think for magnum we are OK.
>>>
>>> This job [1] finished using keystone v3 [2]
>>>
>>> Spyros
>>>
>>> [1]
http://logs.openstack.org/93/400593/9/check/gate-functional-dsvm-magnum-api/93e8c14/
>>> [2]
http://logs.openstack.org/93/400593/9/check/gate-functional-dsvm-magnum-api/93e8c14/logs/devstacklog.txt.gz#_2016-12-01_11_32_58_033
>>>
>>> On 1 December 2016 at 12:26, Davanum Srinivas  wrote:

 It has taken years to get here with a lot of work from many folks.

 -1 for Any revert!

 https://etherpad.openstack.org/p/v3-only-devstack
 http://markmail.org/message/aqq7itdom36omnf6

https://review.openstack.org/#/q/status:merged+project:openstack-dev/devstack+branch:master+topic:bp/keystonev3

 Thanks,
 Dims

 On Thu, Dec 1, 2016 at 5:38 AM, Andrey Kurilin 
wrote:
 > Hi folks!
 >
 > Today devstack team decided to switch to keystone v3 by default[0].
> Imo, it is an important thing, but it was made silently, so other
projects were
> unable to prepare for that change. Also, the proposed way to select
Keystone API
 > version via devstack configuration doesn't work(IDENTITY_API_VERSION
 > variable doesn't work [1] ).
 >
 > Switching to keystone v3 broke at least Rally and Magnum(based on
comment to
 > [0])  gates. Also, python-novaclient has two separate jobs for
checking
 > compatibility with keystone V2 and V3. One of these jobs became
redundant.
 >
 > That is why I submitted a revert [2] .
 >
> PS: Please, do not make such changes in silence!
 >
 > [0] - https://review.openstack.org/#/c/386183
 > [1] -
 >
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/rally.yaml#L70-L74
 > [2] - https://review.openstack.org/405264
 >
 > --
 > Best regards,
 > Andrey Kurilin.
 >
 >
 >



 --
 Davanum Srinivas :: https://twitter.com/dims


>>>
>>>
>>>
>>>
>>>
>>
>>

Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-15 Thread Morgan Fainberg
On Tue, Nov 15, 2016 at 5:16 PM, Jay Pipes  wrote:

> Awesome start, Monty :) Comments inline.
>
> On 11/15/2016 09:56 AM, Monty Taylor wrote:
>
>> Hey everybody!
>>
>> At this past OpenStack Summit the results of the Interop Challenge were
>> shown on stage. It was pretty awesome - 17 different people from 17
>> different clouds ran the same workload. And it worked!
>>
>> However, one of the reasons it worked is because they all used the
>> Ansible modules we wrote that are based on the shade library that
>> contains the business logic needed to hide vendor differences in clouds.
>> That means that there IS a fantastic OpenStack interoperability story -
>> but only if you program in Python. That's less awesome.
>>
>> With that in mind - I'm pleased to announce a new project that aims to
>> address that - oaktree.
>>
>> oaktree is a gRPC-based API porcelain service for OpenStack that is
>> based on the shade library and I'd love some help in writing it.
>>
>> Basing oaktree on shade gets not only the business logic. Shade already
>> understands a multi-cloud world. And because we use shade in Infra for
>> nodepool, it already has caching, batching and thundering herd
>> protection sorted to be able to hand very high loads efficiently. So
>> while oaktree is new, the primary logic and fundamentals are all shade
>> and are battle-tested.
>>
>
> ++ muy bueno.
>
> The barrier to deployers adding it to their clouds needs to be as low as
>> humanly possible. So as we work on it, ensuring that we keep it
>> dead-simple to install, update and operate must be a primary concern.
>>
>> Where are we and what's next?
>>
>> oaktree doesn't do a whole lot that's terribly interesting at the
>> moment. We have all of the development scaffolding and gate jobs set up
>> and a few functions implemented.
>>
>> oaktree exists currently as two repos - oaktree and oaktreemodel:
>>
>>   http://git.openstack.org/cgit/openstack/oaktree
>>   http://git.openstack.org/cgit/openstack/oaktreemodel
>>
>> oaktreemodel contains the Protobuf definitions and the build scripts to
>> produce Python, C++ and Go code from them. The python code is published
>> to PyPI as a normal pure-python library. The C++ code is published as a
>> source tarball and the Go code is checked back in to the same repo so
>> that go works properly.
>>
>
> Very nice. I recently started playing around with gRPC myself for some
> ideas I had about replacing part of nova-compute with a Golang worker
> service that can tolerate lengthy disconnections with a centralized control
> plane (hello, v[E]CPE!).
>
> It's been (quite) a few years since I last used protobufs (hey, remember
> Drizzle?) but it's been a blast getting back into protobufs development.
> Now that I see you're using a similar approach for oaktree, I'm definitely
> interested in contributing.
>
> oaktree depends on the python oaktreemodel library, and also on shade.
>> It implements the server portion of the gRPC service definition.
>>
>> Currently, oaktree can list and search for flavors, images and floating
>> ips. Exciting right? Most of the work to expose the rest of the API that
>> shade can provide at the moment is going to be fairly straightforward -
>> although in each case figuring out the best mapping will take some care.
>>
>> We have a few major things that need some good community design. These
>> are also listed in a todo.rst file in the oaktree repo which is part of
>> the docs:
>>
>>   http://oaktree.readthedocs.io/en/latest/
>>
>> The auth story. The native/default auth for gRPC is oauth. It has the
>> ability for pluggable auth, but that would raise the barrier for new
>> languages. I'd love it if we can come up with a story that involves
>> making API users in keystone and authorizing them to use oaktree via an
>> oauth transaction.
>>
>
> ++
>
> > The keystone auth backends currently are all about
>
>> integrating with other auth management systems, which is great for
>> environments where you have a web browser, but not so much for ones
>> where you need to put your auth credentials into a file so that your
>> scripts can work. I'm waving my hands wildly here - because all I really
>> have are problems to solve and none of the solutions I have are great.
>>
>> Glance Image Uploads and Swift Object Uploads (and downloads). Having
>> those two data operations go through an API proxy seems inefficient.
>>
>
> Uh, yeah :)
>
> However, having them not in the API seems like a bad user experience.
>> Perhaps if we take advantage of the gRPC streaming protocol support
>> doing a direct streaming passthrough actually wouldn't be awful. Or
>> maybe the better approach would be for the gRPC call to return a URL and
>> token for a user to POST/PUT to directly. Literally no clue.
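That second option — return a URL plus a short-lived signature and let the client talk to the object store directly — is conceptually what Swift's tempurl middleware already does. A toy sketch of the handshake (the SECRET handling and names are purely illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"shared-between-api-and-object-store"  # illustrative only


def grant_upload(container, obj, ttl=300, now=None):
    """What the gRPC call could return instead of proxying bytes: a
    direct URL plus a signature the object store can verify, valid
    only for this method, path, and a short expiry window."""
    expires = int(time.time() if now is None else now) + ttl
    path = "/v1/%s/%s" % (container, obj)
    msg = ("PUT\n%d\n%s" % (expires, path)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"url": "https://objects.example.com" + path,
            "expires": expires, "sig": sig}


def store_accepts(method, path, expires, sig, now=None):
    """Store-side check: reject expired grants and bad signatures."""
    if (time.time() if now is None else now) > expires:
        return False
    msg = ("%s\n%d\n%s" % (method, expires, path)).encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig)


grant = grant_upload("images", "cirros.img", now=1000)
ok = store_accepts("PUT", "/v1/images/cirros.img",
                   grant["expires"], grant["sig"], now=1100)
stale = store_accepts("PUT", "/v1/images/cirros.img",
                      grant["expires"], grant["sig"], now=5000)
```

Whether the scheme is tempurl-style HMAC or something keystone-issued is an open question; the win either way is that image/object bytes never transit the gRPC proxy.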
>>
>> In any case - I'd love help from anyone who thinks this sounds like a
>> good idea. In a perfect world we'll have something ready for 1.0 by
>> Atlanta.
>>
>
> I'll try my best to dig into the 

Re: [openstack-dev] [keystone] meeting format poll

2016-11-15 Thread Morgan Fainberg
I agree with Steve. I just want to highlight that the wiki is viable again
if we wanted to change. The move to etherpad was a necessity; now that we
have options, we should be sure everyone is still happy with it.

On Tue, Nov 15, 2016 at 12:31 PM, Steve Martinelli 
wrote:

> I really like the etherpad approach, it's nice to see the history from
> previous meetings
>
> On Tue, Nov 15, 2016 at 2:39 PM, Lance Bragstad 
> wrote:
>
>> Hey folks,
>>
>> In today's keystone meeting, Morgan mentioned that we had the ability to
>> go back to using OpenStack Wikis for meeting agendas. I created a poll to
>> get feedback [0].
>>
>> Let's keep it open for the week and look at the results as a team at our
>> next meeting.
>>
>> Thanks!
>>
>> [0] https://goo.gl/forms/Gs4lZxgktRzlwHAn2
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>


Re: [openstack-dev] Anyone want to meetup at KubeCon?

2016-11-09 Thread Morgan Fainberg
On Nov 8, 2016 4:53 PM, "Stephen McQuaid"  wrote:
>
> We have been developing a keystone authz webhook for easy integration.
If anyone is interested we can look at open-sourcing it
>
>
>
> Stephen McQuaid
>
> Sr. Software Engineer | Kubernetes & Openstack
>
> GoDaddy
>
>
>
> M: 484.727.8383 | O: 669.600.5852
>
> smcqu...@godaddy.com
>
>
>
>

If I were not out of town and away from the Pacific Northwest, I would have
enjoyed meeting up for these discussions. Unfortunately, I won't be back to
the PNW (and close-ish to Seattle) until Saturday.

--Morgan


Re: [openstack-dev] [all][dev][python] constructing a deterministic representation of a python data structure

2016-11-03 Thread Morgan Fainberg
On Thu, Nov 3, 2016 at 1:04 PM, Amrith Kumar  wrote:

> Gordon,
>
> You can see a very quick-and-dirty prototype of the kind of thing I'm
> looking to do in Trove at
> https://gist.github.com/amrith/6a89ff478f81c2910e84325923eddebe
>
> Uncommenting line 51 would simulate a bad hash.
>
> I'd be happy to propose something similar in oslo.messaging if you think
> that would pass muster there.
>
> -amrith
>
> -Original Message-
> From: gordon chung [mailto:g...@live.ca]
> Sent: Thursday, November 3, 2016 3:09 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][dev][python] constructing a
> deterministic
> representation of a python data structure
>
>
>
> On 03/11/16 02:24 PM, Amrith Kumar wrote:
>
> >
> > So, just before calling call() or cast(), I could compute the hash and
> > stuff it into the dictionary that is being sent over, and I can do the
> > same on the receiving side. But since I cannot guarantee that the
> > representation on the receiving side is necessarily identical to the
> > representation on the sending side, I have issues computing the hash.
> >
> >
>
> based on description, you're trying to sign the messages? there was some
> effort done in oslo.messaging[1]
>
> we do something similar in Ceilometer to sign IPC messages[2]. it does add
> overhead though.
>
> [1] https://review.openstack.org/#/c/205330/
> [2]
> https://github.com/openstack/ceilometer/blob/
> ffc9ee99c10ede988769907fdb0594a
> 512c890cd/ceilometer/publisher/utils.py#L43-L58
>
> cheers,
> --
> gord
>
>
I had to solve a similar issue for deterministic key generation in dogpile
(the key for memcache, etc.) when memoizing methods/functions with kwargs.
There are a couple of issues you run into: default args are not represented
in **kwargs, and keyword args can come in any order.

If you want an example of what we did to generate the cache key
programmatically, you can look here:

https://bitbucket.org/zzzeek/dogpile.cache/src/669582c2e5bf12b1303f50c4b7ba3dad308eb1cc/dogpile/cache/util.py?at=master&fileviewer=file-view-default#util.py-67:118

You don't need all the namespace info, and probably not the fn/module info
either, but this shows how to look at the call and ensure defaults also
match (or how to extract default kwargs if needed) before passing things
down to RPC.

--Morgan


Re: [openstack-dev] [requirements][lbaas] gunicorn to g-r

2016-10-17 Thread Morgan Fainberg
On Oct 17, 2016 17:32, "Thomas Goirand"  wrote:
>
> On 10/17/2016 08:43 PM, Adam Harwell wrote:
> > Jim, that is exactly my thought -- the main focus of g-r as far as I was
> > aware is to maintain interoperability between project dependencies for
> > openstack deploys, and since our amphora image is totally separate, it
> > should not be restricted to g-r requirements.
>
> The fact that we have a unified version number of a given lib in all of
> OpenStack is also because that's a requirement of downstream distros.
>
> Imagine that someone would like to build the Octavia image using
> exclusively packages from ...
>
> > I brought this up, but
> > others thought it would be prudent to go the g-r route anyway.
>
> It is, and IMO you should go this route.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>

For the record, uwsgi was not (at least at one point) allowed in g-r
because it was not a "runtime dependency"; it was to be installed more like
Apache mod_wsgi at the time. Gunicorn could fall into the same category: it
is meant to be used in conjunction with the runtime, but not be a hard
requirement of the runtime itself.


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-17 Thread Morgan Fainberg
On Oct 17, 2016 12:15, "Clint Byrum"  wrote:
>
> Excerpts from Chris Dent's message of 2016-10-17 10:38:25 +0100:
> > On Mon, 17 Oct 2016, Renat Akhmerov wrote:
> >
> > > If you are a developer, of course, PTG is an important event to
> > > attend. But… Being a developer, I would also love to attend summits
> > > also. For a bunch of reasons like catching up with the activities
> > > wider than my current focus of development, participating in getting
> > > user feedback and help clarifying possible misunderstanding of
> > > technical things being discussed which makes feedback gathering
> > > process more valuable (I believe that often users don’t know what
> > > they want, at least, in details, we need to help them understand by
> > > sharing our experience) etc. Also, for purely psychological reason I
> > > think it’s very important for people who mostly focus on very
> > > specific tasks to sometimes go and see events like an OpenStack
> > > summit. From my experience, people often change their attitude to
> > > their work when they see how many people are interested in what they
> > > are working on in labs. And it’s almost impossible to find a better
> > > way of getting that feeling of participating in something tremendous
> > > than attending summits.
> >
> > This. A thousand times this.
> >
> > Summit is one of the few ways that I get to see the big picture and
> > that there are actual real people doing real things with OpenStack.
> >
>
> Agree with all of these things. However, a lot of us felt that the
> pressure to do both at the same time was compromising both to the point
> that the sum of these two things was not even equal to the parts.
>
> Perhaps more dev focused people will find themselves going to 2 PTG's,
> and just one summit, the one closer to them geographically, every year.
>

I have very mixed feelings about the PTG. The more time I spend thinking
about it, the less I am a fan of the distinction between the summit and the
PTG as it stands. If the summit became a yearly event and the PTG twice
yearly, I think many of the concerns would subside. Seeing as that is not
the case, I see the PTG as currently poorly communicated.

Given the choice as a developer between summit and PTG, I would pick summit.
I hope "OpenStack: the gathering" solves the split midcycles. But frankly,
at this point I agree that the lack of direct ops feedback, and the worry
about losing the larger picture, are serious detractors from what has been
proposed for the PTG.

It comes down to needing to keep a solid presence at both events for the
cores and PTLs, which is contrary to the original proposal. I hope I am
wrong in this assessment.


Re: [openstack-dev] [Keystone] Project name DB length

2016-10-06 Thread Morgan Fainberg
On Thu, Oct 6, 2016 at 7:06 AM, gordon chung  wrote:

>
>
> On 05/10/16 07:55 AM, Sean Dague wrote:
> > Except... the 64 char field in keystone isn't required to be a uuid4.
> > Which we ran into when attempting to remove it from the URLs in Nova.
> > There is no validation anywhere that requires that of keystone values.
> >
> > For instance, Rackspace uses ints.
>
> yeah, that's basically why we had to revert our attempt in Ceilometer.
> we tried to enforce uuid with some buffer and it was too difficult to
> track down every one we broke.
>
> >
> > Yes this is debt. And yes, a few things would be nicer if this was more
> > constrained, but as was recently stated on twitter, we've been calling
> > the 9th month the seventh month for 2000 years -
> > https://twitter.com/GonzoHacker/status/781890649444519937. Some times
> > the cost of fixing the thing really just isn't worth the potential
> > breaks to operators that were operating within the old constraints fine.
>
> i think this sums up my feeble attempt at initiating this as a cross
> project topic a while back. too many projects, too few participants, too
> little gain. we avoided the issue in Gnocchi and the original reason i
> brought it up became less important.
>
> it'd be nice if we could somehow enforce new attributes/columns to
> follow uuid standards in hopes that older stuff just eventually dies
> off. that said, i'm guessing stuff like user_id and project_id columns
> are sticking around for a while...
>
> cheers,
>
> --
> gord
>
>
The SHA256() rendering that does not conform to UUID is required to
identify federated users and easily map them to the correct identity
backend(s); it means that "UUID" standards cannot be used. Partly this
requirement comes from needing to generate the ID programmatically for
external consumers: the ID is a SHA256() hash over the "user's ID from the
IdP" and the "domain ID".

The older legacy information, such as LDAP partial DNs, should eventually
go away if the current path continues. Project IDs will likewise become
more and more consistent as long as Keystone's upstream code is used, since
LDAP-stored projects no longer exist and project IDs are always generated
by Keystone (they cannot be user supplied).

In short, user_id is a little more flexible than project_id, with older
installations potentially having some legacy data. New installations
(barring custom driver code) should be much more consistent.

On the original part of this thread about project name length: the limit is
largely historic and could be changed. 64 was convenient and fits nicely in
a UI; there is no specific technical reason it was limited to 64. It falls
to Steve and the rest of the Keystone team to determine whether the request
for longer project names outweighs the design/usability of the current
length.


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-03 Thread Morgan Fainberg
On Oct 3, 2016 14:15, "Edward Leafe"  wrote:
>
> On Oct 3, 2016, at 12:18 PM, Clay Gerrard  wrote:
> >
> >> After the nominations close, the election officials will assign each
candidate a non-identifying label, such as a random number, and those
officials will be the only ones who know which candidate is associated with
which number.
> >>
> > I'm really uneasy about this suggestion.  Especially when it comes to
re-election, for the purposes of accountability I think it's really
important that voters be able to identify the candidates.  For some people
there's a difference in what they say and what they end up doing when left
calling shots from the bubble for too long.
>
> This was a concern of mine, too, but IMO there haven't been too many
cases where a TC member has said they would support X and then fail to do
so. They might not prevail, being one of 13, but when that issue came up
they were almost always consistent with what they said.
>
> > As far as the other stuff... idk if familiarity == bias.  I'm sure lots
of occasions people vote for people they know because they *trust* them;
but I don't think that's bias?  I think a more common problem is when
people vote for a *name* they recognize without really knowing that person
or what they're about.  Or perhaps just as bad - *not* voting because they
realize they have on context to consider these candidates beyond name
familiarity and an (optional) email.
>
> I think that with so many candidates for so few seats, most people simply
don't have the time or the interest to look very deeply into things. I know
that that shows up in the voting. Take the election from a year ago: there
were 619 votes cast for 19 candidates. Out of these:
> - 35 ballots only voted for one candidate
> - 102 ballots voted for three or fewer
> - 175 didn't even bother to vote for 6
> - only 159 bothered to rank all the candidates
>

I want to point out that the last statistic is not super useful. The very
nature of CIVS allows for duplicated ranks. I rank folks where I would like
them and explicitly stack the bottom with those not in my top X, as I see
them all as equally viable but lower on my priority list. So I am lumped
into that last statistic without it meaning I didn't actively and
consciously choose my ballot; I did so for ease of voting. (Also, the web
form on mobile, which is usually where I vote, is not as responsive and
sometimes might mis-rank folks.)

So, in short: don't use "failed to rank everyone" as a real metric. It
isn't representative of what you're implying.

> So I think that there is evidence that unless you are already well-known,
most people aren't going to take the time to dig deeper. Maybe anonymous
campaigns aren't the answer, but they certainly would help in this regard.
>
> > I think a campaign period, and especially some effort [1] to have
candidates verbalize their viewpoints on topics that matter to the
constituency could go a long way towards giving people some more context
beyond "i think this name looks familiar; I don't really recognize this
name"
>
> Agreed 100%! It was made worse this year because the nominations closed
on a Saturday, and with the late rush of people declaring their candidacy,
gave no time at all for any sort of campaign discussions before voting
began. There really needs to be a decent period of time allowed for people
to get answers to whatever questions they may have.
>
>
> -- Ed Leafe
>
>
>
>
>
>


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Morgan Fainberg
On Sep 21, 2016 09:37, "Adam Lawson"  wrote:
>
> But something else struck me, the velocity and sheer NUMBER of emails
that must be filtered to find and extract these key announcements is tricky
so I don't fault anyone for missing the needle in the haystack. Important
needle no doubt but I wonder if there are more efficient ways to ensure
important info is highlighted.
>
> My knee jerk idea is a way for individuals to subscribe to certain topics
that come into their inbox. I don't have a good way within Gmail to
sub-filter these which has been a historical problem for me in terms of
awareness of following hot topics.
>
> //adam
>
>
> Adam Lawson
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
> On Wed, Sep 21, 2016 at 9:28 AM, Adam Lawson  wrote:
>>
>> You know something that struck me, I noticed there were several teams
last cycle that did not elect a PTL so this round I was watching to see if
any teams did not have a PTL elected and presumed it was because of many of
the reasons surfaced in previous emails in this thread including being
heads down, watching other channels and potentially insufficient numbers of
individuals interested in the PTL role.
>>
>> So I waited and noticed Astara, Security and a handful of other projects
did not have a PTL elected so I picked Astara because I am an OpenStack
architect who specializes in SDN, security and distributed storage and
applied. Of course I missed the deadline by about 2 hours but Security was
another project I was interested in.
>>
>> So all this said, there are individuals interested in the PTL role to
ensure project teams have someone handling the logistics and coordination.
My issue however was that I was not yet eligible to be a candidate which
I'll remedy moving forward.
>>
>> I'm still interested in serving as a PTL for a project that needs one. I
personally believe that in the case of Security, there needs to be a
dedicated team due to the nature and impact of security breaches that
directly influence the perception of OpenStack as a viable cloud solution
for enterprises looking (or re-looking) at it for the first time.
>>
>> I'm not a full-time developer but an architect so I am planning to open
a new discussion about how PTL candidates are currently being qualified.
Again, different thread.
>>
>> For this thread, if there is a concern about PTL interest - it's there
and I would be open to helping the team in this regard if it helps keep the
team activity in the OpenStack marquee.
>>
>> //adam
>>
>>
>> Adam Lawson
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>> On Wed, Sep 21, 2016 at 8:56 AM, Clint Byrum  wrote:
>>>
>>> Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
>>> > Hello,
>>> >
>>> > it's definately our bad that we missed elections in OpenStackSalt
>>> > project. Reason is similar to Rob's - we are active on different
>>> > channels (mostly IRC as we keep regular meetings) and don't used to
>>> > reading mailing lists with lots of generic topics (it would be good to
>>> > have separate mailing list for such calls and critical topics or
>>> > individual mails to project's core members).
>>> >
>>> > Our project is very active [1], trying to do things the Openstack way
>>> > and I think it would be a pitty to remove it from Big Tent just
because
>>> > we missed mail and therefore our first PTL election.
>>> >
>>> > Of course I don't want to excuse our fault. In case it's not too late,
>>> > we will try to be more active in mailing lists like openstack-dev and
>>> > not miss such important events next time.
>>> >
>>> > [1] http://stackalytics.com/?module=openstacksalt-group
>>> >
>>>
>>> Seems like we need a bit added to this process which makes sure big tent
>>> projects have their primary IRC channel identified, and a list of core
>>> reviewer and meeting chair IRC nicks to ping when something urgent comes
>>> up. This isn't just useful for elections, but is probably something the
>>> VMT would appreciate as well, and likely anyone else who has an urgent
>>> need to make contact with a team.
>>>
>>> I think it might also be useful if we could make the meeting bot remind
>>> teams of any pending actions they need to take such as elections upon
>>> #startmeeting.
>>>
>>> Seems like all of that could be automated.
>>>
>>>

Re: [openstack-dev] Too many mails on announce list again :)

2016-09-20 Thread Morgan Fainberg
On Tue, Sep 20, 2016 at 9:18 AM, Doug Hellmann 
wrote:

> Excerpts from Thierry Carrez's message of 2016-09-20 10:19:04 +0200:
> > Steve Martinelli wrote:
> > > I think bundling the puppet, ansible and oslo releases together would
> > > cut down on a considerable amount of traffic. Bundling or grouping new
> > > releases may not be the most accurate, but if it encourages the right
> > > folks to read the content instead of brushing it off, I think thats
> > > worth while.
> >
> > Yeah, I agree that the current "style" of announcing actively trains
> > people to ignore announces. The trick is that it's non-trivial to
> > regroup announces (as they are automatically sent as a post-job for each
> > tag).
> >
> > Solutions include:
> >
> > * A daily job that catches releases of the day and batches them into a
> > single announce (issue being you don't get notified as soon as the
> > release is available, and the announce email ends up being extremely
> long)
> >
> > * A specific -release ML where all announces are posted, with a daily
> > job to generate an email (one to -announce for services, one to -dev for
> > libraries) that links to them, without expanding (issue being you don't
> > have the natural thread in -dev to react to a broken oslo release)
> >
> > * Somehow generate the email from the openstack/release request rather
> > than from the tags
>
> One email, with less detail, generated when a file merges into
> openstack/release is my preference because it's easier to implement.
>
> Alternately we could move all of the announcements we have now to
> a new -release list and folks that only want one email a day can
> subscribe using digest delivery. Of course they could do that with
> the list we have now, too.
>
> Doug
>

A release list makes a lot of sense. If you also include clear metadata in
the subject, such as the owning project, e.g. keystone (for keystoneauth,
keystonemiddleware, keystoneclient), people can do direct filtering for
what they care about (as well as use digest mode).

--/morgan


Re: [openstack-dev] venting -- OpenStack wiki reCAPTCHA

2016-09-09 Thread Morgan Fainberg
On Fri, Sep 9, 2016 at 8:43 AM, Tom Fifield  wrote:

>
>
> On 2016-09-09 at 10:41 AM, Tom Fifield wrote:
>
>>
>>
>> On 2016-09-08 at 8:36 PM, Jeremy Stanley wrote:
>>
>>> On 2016-09-09 01:10:15 + (+), Bhandaru, Malini K wrote:
>>>
 Is it just me who likes to hit the save button often?
 It gets tedious proving often that you are not a robot. Wiki
 reCAPTCHA likes proof even if saves are spaced less than a minute
 apart!
 Wiki Gods, hear my plea!

>>>
>>> I sympathize. That captcha addition is one of several tools we're
>>> leveraging currently to combat the ongoing spam/scammer/vandalism
>>> problems on wiki.openstack.org, as an alternative to shutting it
>>> down completely. Unfortunately even now I still spend a chunk of
>>> every day blocking new accounts created by abusers and cleaning up
>>> all their modifications, but am hoping that with other improvements
>>> we have pending the onslaught will lessen and we can revisit some of
>>> the more intrusive mechanisms on which we've been forced to rely
>>> (for example, I think we should be able to configure it so that
>>> users whose previous edits have been confirmed by a wiki admin are
>>> added to a group that bypasses the captcha, and then get a team of
>>> wiki groomers in the habit of adding known good accounts to that
>>> group).
>>>
>>>
>> Indeed - fungi has been doing amazing work :(
>>
>> I have been adding "known good" accounts to such a group - there's about
>> 64 so far:
>>
>> https://wiki.openstack.org/w/index.php?title=Special:ListUsers&group=autopatrol
>>
>>
>>
>> Is it possible to disable CAPTCHA for users in group "autopatrol"?
>>
>
> aha! It is indeed.
>
> Adding
>
> $wgGroupPermissions['autopatrol']['skipcaptcha'] = true;
>
>
> will solve this annoyance for Malini and others
>
>

For what it's worth, the captcha is the reason I stopped using/updating
the wiki, and it was a driver for keystone to move to using etherpad for
the weekly meeting agenda instead.

--Morgan


Re: [openstack-dev] [keystone] new core reviewer (rderose)

2016-09-02 Thread Morgan Fainberg
On Sep 2, 2016 08:44, "Brad Topol"  wrote:
>
> Congratulations Ron!!! Very well deserved!!!
>
> --Brad
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
> Steve Martinelli ---09/01/2016 10:47:49 AM---I want to welcome Ron De
Rose (rderose) to the Keystone core team. In a short time Ron has shown a v
>
> From: Steve Martinelli 
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
> Date: 09/01/2016 10:47 AM
>
> Subject: [openstack-dev] [keystone] new core reviewer (rderose)
> 
>
>
>
> I want to welcome Ron De Rose (rderose) to the Keystone core team. In a
short time Ron has shown a very positive impact. Ron has contributed
feature work for shadowing LDAP and federated users, as well as enhancing
password support for SQL users. Implementing these features and picking up
various bugs along the way has helped Ron to understand the keystone code
base. As a result he is able to contribute to the team with quality code
reviews.
>
> Thanks for all your hard work Ron, we sincerely appreciate it.
>
>
> Steve
>
>
>
>

Ahahaha! Another person to direct questions to now! ;)

Congrats Ron!


Re: [openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-02 Thread Morgan Fainberg
On Sep 2, 2016 09:39, "rezroo"  wrote:
>
> Hello - I'm using Liberty release devstack for the below scenario. I have
created project "abcd" with "john" as Member. I've launched one instance, I
can use curl to list the instance. No problem.
>
> I then modify /etc/nova/policy.json and redefine "admin_or_owner" as
follows:
>
> "admin_or_owner":  "role:admin or is_admin:True or
project_id:%(project_id)s",
>
> My expectation was that I would be able to list the instance in abcd
using a token of admin. However, when I use the token of user "admin" in
project "admin" to list the instances I get the following error:
>
> stack@vlab:~/token$ curl
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token:
f221164cd9b44da6beec70d6e1f3382f"
> {"badRequest": {"message": "Malformed request URL: URL's project_id
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}
>
> 378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.
>
> I'm confused by this behavior and the reported error, because if the
project id used to acquire the token is the same as the project id in
/servers/detail then I would be an "owner". So where is the "admin" in
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever
functionality "rule:admin_or_owner" allows in policy.json, regardless of
the project id used to acquire the token?
>
> I do understand that I can use the admin user and project to get all
instances of all tenants:
> curl
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"
>
> My question is more centered around why nova has the additional check to
make sure that the token project id matches the url project id - and
whether this is a keystone requirement, or only nova/cinder and programs
that have a project-id in their API choose to do this. In other words, is
it the developers of each project that decide to only expose some APIs for
administrative functionality (such all-tenants), but restrict everything
else to owners, or keystone requires this check?
>
> Thanks,
>
> Reza
>
>

I believe this is a nova specific extra check. There is (iirc) a way to
list out the instances for a given tenant but I do not recall the
specifics.

Keystone does not know anything about the resource ownership in Nova. The
Nova check is fully self-contained.
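To make the distinction concrete, here is a toy evaluator for a rule like "role:admin or is_admin:True or project_id:%(project_id)s". Real deployments evaluate policy via oslo.policy; this sketch and its names are mine. The point is that nothing in the rule itself compares the token's project against the project ID embedded in the request URL: that is the separate, nova-specific check that produced the 400 above, and it runs regardless of what the policy rule says.

```python
def admin_or_owner(creds, target):
    """Toy evaluation of "role:admin or project_id:%(project_id)s"."""
    # "role:admin" -- any token carrying the admin role passes.
    if "admin" in creds.get("roles", ()):
        return True
    # "project_id:%(project_id)s" -- the token's project must match
    # the project that owns the target resource.
    return creds.get("project_id") == target.get("project_id")


admin = {"roles": ["admin"], "project_id": "f73175d9"}
owner = {"roles": ["Member"], "project_id": "378a4b9e"}
other = {"roles": ["Member"], "project_id": "deadbeef"}
server = {"project_id": "378a4b9e"}

assert admin_or_owner(admin, server)      # admin passes the policy rule...
assert admin_or_owner(owner, server)      # ...and so does the owner,
assert not admin_or_owner(other, server)  # but an unrelated project fails.
# The 400 in the report comes from nova's URL-vs-context project_id
# comparison, which happens before this policy rule is ever consulted.
```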

--Morgan
Please excuse brevity and typos, sent from a mobile device.


[openstack-dev] [tc] Stepping Down.

2016-08-02 Thread Morgan Fainberg
Based upon my personal time demands, among a number of other reasons, I will
be stepping down from the Technical Committee. This is planned to take
effect with the next TC election, so that my seat will be up to be filled at
that time.

For those who elected me in, thank you.

Regards,
--Morgan Fainberg
IRC: notmorgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Retirement of openstack/cloud-init repository

2016-07-29 Thread Morgan Fainberg
On Jul 29, 2016 17:13, "Joshua Harlow"  wrote:
>
> Hi all,
>
> I'd like to start the retirement (well actually it's more of shifting) of
the openstack/cloud-init repository to its new location that *finally*
removes the old bzr version of itself.
>
> The long story is that the cloud-init folks (myself included) moved the
bzr repository to openstack/cloud-init and cloud-init 2.0 work was started
there while 0.7.x work was still done in bzr.
>
> The 0.7.x branches of openstack/cloud-init then tried to stay up with the
0.7.x work but constantly fell behind, and 2.0 work has somewhat slowed
down (not entirely stalled just yet) so in order to help out the whole
thing here the canonical folks (mainly scott and friends) have finally
moved the old bzr repository off of bzr and now it's connected into the
launchpad git system and all history has been moved there and such (the 2.0
branch from openstack/cloud-init is also mirrored there) so at this point
there isn't a need to have git and bzr when now one location (and one
location that can please all the folks) exists.
>
> https://git.launchpad.net/cloud-init
>
> So sometime next week I'm going to start the move of the
openstack/cloud-init (which is outdated) to the attic and direct new people
to the new location (or perhaps we can have infra just point to that in
some kind of repo notes?).
>
> Anyways, TLDR; git at launchpad for cloud-init, no more bzr or need to
have out of sync ~sort of~ mirror in openstack, win!
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As I recall we no longer "move" the git repositories. We simply remove the
permissions/ACLs so new reviews aren't added/approved, and often the repo
is emptied with only a README pointing to the new location.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Morgan Fainberg
On Jul 28, 2016 22:50, "Joshua Harlow"  wrote:
>
> Hi folks,
>
> I was thinking it might be useful to see what other folks think about
switching (or migrating all the current bots we have in openstack) to be
based on errbot plugins.
>
> Errbot @ http://errbot.io/en/latest/ takes a slightly different approach
to bots and treats each bot 'feature' as a plugin that can be activated and
deactivated with-in the context of the same bot (even doing so
dynamically/at runtime).
>
> It also allows for those that use slack (or other backend @
http://errbot.io/en/latest/features.html) to be able to 'seamlessly' use
the same plugins and just switching a tiny amount config to use a different
'bot backend'.
>
> I've been experimenting with it more recently and have a gerritbot (sort
of equivalent) @ https://github.com/harlowja/gerritbot2 and also have been
working on a oslobot plugin @ https://review.openstack.org/#/c/343857/ and
during this exploration it has gotten me to think that we could move most
of the functionality of the various bots in openstack (patchbot, openstack
- really meetbot, gerritbot and others?) under the same umbrella (or at
least convert them into plugins that folks can run on IRC or if they want
to run them on some other backend, that's cool too).
>
> The hardest one I can think would be meetbot, although the code @
https://github.com/openstack-infra/meetbot doesn't look impossible (or
really that hard to convert to an errbot plugin).
>
> What do people think?
>
> Any strong preference?
>
> I was also thinking that as a result we could then just have a single
'openstack' bot and also turn on plugins like:
>
> - https://github.com/aherok/errbot_plugins (helps with timezone
conversions that might be useful to have for folks that keep on getting
them wrong).
> - some stackalytics integration bot?
> - something even better???
> - some other plugin @ https://github.com/errbotio/errbot/wiki
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As I recall this has been on a long list of "we want to do it". It really
just comes down to someone putting effort into making it happen.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Multi-factor Auth with Keystone and TOTP

2016-07-18 Thread Morgan Fainberg
On Sun, Jul 17, 2016 at 10:37 PM, Steve Martinelli 
wrote:

> Several comments inline
>
> On Mon, Jul 18, 2016 at 12:20 AM, Adrian Turjak 
> wrote:
>
>> Hello,
>>
>> I've been looking at options for doing multi-factor auth (MFA) on our
>> infrastructure and I'm just wanting to know if the option I've decided
>> to go with seems sensible.
>>
>> As context, we are running stock Keystone (to be backed by LDAP), we
>> wanted to be able to enable MFA on a per user basis, and a user with MFA
>> enabled should either be blocked from using the APIs or required MFA to
>> use the APIs.
>>
>>
> We discussed adding MFA support in Newton at the Austin summit and didn't
> come to a happy conclusion (aside from "let federated users have MFA since
> it's built in"). Part of the trouble was deciding where to enforce MFA.
> Should a user *always* require MFA just because he has a TOTP credential?
> What if the user has multiple projects, and wants MFA to access projectA,
> but not projectB.
>
>
>> I was looking at the current TOTP module in keystone, but seeing as that
>> simply adds another optional Auth method to keystone it seems fairly
>> useless for our needs. Unless I'm missing something, there seems to be
>> no way in Keystone to enforce "use these two auth methods together". Is
>> that the case? If not, it is something that has been considered? Or it
>> is assumed people will write their own auth plugins rather than
>> combining existing ones?
>>
>
> It was definitely not assumed that folks would write their own auth
> plugins. The TOTP auth plugin was meant to just be an alternative to
> password auth. We had hoped it would provide the building blocks for MFA,
> but see above comment.
>
> From there I went toward writing our own Keystone Auth plugin and had a
>> lot of success with that. The current iteration is a combination of the
>> password and totp plugins where for users with TOTP credentials we
>> expect a 6 digit password appended to the password. In the config I then
>> replace the default password plugin with my own.
>>
>
> I assume appending the TOTP password to the regular password is gonna be a
> big 'nope' from UX folks :)
> Though points for being clever and avoiding the whole negotiate/challenge
> the user for extra information.
>
>

For what it's worth, this is typically how TOTP works with federated users:
you have a fixed, known number of digits (6) appended to a known password.
For example, when I auth, I have my "password" and then the TOTP token
concatenated. The server then splits the passed-in value and handles
authentication. I've seen it done both with a fixed-length "pin/password"
and with an "up to XXX characters" password.

I don't know why the UX folks would have issues with this, since it is a
common and relatively painless option. The hard part still is "does this
user always need 2FA or only for certain projects?" As to that, my
inclination is the user must always use 2FA if it is enabled for the user;
some projects may require a user to have 2FA. So in short, if you use a
project that requires 2FA, the user will always use 2FA. Alternatively, the
user could disable 2FA but would be restricted from using the projects that
require it.
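As a rough sketch of the splitting described above (illustrative only;
the names and error handling are assumptions, not the actual Keystone
plugin code):

```python
TOTP_DIGITS = 6  # fixed, known number of digits

def split_credential(combined):
    # Split a concatenated "password + TOTP code" credential into its
    # two parts; the last six characters are treated as the passcode.
    if len(combined) <= TOTP_DIGITS:
        raise ValueError("credential too short to contain a TOTP code")
    password, code = combined[:-TOTP_DIGITS], combined[-TOTP_DIGITS:]
    if not code.isdigit():
        raise ValueError("trailing characters are not a numeric code")
    return password, code
```

The fixed length is what makes the scheme painless: the server needs no
extra negotiation step to separate the two factors.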

Since we aren't guaranteed a web interface, the really cool things like
U2F won't always work (U2F is more of the challenge/response style).


>
>> In testing this seems to work as intended. All normal users are
>> unaffected while users with a TOTP credential now must append their
>> passcode to their password.
>>
>> I've made a blueprint for this plugin:
>> https://blueprints.launchpad.net/keystone/+spec/password-totp-plugin
>>
>> and the code I am currently testing is in the associated review:
>> https://review.openstack.org/#/c/343422/
>>
>> If this plugin is useful to others, and this seemed like a sensible
>> solution, I will write some unit tests and work on getting it merged.
>>
>>
>> So, my main question, does this plugin seem like a sensible solution to
>> MFA in OpenStack in the way we needed or are there other paths I should
>> be going down?
>>
>> Cheers,
>> -Adrian Turjak
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [infra][gear] Making Gear easier to consume ( less .encode() and .decode() )

2016-06-20 Thread Morgan Fainberg
As I have been converting Zuul and NodePool to python3, I have had to make
a bunch of changes around encode() and decode() of strings, since gear is
(properly) an implementation of a protocol that requires binary data
(rather than text strings).

What this has highlighted is that gear should be made a bit friendlier to
use in the python world. We already explicitly assume a utf-8 encoding
when things are turned into "binary" while crafting the job object in
certain cases [1]. I have discussed this with Jim Blair, and we both agree
that the ability to still reference attributes such as "job.name" in a
simple, straightforward manner is important.

Here is the outline of the change I'm proposing:

* The main consumable part of gear (public classes) will convert the
"string" data we assign ( name[2], unique[3]) into utf-8-encoded bytes via
@property, @property.setter, and @property.getter for public consumption.

* Arguments are explicitly supposed to be a binary_blob [4]. I am unsure if
this should also be automatically converted *or* if it should be the
inverse, .arguments and .arguments_string ?

* Internally gear will reference the encoded bits via the underlying
_binary form, which will allow direct access to a non-"string" form
of the data itself in cases where there are interesting things that need
to be handled via binary packing (for example) instead of "stringified"
versions.

* For compatibility the main @property.setter will handle either
binary_type or string_type (so we don't break everyone).

* The "_binary" will enforce that the data will be a binary_type
only.
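As a minimal sketch of the @property approach outlined above (the class
and attribute names are illustrative, not gear's actual code):

```python
class Job(object):
    # Sketch: text assigned to ``name`` is stored utf-8-encoded; the
    # protocol layer reads the enforced-binary ``_name_binary`` form.

    def __init__(self, name):
        self.name = name  # setter accepts str or bytes

    @property
    def name(self):
        # Public access always sees the utf-8-encoded bytes.
        return self._name_binary

    @name.setter
    def name(self, value):
        # Compatibility: accept either binary_type or string_type.
        if isinstance(value, str):
            value = value.encode("utf-8")
        if not isinstance(value, bytes):
            raise TypeError("name must be str or bytes")
        self._name_binary = value
```

Callers keep writing ``job.name`` either way; only the internal storage
is guaranteed to be bytes.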


I think this can be done in a single release of gear with minimal impact on
those using it. For what it is worth, it is unlikely that anyone has used
gear extensively in python3 as of yet because of recent bug fixes that
addressed py2->py3 compat issues around dict.values() and similar list() ->
iter() changes.

See the one question in the above proposal for "arguments".

[1]
https://github.com/openstack-infra/gear/blob/59d29104cb9c370e49b44f313e568bd43b172e1b/gear/__init__.py#L86
[2]
https://github.com/openstack-infra/gear/blob/59d29104cb9c370e49b44f313e568bd43b172e1b/gear/__init__.py#L2054
[3]
https://github.com/openstack-infra/gear/blob/59d29104cb9c370e49b44f313e568bd43b172e1b/gear/__init__.py#L2058
[4]
https://github.com/openstack-infra/gear/blob/59d29104cb9c370e49b44f313e568bd43b172e1b/gear/__init__.py#L2056

Thanks,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-19 Thread Morgan Fainberg
On Sun, Jun 19, 2016 at 6:51 PM, Adam Young  wrote:

> On 06/16/2016 02:19 AM, Jamie Lennox wrote:
>
> Thanks everyone for your input.
>
> I generally agree that there is something that doesn't quite feel right
> about purely trusting this information to be passed from service to
> service, this is why i was keen for outside input and I have been
> rethinking the approach.
>
>
> They really feel like a variation on Trust tokens.
>
> From the service perspective, they are tokens, just not the one the user
> originally requested.
>
> The "reservation" as I see it is an implicit trust created by the user
> requesting the operation on the initial service.
>
> When the service validates the token, it can get back the,  lets call it a
> "reserved token" in keeping with the term reservation above.  That token
> will have a longer life span than the one the user originally requested,
> but (likely) fewer roles.
>
> When nova calls glance, and then glance calls Swift, we can again
> transition to different reserved tokens if needs be.
>
>
>
I would really, really, really prefer not to build in the need to
"transition" between "reserved" tokens when jumping between services. This
won't be "impossible", but I really want to start from the simpler
proposal; transitioning adds a lot of moving parts.

The big difference here is that trusts are explicit and, as currently
implemented, have a LOT of overhead (and frankly would be clunky for this);
this is closer to an evolved version of the composite tokens we talked
about in Paris.

--Morgan



>
>
>
> To this end i've proposed reservations (a name that doesn't feel right):
> https://review.openstack.org/#/c/330329/
>
> At a gut feeling level i'm much happier with the concept. I think it will
> allow us to handle the distinction between user->service and
> service->service communication much better and has the added bonus of
> potentially opening up some policy options in future.
>
> Please let me know of any concerns/thoughts on the new approach.
>
> Once again i've only written the proposal part of the spec as there will
> be a lot of details to figure out if we go forward. It is also fairly rough
> but it should convey the point.
>
>
> Thanks
>
> Jamie
>
> On 3 June 2016 at 03:06, Shawn McKinney  wrote:
>
>>
>> > On Jun 2, 2016, at 10:58 AM, Adam Young < 
>> ayo...@redhat.com> wrote:
>> >
>> > Any senseible RBAC setup would support this, but we are not using a
>> sensible one, we are using a hand rolled one. Replacing everything with
>> Fortress implies a complete rewrite of what we do now.  Nuke it from orbit
>> type stuff.
>> >
>> > What I would rather focus on is the splitting of the current policy
>> into two parts:
>> >
>> > 1. Scope check done in code
>> > 2. Role check done in middleware
>> >
>> > Role check should be donebased on URL, not on the policy key like
>> identity:create_user
>> >
>> >
>> > Then, yes, a Fortress style query could be done, or it could be done by
>> asking the service itself.
>>
>> Mostly in agreement.  I prefer to focus on the model (RBAC) rather than a
>> specific impl like Fortress. That is to say support the model and allow the
>> impl to remain pluggable.  That way you enable many vendors to participate
>> in your ecosystem and more important, one isn’t tied to a specific backend
>> (ldapv3, sql, …)
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Morgan Fainberg
On Wed, Jun 15, 2016 at 11:54 PM, Ken'ichi Ohmichi 
wrote:

> This discussion was expected when we implemented the Tempest patch,
> then I sent a mail to defcore comittee[1]
> As the above ml, "A DefCore Guideline typically covers three OpenStack
> releases".
> That means the latest guideline needs to cover Mitaka, Liberty and Kilo,
> right?
>
> In the Kilo development, we(nova team) have already considered
> additional properties are not good for the interoperability.
> And the stable_api.rst of [2] which is contained in Kilo says we need
> to implement new features without extensions.
> However, there are Kilo+ clouds which are extended with vendors' own
> extensions, right?
>
> My concern of allowing additional properties on interoperability tests is
> that
>  - users can move from pure OpenStack clouds to non-pure OpenStack
> clouds which implement vender specific properties
>  - but users cannot move from non-pure OpenStack clouds if users
> depend on the properties
> even if these clouds are certificated on the same interoperability tests.
>
>
The end goal is 100% to get everyone consistent with no "extra" data being
passed out of the APIs and certified on the same tests.

However, right now we have an issue where vendors/operators are lagging on
getting this cleaned up. Since this is the first round of certifications
(among other things), the proposal is to support/manage this in a way that
gives a bit more of a grace period while the deployers/operators finish
moving away from custom properties (as I understand it, the ones affected
have communicated that they are working on meeting this goal; Chris, please
correct me if I am wrong).

Your concerns are spot on, and at the end of this "greylist" window (at
the "2017.01" DefCore guideline), the grace period will expire and
everyone will be expected to be compatible without the "extra" data. Part
of doing these programs is working to refine the process (and sometimes
make exceptions in the early stages) until the workflow is established and
understood. It is not expected that we will continue or extend the period
beyond the firm end point Chris highlighted. I would not support this
proposal if it was open ended.
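For context, the strict checking under discussion is, in spirit,
JSON-schema validation with additionalProperties set to False. A
simplified stdlib-only sketch (not Tempest's actual implementation):

```python
def strict_response_check(response, allowed_keys, required_keys):
    # Strict response checking in the spirit of Tempest's
    # additionalProperties=False schemas: all required keys must be
    # present and no unexpected (e.g. vendor-specific) key may appear.
    missing = set(required_keys) - set(response)
    extra = set(response) - set(allowed_keys)
    return not missing and not extra
```

A response carrying a vendor-prefixed extra attribute fails this check
even though the standard fields are all present, which is exactly the
situation the grace period is meant to cover.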

Cheers,
--Morgan


> Thanks
> Ken Ohmichi
>
> ---
> [1]:
> http://lists.openstack.org/pipermail/defcore-committee/2015-June/000849.html
> [2]: https://review.openstack.org/#/c/162912
>
> 2016-06-14 16:37 GMT-07:00 Chris Hoge :
> > Top posting one note and direct comments inline, I’m proposing
> > this as a member of the DefCore working group, but this
> > proposal itself has not been accepted as the forward course of
> > action by the working group. These are my own views as the
> > administrator of the program and not that of the working group
> > itself, which may independently reject the idea outside of the
> > response from the upstream devs.
> >
> > I posted a link to this thread to the DefCore mailing list to make
> > that working group aware of the outstanding issues.
> >
> > On Jun 14, 2016, at 3:50 PM, Matthew Treinish 
> wrote:
> >
> > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> >
> > Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> >
> > On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> >
> > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> >
> > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> >
> > Last year, in response to Nova micro-versioning and extension updates[1],
> > the QA team added strict API schema checking to Tempest to ensure that
> > no additional properties were added to Nova API responses[2][3]. In the
> > last year, at least three vendors participating the the OpenStack Powered
> > Trademark program have been impacted by this change, two of which
> > reported this to the DefCore Working Group mailing list earlier this
> > year[4].
> >
> > The DefCore Working Group determines guidelines for the OpenStack Powered
> > program, which includes capabilities with associated functional tests
> > from Tempest that must be passed, and designated sections with associated
> > upstream code [5][6]. In determining these guidelines, the working group
> > attempts to balance the future direction of development with lagging
> > indicators of deployments and user adoption.
> >
> > After a tremendous amount of consideration, I believe that the DefCore
> > Working Group needs to implement a temporary waiver for the strict API
> > checking requirements that were introduced last year, to give downstream
> > deployers more time to catch up with the strict micro-versioning
> > requirements determined by the Nova/Compute team and enforced by the
> > Tempest/QA team.
> >
> >
> > I'm very much opposed to this being done. If we're actually concerned
> with
> > interoperability and verify that things behave in the same manner between
> > multiple
> > clouds then doing 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Morgan Fainberg
On Jun 14, 2016 14:42, "Doug Hellmann"  wrote:
>
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> > On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > > Last year, in response to Nova micro-versioning and extension
updates[1],
> > > > > the QA team added strict API schema checking to Tempest to ensure
that
> > > > > no additional properties were added to Nova API responses[2][3].
In the
> > > > > last year, at least three vendors participating the the OpenStack
Powered
> > > > > Trademark program have been impacted by this change, two of which
> > > > > reported this to the DefCore Working Group mailing list earlier
this year[4].
> > > > >
> > > > > The DefCore Working Group determines guidelines for the OpenStack
Powered
> > > > > program, which includes capabilities with associated functional
tests
> > > > > from Tempest that must be passed, and designated sections with
associated
> > > > > upstream code [5][6]. In determining these guidelines, the
working group
> > > > > attempts to balance the future direction of development with
lagging
> > > > > indicators of deployments and user adoption.
> > > > >
> > > > > After a tremendous amount of consideration, I believe that the
DefCore
> > > > > Working Group needs to implement a temporary waiver for the
strict API
> > > > > checking requirements that were introduced last year, to give
downstream
> > > > > deployers more time to catch up with the strict micro-versioning
> > > > > requirements determined by the Nova/Compute team and enforced by
the
> > > > > Tempest/QA team.
> > > >
> > > > I'm very much opposed to this being done. If we're actually
concerned with
> > > > interoperability and verify that things behave in the same manner
between multiple
> > > > clouds then doing this would be a big step backwards. The
fundamental disconnect
> > > > here is that the vendors who have implemented out of band
extensions or were
> > > > taking advantage of previously available places to inject extra
attributes
> > > > believe that doing so means they're interoperable, which is quite
far from
> > > > reality. **The API is not a place for vendor differentiation.**
> > >
> > > This is a temporary measure to address the fact that a large number
> > > of existing tests changed their behavior, rather than having new
> > > tests added to enforce this new requirement. The result is deployments
> > > that previously passed these tests may no longer pass, and in fact
> > > we have several cases where that's true with deployers who are
> > > trying to maintain their own standard of backwards-compatibility
> > > for their end users.
> >
> > That's not what happened though. The API hasn't changed and the tests
haven't
> > really changed either. We made our enforcement on Nova's APIs a bit
stricter to
> > ensure nothing unexpected appeared. For the most these tests work on
any version
> > of OpenStack. (we only test it in the gate on supported stable
releases, but I
> > don't expect things to have drastically shifted on older releases) It
also
> > doesn't matter which version of the API you run, v2.0 or v2.1.
Literally, the
> > only case it ever fails is when you run something extra, not from the
community,
> > either as an extension (which themselves are going away [1]) or another
service
> > that wraps nova or imitates nova. I'm personally not comfortable saying
those
> > extras are ever part of the OpenStack APIs.
> >
> > > We have basically three options.
> > >
> > > 1. Tell deployers who are trying to do the right for their immediate
> > >users that they can't use the trademark.
> > >
> > > 2. Flag the related tests or remove them from the DefCore enforcement
> > >suite entirely.
> > >
> > > 3. Be flexible about giving consumers of Tempest time to meet the
> > >new requirement by providing a way to disable the checks.
> > >
> > > Option 1 goes against our own backwards compatibility policies.
> >
> > I don't think backwards compatibility policies really apply to what
what define
> > as the set of tests that as a community we are saying a vendor has to
pass to
> > say they're OpenStack. From my perspective as a community we either
take a hard
> > stance on this and say to be considered an interoperable cloud (and to
get the
> > trademark) you have to actually have an interoperable product. We
slowly ratchet
> > up the requirements every 6 months, there isn't any implied backwards
> > compatibility in doing that. You passed in the past but not in the
newer stricter
> > guidelines.
> >
> > Also, even if I did think it applied, we're not talking about a change
which
> > would fall into breaking that. The change was introduced a year and
half ago
> > during kilo and landed a year ago during liberty:
> >
> > https://review.openstack.org/#/c/156130/
> >
> 

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-14 Thread Morgan Fainberg
On Jun 14, 2016 00:46, "Henry Nash" <henryna...@mac.com> wrote:

>
> On 14 Jun 2016, at 07:34, Morgan Fainberg <morgan.fainb...@gmail.com>
> wrote:
>
>
>
> On Mon, Jun 13, 2016 at 3:30 PM, Henry Nash <henryna...@mac.com> wrote:
>
>> So, I think it depends what level of compatibility we are aiming at. Let
>> me articulate them, and we can agree which we want:
>>
>> C1) In all version of the our APIs today (v2 and v3.0 to v3.6), you have
>> been able to issue an auth request which used project/tenant name as the
>> scoping directive (with v3 you need a domain component as well, but that’s
>> not relevant for this discussion). In these APIs, we absolutely expect that
>> if you could issue an auth request to. say project “test”, in, say, v3.X,
>> then you could absolutely issue the exact same command at V3.(X+1). This
>> has remained true, even when we introduced project hierarchies, i.e.: if I
>> create:
>>
>> /development/myproject/test
>>
>> ...then I can still scope directly to the test project by simply
>> specifying “test” as the project name (since, of course, all project names
>> must still be unique in the domain). We never want to break this for so
>> long as we formally support any APIs that once allowed this.
>>
>> C2) To aid you issuing an auth request scoped by project (either name or
>> id), we support a special API as part of the auth url (GET/auth/projects)
>> that lists the projects the caller *could* scope to (i.e. those they have
>> any kind of role on). You can take the “name” or “id” returned by this API
>> and plug it directly into the auth request. Again for any API we currently
>> support, we can’t break this.
>>
>> C3) The name attribute of a project is its node-name in the hierarchy. If
>> we decide to change this in a future API, we would not want a client using
>> the existing API to get surprised and suddenly receive a path instead of
>> the just the node-name (e.g. what if this was a UI of some type).
>>
>> Given all the above, there is no solution that can keep the above all
>> true and allow more than one project of the same name in, say, v3.7 of the
>> API. Even if we relaxed C2 and C3 - C1 can never be guaranteed to be still
>> supported. Neither of the original proposed solutions can address this
>> (since it is a data modelling problem, not an API problem).
>>
>> However, given that we will have, for the first time, the ability to
>> microversion the Identity API starting with 3.7, there are things we can do
>> to start us down this path. Let me re-articulate the options I am proposing:
>>
>> Option 1A) In v3.7 we add a ‘path_name' attribute to a project entity,
>> which is hence returned by any API that returns a project entity. The
>> ‘path_name' attribute will contain the full path name, including the
>> project itself. (Note that clients speaking 3.6 and earlier will not see
>> this new attribute). Further, for clients speaking 3.7 and later, we add
>> support to allow a ‘path_name' (as an alternative to ‘name' or ‘id') to be
>> used in auth scoping. We do not (yet) relax any uniqueness constraints, but
>> mark API 3.6 and earlier as deprecated, as well as using the ‘name’
>> attribute in the auth request. (we still support all these, we just mark
>> them as deprecated). At some time in the future (e.g. 3.8), we remove
>> support for using ‘name’ for auth, insisting on the use of ‘path_name’
>> instead. Sometime later (e.g. 3.10) we remove support for 3.8 and earlier.
>> Then and only then, do we relax the uniqueness constraint allowing projects
>> with duplicate node-names (but with different parents).
>>
>> Option 1B) The same as 1A, but we insist on path_name use in auth in v3.7
>> (i.e. no grace-period for still using just ’name', instead relying on the
>> fact that 3.6 clients will still work just fine). Then later (e.g. perhaps
>> v3.9), we remove support for v3.6 and before…and relax the uniqueness
>> constraint.
>>
>>
> Let's say the assumption that we are "removing" 3.6 should be stopped right
> now. I don't want to embark on "when we remove this" as an option or
> discuss how we remove previous versions. Please let's assume, for the sake
> of this conversation, that unless we have a major API version increase (API
> v4, not exposing v4 projects via the v3 API) this is unlikely to happen.
> Deprecated or not, planning the removal of currently supported API auth
> functionality is not on the table. In v3 we are not going to "relax" the
> uniqueness constraint in the foreseeable future. Just 

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-14 Thread Morgan Fainberg
> Option 2A) In v3.7 we return the full path name where the project entity ‘name’ attribute is
> returned. For auth, we allow a full path name to specified in the ‘name’
> scope attribute - but we also still support just a name without a path
> (which we can guarantee to honour, since, so far, we haven’t relaxed the
> uniqueness constraint - however we mark that support as deprecated). At
> some time in the future (e.g. 3.8), we remove support for using an
> un-pathed ‘name’ for auth. Sometime later (e..g. 3.10) we remove support
> for 3.8 and earlier. Then and only then, do we relax the uniqueness
> constraint allowing projects with duplicate node-names (but with different
> parents).
>
> Option 2B) The same as 2A, but we insist on using ‘name’ with a full path
> use in auth in v3.7 (i.e. no grace-period for still using an un-pathed
>  ’name', instead relying on the fact that 3.6 clients will still work just
> fine). Then later (e.g. perhaps v3.9), we remove support for v3.6 and
> before…and relax the uniqueness constraint.
>
> The downside for option 2A and 2B is that a client must do work to not be
> “surprised” by 3.7 (since the ‘name’ attribute of a project entity may not
> contain what they expect). For 1A no changes are required at all for a
> client to work with 3.7 and maintain the current experience, although
> changes ARE of course required to start moving away from using the
> non-pathed ‘name’ attribute, but that doesn’t have to be done straight
> away, we give them a grace cycle. For 1B, you need to make changes to
> support 3.7 (since a path is required for auth).
>
> As I have said before, my preference is Option 1, since I think it results
> in a more logical end position, and neither 1 or 2 get us there any more
> quickly. For option 1, I prefer the more gradual approach of 1A, just so
> that we give clients a grace period. Given we need multiple cycles to
> achieve any of the options, let’s decide the final conceptual model we want
> - and then move towards it. Option 1's conceptual model is that ‘name’
> remains the same and we add a separate ‘path’ attribute; Option 2’s
> redefines ‘name’ to always be a full path.
>
> Henry
>
>
>
I gave an alternative to this whole mess that would work and meet the
non-unique requirements for a specific project. I'll go over the only
viable option (see my comment above, the uniqueness constraint cannot go
away in the foreseeable future; not in v3.10, not in v3.11, etc).

* For all projects created in v3.7 or later, the "name" is explicitly the
full path. This keeps compatibility with v3.6 (you can auth, use the
project's "name" which is now the full path).

* All projects get a way to map the full path (a full_path attr, as you said
for option 2[A/B]). We can support authentication with *either* the full
path or the name; the advantage of full_path is that you never need to
supply domain identification for the "user friendly" name. Keep in mind, we
will also need to explore (or at least document) what occurs if a project
name is "changed": project names are mutable, and a rename would change the
path. Should project names become immutable?

All of this means that current auth workflows *and* new "full_path"
workflows play nicely and no compatibility is broken. We aren't re-defining
anything here as redefining things will break current workflows.
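A rough sketch of the dual-identifier idea described above — post-3.7 projects store name == full_path, while legacy projects keep their short name and gain a separate full_path attribute. All class and helper names here are invented for illustration; this is not keystone code:

```python
class Project:
    """Invented stand-in for a keystone project record."""
    def __init__(self, id, name, full_path, domain_id):
        self.id = id
        self.name = name            # legacy node name (== full_path post-3.7)
        self.full_path = full_path  # always the full hierarchical path
        self.domain_id = domain_id


def resolve_scope(projects, identifier, domain_id=None):
    """Accept either a full path (globally unique, so no domain qualifier is
    needed) or a legacy short name (unique only within a domain)."""
    for p in projects:
        if p.full_path == identifier:
            return p
    matches = [p for p in projects
               if p.name == identifier and p.domain_id == domain_id]
    return matches[0] if len(matches) == 1 else None


PROJECTS = [
    Project("p1", "dev", "/mydivision/projectA/dev", "d1"),  # pre-3.7 project
    Project("p2", "/mydivision/projectB/dev",
            "/mydivision/projectB/dev", "d1"),               # post-3.7 project
]
```

Note how the pre-3.7 project is reachable both ways (short name plus domain, or full path), while the post-3.7 project's name and path coincide — nothing in the old auth workflow breaks.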

In short, do not expect an api break/compat break is going to be possible
in v3 for older API versions.

--Morgan

> On 10 Jun 2016, at 18:48, Morgan Fainberg <morgan.fainb...@gmail.com>
> wrote:
>
>
>
> On Fri, Jun 10, 2016 at 6:37 AM, Henry Nash <henryna...@mac.com> wrote:
>
>> On further reflection, it seems to me that we can never simply enable
>> either of these approaches in a single release. Even a v4.0 version of the
>> API doesn’t help - since presumably a server supporting v4 would want to be
>> able to support v3.x for a significant time; and, as already discussed, as
>> soon as you allow multiple nodes to have the same name, you can no
>> longer guarantee to support the current API.
>>
>> Hence the only thing I think we can do (if we really do want to change
>> the current functionality) is to do this over several releases with a
>> typical deprecation cycle, e.g.
>>
>> 1) At release 3.7 we allow you to (optionally) specify path names for
>> auth….but make no changes to the uniqueness constraints. We also change the
>> GET /auth/projects to return a path name. However, you can still auth
>> exactly the way we do today (since there will always only be a single
>> project of a given node-name). If however, you do auth without a path (to a
>> project that isn’t a top level project), we log a warning to say this is
>> deprecated (2 cycle

[openstack-dev] [tc][pbr][packaging][all] Definition of Data Files (config) in setup.cfg

2016-06-10 Thread Morgan Fainberg
There has been a bit of back[1] and forth[2][3][4][5] between at least one
packaging group and a few folks who are trying to define data files
(config) in the setup.cfg to aid/ease installation within virtual
environments.

From what I can tell, there has been an issue with setuptools that makes
this a particularly sticky situation and difficult to address in PBR via an
option like --sysconfdir, due to setuptools and distutils disagreeing on the
meaning of "data-files". [6]

Before this turns into a nightmare of add-patches/revert/add/revert and
making things highly inconsistent within OpenStack, I'd like to get
feedback from the packaging teams on where this impacts them. I know that a
number of folks carry effectively this patch internally to make working
with VENVs easier.

We should absolutely address this in setuptools/distutils/etc but a clear
direction forward so that the projects under OpenStack remain consistent
for users, developers, and packagers would be good at this point.

I know that keystone has had a lot of work done to ensure we can work in
most "general" environments, but installing the data-files within a VENV is
much simpler than a bunch of special-casing to "find" the config files.

In short, I think OpenStack needs to define (even if only for a short period
of time) what we consider "data-files"; we can always revisit this when/if
we have a clear path forward via PBR, setuptools, distutils, etc.
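As a hedged illustration of the venv-friendly idea (the directory layout and helper names below are assumptions, not what keystone actually ships): config files declared as data_files typically land under the active prefix, so a short search covers both virtualenv and system installs:

```python
import os
import sys


def candidate_config_dirs(project="keystone"):
    """Directories to search for a project's config, most specific first.

    sys.prefix points into the venv when one is active; data_files declared
    as etc/<project> in setup.cfg end up under that prefix on install.
    """
    return [
        os.path.join(sys.prefix, "etc", project),
        os.path.join("/etc", project),
    ]


def find_config(filename, project="keystone"):
    """Return the first existing copy of `filename`, or None."""
    for d in candidate_config_dirs(project):
        path = os.path.join(d, filename)
        if os.path.exists(path):
            return path
    return None
```

The point of the sketch is that with data_files installed under the prefix, the "search" collapses to two well-known locations instead of per-project special-casing.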

[1] https://review.openstack.org/#/c/322086/
[2] https://review.openstack.org/#/c/326152/
[3] http://git.openstack.org/cgit/openstack/neutron/tree/setup.cfg#n24
[4] http://git.openstack.org/cgit/openstack/gnocchi/tree/setup.cfg#n87
[5] http://git.openstack.org/cgit/openstack/aodh/tree/setup.cfg#n30
[6] https://review.openstack.org/#/c/274077/

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-10 Thread Morgan Fainberg
ng in a central location, and we're
talking something amazingly useful and easy for the reviewers and
developers to consume.

[1]
http://logs.openstack.org/29/312929/16/check/gate-rally-dsvm-cinder/b7bab27/rally-plot/results.html.gz#/Authenticate.validate_cinder/overview

Just my $0.02 on where we stand here. I feel like I've now contributed to a
large derailing of this whole topic and will not be coming back to discuss
OSProfiler or Rally in the context of the past here.

--Morgan


Best regards,
> Boris Pavlovic
>
>>
>>
>>
>>
>>
> On Fri, Jun 10, 2016 at 3:58 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>>
>>
>> On Fri, Jun 10, 2016 at 3:26 PM, Lance Bragstad <lbrags...@gmail.com>
>> wrote:
>>
>>>
>>>1. I care about performance. I just believe that a big hurdle has
>>>been finding infrastructure that allows us to run performance tests in a
>>>consistent manner. Dedicated infrastructure plays a big role in this,
>>> which is hard (if not impossible) to obtain in the gate - making the 
>>> gate
>>>a suboptimal place for performance testing. Consistency is also an issue
>>>because the gate is comprised of resources donated from several different
>>>providers. Matt lays this out pretty well in his reply above. This sounds
>>>like a TODO to hook rally into the keystone-performance/ansible pipeline,
>>>then we would have rally and keystone running on bare metal.
>>>
>>> This was one of the BIGGEST reasons rally was not given much credence in
>> keystone. The wild variations made the rally data mostly noise. We can't
>> even tell if the data from similar nodes (same provider/same az) was
>> available. This made it a best guess effort of "is this an issue with a
>> node being slow, or the patch" at the time the gate was enabled. This is
>> also why I wouldn't support re-enabling rally as an in-infra gate/check
>> job. The data was extremely difficult to consume as a developer because I'd
>> have to either directly run rally here locally (fine, but why waste infra
>> resources then?) or try and correlate data from across different patches
>> and different AZ providers. It's great to see this being addressed here.
>>
>>>
>>>1.
>>>2. See response to #5.
>>>3. What were the changes made to keystone that caused rally to fail?
>>>If you have some links I'd be curious to revisit them and improve them 
>>> if I
>>>can.
>>>
>> When there were failures, the failures were not looked at by the
>> Rally team and were not due to performance issues at the time; rally was
>> not able to be set up/run at all.
>>
>>>
>>>1. Blocked because changes weren't reviewed? As far as I know
>>>OSProfiler is in keystone's default pipeline.
>>>
>>> OSProfiler etc had security concerns and issues that were basically left
>> in "review state" after being given clear "do X to have it approved". I
>> want to point out that once the performance team came back and addressed
>> the issues we landed support for OSProfiler, and it is in keystone. It is
>> not enabled by default (profiling should be opt in, and I stand by that),
>> but you are correct we landed it.
>>
>>>
>>>1. It doesn't look like there are any open patches for rally
>>>integration with keystone [0]. The closed ones have either been
>>>merged [1][2][3][4] or abandoned [5][6][7][8] because they are
>>>work-in-progress or unattended.
>>>
>>> I'm only looking for this bot to leave a comment. I don't intend on it
>>> being a voting job any time soon, it's just providing a datapoint for
>>> patches that we suspect to have an impact on performance. It's running on
>>> dedicated hardware, but only from a single service provider - so mileage
>>> may vary depending on where and how you run keystone. But, it does take us
>>> a step in the right direction. People don't have to use it if they don't
>>> want to.
>>>
>>>
>> I'm super happy to see a consistent report leaving data about
>> performance, specifically in a consistent environment that isn't going to
>> vary massively between runs (hopefully). Longterm I'd like to also see this
>> [if it isn't already] do a delta-over-time of keystone performance on
>> merged patches, so we can see the timeline of performance.
>>
>>
>>> Thanks for the feedback!
>>>
>>> [0]
>>> https://review.openstack.org/#/q/

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-10 Thread Morgan Fainberg
On Fri, Jun 10, 2016 at 3:26 PM, Lance Bragstad  wrote:

>
>1. I care about performance. I just believe that a big hurdle has been
>finding infrastructure that allows us to run performance tests in a
>consistent manner. Dedicated infrastructure plays a big role in this,
> which is hard (if not impossible) to obtain in the gate - making the gate
>a suboptimal place for performance testing. Consistency is also an issue
>because the gate is comprised of resources donated from several different
>providers. Matt lays this out pretty well in his reply above. This sounds
>like a TODO to hook rally into the keystone-performance/ansible pipeline,
>then we would have rally and keystone running on bare metal.
>
> This was one of the BIGGEST reasons rally was not given much credence in
keystone. The wild variations made the rally data mostly noise. We can't
even tell if the data from similar nodes (same provider/same az) was
available. This made it a best guess effort of "is this an issue with a
node being slow, or the patch" at the time the gate was enabled. This is
also why I wouldn't support re-enabling rally as an in-infra gate/check
job. The data was extremely difficult to consume as a developer because I'd
have to either directly run rally here locally (fine, but why waste infra
resources then?) or try and correlate data from across different patches
and different AZ providers. It's great to see this being addressed here.

>
>1.
>2. See response to #5.
>3. What were the changes made to keystone that caused rally to fail?
>If you have some links I'd be curious to revisit them and improve them if I
>can.
>
> When there were failures, the failures were not looked at by the
Rally team and were not due to performance issues at the time; rally was
not able to be set up/run at all.

>
>1. Blocked because changes weren't reviewed? As far as I know
>OSProfiler is in keystone's default pipeline.
>
> OSProfiler etc had security concerns and issues that were basically left
in "review state" after being given clear "do X to have it approved". I
want to point out that once the performance team came back and addressed
the issues we landed support for OSProfiler, and it is in keystone. It is
not enabled by default (profiling should be opt in, and I stand by that),
but you are correct we landed it.

>
>1. It doesn't look like there are any open patches for rally
>integration with keystone [0]. The closed ones have either been
>merged [1][2][3][4] or abandoned [5][6][7][8] because they are
>work-in-progress or unattended.
>
> I'm only looking for this bot to leave a comment. I don't intend on it
> being a voting job any time soon, it's just providing a datapoint for
> patches that we suspect to have an impact on performance. It's running on
> dedicated hardware, but only from a single service provider - so mileage
> may vary depending on where and how you run keystone. But, it does take us
> a step in the right direction. People don't have to use it if they don't
> want to.
>
>
I'm super happy to see a consistent report leaving data about performance,
specifically in a consistent environment that isn't going to vary massively
between runs (hopefully). Longterm I'd like to also see this [if it isn't
already] do a delta-over-time of keystone performance on merged patches, so
we can see the timeline of performance.
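The delta-over-time idea could look roughly like this — one latency summary per merged patch, reporting the change relative to the previous run so a regression shows up as a positive delta. All data structures here are invented; this is a sketch, not an implementation:

```python
def percentile(samples, pct):
    """Nearest-rank percentile over a list of latency samples."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))
    return s[idx]


def summarize(run):
    """run: list of per-request latencies (ms) for one merged patch."""
    return {"p50": percentile(run, 50), "p90": percentile(run, 90)}


def deltas(history):
    """history: list of (sha, samples) in merge order.

    Returns (sha, p90_change) pairs; positive means the patch got slower.
    """
    out = []
    prev = None
    for sha, samples in history:
        cur = summarize(samples)
        if prev is not None:
            out.append((sha, cur["p90"] - prev["p90"]))
        prev = cur
    return out


history = [
    ("abc123", [10, 11, 12, 13, 40]),
    ("def456", [10, 11, 12, 30, 80]),  # slower tail -> positive p90 delta
]
```

Run on dedicated hardware the noise problem described earlier goes away, and a bot could leave exactly these deltas as review comments.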


> Thanks for the feedback!
>
> [0]
> https://review.openstack.org/#/q/project:openstack/keystone+message:%22%255E%2540rally%2540%22
> [1] https://review.openstack.org/#/c/240251/
> [2] https://review.openstack.org/#/c/188457/
> [3] https://review.openstack.org/#/c/188352/
> [4] https://review.openstack.org/#/c/90405/
> [5] https://review.openstack.org/#/c/301367/
> [6] https://review.openstack.org/#/c/188479/
> [7] https://review.openstack.org/#/c/98836/
> [8] https://review.openstack.org/#/c/91677/
>
>
Great work lance!

--Morgan


> On Fri, Jun 10, 2016 at 4:26 PM, Boris Pavlovic  wrote:
>
>> Lance,
>>
>> It is amazing effort, I am wishing you good luck with Keystone team,
>> however i faced some issues when I started similar effort
>> about 3 years ago with Rally. Here are some points, that are going to be
>> very useful for you:
>>
>>1. I think that Keystone team doesn't care about performance &
>>scalability at all
>>2. Keystone team ignored/discard all help from Rally team to make
>>this effort successful
>>3. When Rally job started failing, because of introduced performance
>>issues in Keystone, they decided to remove job
>>4. They blocked almost forever work on OSProfiler so we are blind and
>>can't see where is the issue in code
>>5. They didn't help to develop any Rally plugin or even review the
>>Rally test cases that we proposed to them
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Mon, Jun 6, 2016 at 10:45 AM, Clint Byrum  wrote:

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-10 Thread Morgan Fainberg
On Fri, Jun 10, 2016 at 6:37 AM, Henry Nash <henryna...@mac.com> wrote:

> On further reflection, it seems to me that we can never simply enable
> either of these approaches in a single release. Even a v4.0 version of the
> API doesn’t help - since presumably a server supporting v4 would want to be
> able to support v3.x for a significant time; and, as already discussed, as
> soon as you allow multiple nodes to have the same name, you can no
> longer guarantee to support the current API.
>
> Hence the only thing I think we can do (if we really do want to change the
> current functionality) is to do this over several releases with a typical
> deprecation cycle, e.g.
>
> 1) At release 3.7 we allow you to (optionally) specify path names for
> auth….but make no changes to the uniqueness constraints. We also change the
> GET /auth/projects to return a path name. However, you can still auth
> exactly the way we do today (since there will always only be a single
> project of a given node-name). If however, you do auth without a path (to a
> project that isn’t a top level project), we log a warning to say this is
> deprecated (2 cycles, 4 cycles?)
> 2) If you connect with a 3.6 client, then you get the same as today for
> GET /auth/projects and cannot use a path name to auth.
> 3) At sometime in the future, we deprecate the “auth without a path”
> capability. We can debate as to whether this has to be a major release.
>
> If we take this gradual approach, I would be pushing for the “relax
> project name constraints” approach…since I believe this leads to a cleaner
> eventual solution (and there is no particular advantage with the
> hierarchical naming approach) - and (until the end of the deprecation)
> there is no break to the existing API.
>
> Henry
>
>
How do you handle relaxed project name constraints without completely
breaking 3.6 auth, regardless of a future microversion that requires the
full path? This is a major API change and will result in a very complex
matrix of project name mappings (old projects that can be accessed without
the full path, new ones that must always have the path).

Simply put, I do not see relaxing project name constraints as viable
without a major API change and projects that simply are unavailable for
scoping a token to under the base API version (pre-microversions) of 3.6.

I am certain that if all projects post API version 3.6 are created with the
full-path name only and that is how they are represented to the user for
auth, we get both things for free. Old projects pre-full-path would need
optional compatibility for deconstructing a full-path for them.  Basically
you end up with "path" and "name"; in old projects these differ, in new
projects they are the same. No conflicts, no breaking of "currently working
auth", no "major API version" needed.

--Morgan


> On 7 Jun 2016, at 09:47, Henry Nash <henryna...@mac.com> wrote:
>
> OK, so thanks for the feedback - understand the message.
>
> However, in terms of compatibility, the one thing that concerns me about
> the hierarchical naming approach is that even with microversioning, we might
> still surprise a client. An unmodified client (i.e. doesn’t understand 3.7)
> would still see a change in the data being returned (the project names have
> suddenly become full path names). We have to return this even if they don’t
> ask for 3.7, since otherwise there is no difference between this approach
> and relaxing the project naming in terms of trying to prevent auth
> breakages.
>
> In more detail:
>
> 1) Both approaches were planned to return the path name (instead of the
> node name) in GET /auth/projects - i.e. the API you are meant to use to
> find out what you can scope to
> 2) Both approaches were planned to accept the path name in the auth
> request block
> 3) The difference in hierarchical naming is that if I do a regular GET
> /project(s) I also see the full path name as the “project name”
>
> if we don’t do 3), then code that somehow authenticates, and then uses the
> regular GET /project(s) calls to find a project name and then re-scopes (or
> re-auths) to that name, will fail if the project they want is not a top
> level project. However, the flip side is that if there is code that uses
> these same calls to, say, display projects to the user (e.g. a home grown
> UI) - then it might get confused until it supports 3.7 (i.e. asking for the
> old microversion won’t help it) since all the names include the
> hierarchical path.
>
> Just want to make sure we understand the implications….
>
> Henry
>
> On 4 Jun 2016, at 08:34, Monty Taylor <mord...@inaugust.com> wrote:
>
> On 06/04/2016 01:53 AM, Morgan Fainberg wrote:
>
>
> On Jun 3, 2016 12:42, "Lan

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Morgan Fainberg
On Jun 3, 2016 12:42, "Lance Bragstad"  wrote:
>
>
>
> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash  wrote:
>>
>>
>>> On 3 Jun 2016, at 16:38, Lance Bragstad  wrote:
>>>
>>>
>>>
>>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  wrote:


> On 3 Jun 2016, at 01:22, Adam Young  wrote:
>
> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>
>> Hi
>>
>> As you know, I have been working on specs that change the way we
handle the uniqueness of project names in Newton. The goal of this is to
better support project hierarchies, which as they stand today are
restrictive in that all project names within a domain must be unique,
irrespective of where in the hierarchy that projects sits (unlike, say, the
unix directory structure where a node name only has to be unique within its
parent). Such a restriction is particularly problematic when enterprises
start modelling things like test, QA and production as branches of a
project hierarchy, e.g.:
>>
>> /mydivision/projectA/dev
>> /mydivision/projectA/QA
>> /mydivision/projectA/prod
>> /mydivision/projectB/dev
>> /mydivision/projectB/QA
>> /mydivision/projectB/prod
>>
>> Obviously the idea of a project name (née tenant) being unique has
been around since near the beginning of (OpenStack) time, so we must be
cautious. There are two alternative specs proposed:
>>
>> 1) Relax project name constraints:
https://review.openstack.org/#/c/310048/
>> 2) Hierarchical project naming:
https://review.openstack.org/#/c/318605/
>>
>> First, here’s what they have in common:
>>
>> a) They both solve the above problem
>> b) They both allow an authorization scope to use a path rather than
just a simple name, hence allowing you to address a project anywhere in the
hierarchy
>> c) Neither have any impact if you are NOT using a hierarchy - i.e.
if you just have a flat layer of projects in a domain, then they have no
API or semantic impact (since both ensure that a project’s name must still
be unique within a parent)
>>
>> Here’s how the differ:
>>
>> - Relax project name constraints (1), keeps the meaning of the
‘name’ attribute of a project to be its node-name in the hierarchy, but
formally relaxes the uniqueness constraint to say that it only has to be
unique within its parent. In other words, let’s really model this a bit
like a unix directory tree.
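The unix-directory analogy above can be sketched as a toy sibling-uniqueness check — a project name need only be unique among children of the same parent. This is illustrative only, not keystone code, and the flat in-memory tree (with parent_id holding a name for brevity) is invented:

```python
def name_is_available(projects, parent_id, name):
    """True if no sibling under `parent_id` already uses `name`.

    projects: list of dicts with 'parent_id' and 'name' keys; parent_id of
    None denotes a top-level project.
    """
    return not any(p["parent_id"] == parent_id and p["name"] == name
                   for p in projects)


tree = [
    {"parent_id": None, "name": "projectA"},
    {"parent_id": None, "name": "projectB"},
    {"parent_id": "projectA", "name": "dev"},
]

# "dev" is taken under projectA but still free under projectB, just as a
# file name is only reserved within its own directory.
```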
>>>
>>> I think I lean towards relaxing the project name constraint. The reason
is because we already expose `domain_id`, `parent_id`, and `name` of a
project. By relaxing the constraint we can give the user everything the
need to know about a project without really changing any of these. When
using 3.7, you know what domain your project is in, you know the identifier
of the parent project, and you know that your project name is unique within
the parent.
>>
>> - Hierarchical project naming (2), formally changes the meaning of
the ‘name’ attribute to include the path to the node as well as the node
name, and hence ensures that the (new) value of the name attribute remains
unique.
>>>
>>> Do we intend to *store* the full path as the name, or just build it out
on demand? If we do store the full path, we will have to think about our
current data model since the depth of the organization or domain would be
limited by the max possible name length. Will performance be a concern if we
build the full path on every request?
>>
>> I now mention this issue in the spec for hierarchical project naming
(the relax naming approach does not suffer this issue). As you say, we’ll
have to change the DB (today it is only 64 chars) if we do store the full
path. This is slightly problematic since the maximum depth of hierarchy is
controlled by a config option, and hence could be changed. We will
absolutely have to be able to build the path on-the-fly in order to support
legacy drivers (who won’t be able to store more than 64 chars). We may need
to do some performance tests to ascertain if we can get away with building
the path on-the-fly in all cases and avoid changing the table.  One
additional point is that, of course, the API will return this path whenever
it returns a project - so clients will need to be aware of this increase in
size.
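A minimal sketch of "building the path on-the-fly" by walking the parent_id chain, with an invented in-memory table standing in for the project store, plus a check against the 64-char legacy name column mentioned above:

```python
LEGACY_NAME_LIMIT = 64  # today's name column width, per the discussion

# Invented stand-in for the project table: id -> {name, parent_id}.
projects = {
    "a": {"name": "mydivision", "parent_id": None},
    "b": {"name": "projectA", "parent_id": "a"},
    "c": {"name": "dev", "parent_id": "b"},
}


def full_path(project_id):
    """Join node names from the root down, unix-style."""
    parts = []
    pid = project_id
    while pid is not None:
        node = projects[pid]
        parts.append(node["name"])
        pid = node["parent_id"]
    return "/" + "/".join(reversed(parts))


def fits_legacy_column(project_id):
    """Would the computed path still fit in a 64-char name column?"""
    return len(full_path(project_id)) <= LEGACY_NAME_LIMIT
```

The walk costs one lookup per ancestor, which is why a depth cap (the config option mentioned above) also bounds the per-request cost of computing paths.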
>
>
> These are all good points that continue to push me towards relaxing the
project naming constraint :)
>>
>>
>> Whichever approach we choose would only be included in a new
microversion (3.7) of the Identity API; although some relevant APIs can
remain unaffected for a client talking 3.6 to a Newton server, not all can
be. As pointed out by jamielennox, this is a data modelling problem - if a
Newton server has created multiple projects called “dev” in the hierarchy,
a 3.6 client trying to scope a token simply to “dev” cannot be answered
correctly (and it is proposed we would have to return an 

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Morgan Fainberg
On Jun 3, 2016 13:16, "Brant Knudson"  wrote:
>
>
>
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad 
wrote:
>>
>> Hey all,
>>
>> I have been curious about impact of providing performance feedback as
part of the review process. From what I understand, keystone used to have a
performance job that would run against proposed patches (I've only heard
about it so someone else will have to keep me honest about its timeframe),
but it sounds like it wasn't valued.
>>
>
> We had a job running rally for a year (I think) that nobody ever looked
at so we decided it was a waste and stopped running it.
>
>>
>> I think revisiting this topic is valuable, but it raises a series of
questions.
>>
>> Initially it probably only makes sense to test a reasonable set of
defaults. What do we want these defaults to be? Should they be determined
by DevStack, openstack-ansible, or something else?
>>
>
> A performance test is going to depend on the environment (the machines,
disks, network, etc), the existing data (tokens, revocations, users, etc.),
and the config (fernet, uuid, caching, etc.). If these aren't consistent
between runs then the results are not going to be usable. (This is the
problem with running rally on infra hardware.) If the data isn't realistic
(1000s of tokens, etc.) then the results are going to be at best not useful
or at worst misleading.
>
>> What does the performance test criteria look like and where does it
live? Does it just consist of running tempest?
>>
>
> I don't think tempest is going to give us the numbers we're looking for,
performance-wise. I've seen a few scripts and have my own for testing
performance of token validation, token creation, user creation, etc. which
I think will do the exact tests we want and we can get the results
formatted however we like.
>
>> From a contributor and reviewer perspective, it would be nice to have
the ability to compare performance results across patch sets. I understand
that keeping all performance results for every patch for an extended period
of time is unrealistic. Maybe we take a daily performance snapshot against
master and use that to map performance patterns over time?
>>
>
> Where are you planning to store the results?
>
>>
>> Have any other projects implemented a similar workflow?
>>
>> I'm open to suggestions and discussions because I can't imagine there
aren't other folks out there interested in this type of pre-merge data
points.
>>
>>
>> Thanks!
>>
>> Lance
>>
>
> Since the performance numbers are going to be very dependent on the
machines I think the only way this is going to work is if somebody's
willing to set up dedicated hardware to run the tests on. If you're doing
that then set it up to mimic how you deploy keystone, deploy the patch
under test, run the performance tests, and report the results. I'd be fine
with something like this commenting on keystone changes. None of this has
to involve openstack infra. Gerrit has a REST API to get the current
patches.
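For anyone wiring up such a bot, a minimal sketch of consuming Gerrit's REST output: Gerrit prefixes its JSON responses with the magic `)]}'` line to defeat XSSI, so a poller has to strip it before parsing. The sample payload below is made up; fetching it over HTTP is left out of the sketch:

```python
import json

# Gerrit prepends this to every JSON REST response to prevent XSSI.
XSSI_PREFIX = ")]}'"


def parse_gerrit_response(body):
    """Strip Gerrit's XSSI prefix and decode the JSON payload."""
    if body.startswith(XSSI_PREFIX):
        body = body[len(XSSI_PREFIX):]
    return json.loads(body)


# Invented example of what GET /changes/?q=... might return.
sample = ")]}'\n[{\"_number\": 312929, \"subject\": \"Tune token cache\"}]"
changes = parse_gerrit_response(sample)
```

A standalone poller could then check out each change, run the performance tests on the dedicated hardware, and post the results back as a review comment — no infra involvement required, as Brant notes.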
>
> Everyone that's got performance requirements should do the same. Maybe I
can get the group I'm in to try it sometime.
>
> - Brant
>
>
>

You have outlined everything I was asking for from rally as a useful metric,
but simply getting the resources was a problem.

Unfortunately I have not seen anyone willing to offer these dedicated
resources and/or reporting the delta over time or per patchset.

There is only so much we can do without consistent, reliably identical test
environments.

I would be very happy to see this type of testing consistently reported
especially if it mimics real workloads as well as synthetic like rally/what
Matt and Dolph use.

--Morgan


Re: [openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-26 Thread Morgan Fainberg
On Thu, May 26, 2016 at 7:55 AM, Adam Young  wrote:

> Some mix of these three tests is almost always failing:
>
> gate-keystone-dsvm-functional-nv FAILURE in 20m 04s (non-voting)
> gate-keystone-dsvm-functional-v3-only-nv FAILURE in 32m 45s (non-voting)
> gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in 1h 07m 53s (non-voting)
>
>
> Are we going to keep them running and failing, or boot them?  If we are
> going to keep them, who is going to commit to fixing them?
>
> We should not live with broken windows.
>
>
>
The uwsgi check should be moved to a proper run utilizing mod_proxy_uwsgi.
The v3-only one is a WIP that a few folks are working on.
The functional-nv one was passing somewhere; I think that one is close.


Re: [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-25 Thread Morgan Fainberg
On Wed, May 25, 2016 at 2:48 AM, Erno Kuvaja  wrote:

> On Tue, May 24, 2016 at 8:58 PM, John Garbutt 
> wrote:
>
>> On 24 May 2016 at 19:03, Ian Cordasco  wrote:
>> > -Original Message-
>> > From: Erno Kuvaja 
>> > Reply: OpenStack Development Mailing List (not for usage questions)
>> > 
>> > Date: May 24, 2016 at 06:06:14
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > 
>> > Subject:  [openstack-dev] [all][oslo_config] Improving Config Option
>> Help Texts
>> >
>> >> Hi all,
>> >>
>> >> Based on the not-yet-merged spec for categorized config options [0],
>> >> some projects seem to have started improving the config option help
>> >> texts. This is great, but I noticed a scary trend of clutter being
>> >> added in these sections. Looking at individual changes, it does not
>> >> look bad at all in the code: 20 lines of well-structured templating.
>> >> Until you start comparing it to the example config files. Lots of this
>> >> data is redundant with what is already generated into the example
>> >> configs, and then the maths struck me.
>> >>
>> >> In Glance alone we have ~120 config options (this does not include
>> >> glance_store nor any other dependencies we pull in for our configs,
>> >> like Keystone auth). Those +20 lines of templating just became over
>> >> 2000 lines of clutter in the example configs, and if all projects do
>> >> that we multiply the issue. I think no one with good intentions can
>> >> say that it's beneficial for our deployers and admins, who are already
>> >> struggling with the configs.
>> >>
>> >> So I beg you, when you make these changes to the config option help
>> >> fields, keep them short and compact. We have the Configuration Docs
>> >> for extended descriptions and cutely formatted repetitive fields, but
>> >> let's keep those out of the generated (example) config files. At least
>> >> I would like to be able to fit more than 3 options on the screen at a
>> >> time when reading configs.
>> >>
>> >> [0] https://review.openstack.org/#/c/295543/
>> >
>> > Hey Erno,
>> >
>> > So here's where I have to very strongly disagree with you. That spec
>> > was caused by operator feedback, specifically for projects that
>> > provide multiple services that may or may not have separate config
>> > files, and which already have "short and compact" descriptions
>> > that are not very helpful to operators.
>>
>> +1
>>
>> The feedback at operator sessions in Manchester and Austin seemed to
>> back up the need for better descriptions.
>>
>>
> I'm all for _better_ descriptions.
>
>
>> More precisely, Operators should not need to read the code to
>> understand how to use the configuration option.
>>
>> Now often that means they are longer. But they shouldn't be too long.
>>
>>
> Let me give an example of what I see as clutter with the newly proposed
> help texts:
>
> Glance config files are split per service. So we have files
> glance-api.conf, glance-registry.conf, glance-scrubber.conf etc.
> We should not need to add 300 lines (once for each option) to
> glance-api.conf containing repetitive:
> """
>
> Services which consume this:
> * ``glance-api``
> """
> As it's glance-api.conf this _should_ be self-explanatory. This is getting
> worse for certain options we have in multiple config files that will have:
> """
>
> Services which consume this:
> * ``glance-api`` (mandatory for v1; optional for v2)
> * ``image scrubber`` (a periodic task)
> * ``cache prefetcher`` (a periodic task)
> """
> Which is kind of correct, but as all three of these services have their own
> configs, changing it in one does not necessarily affect the rest
> (glance-api.conf is exception here if it is available and -scrubber and/or
> -cache configs are not). So now adding these lines to glance-scrubber.conf
> gives impression that glance-api consumes it from there, which is false.
>
> Will all options in [keystone_authtoken] have a list of every single
> OpenStack service consuming it? I certainly hope not.
>
> The next part of the clutter is again adding ~300 extra redundant lines:
> """
>
> Possible values:
> * A valid port number
> """
> This is a specific example for PortOpt. Currently the config generator
> provides the following from a single-line help text:
> """
> # Port the registry server is listening on. (port value)
> # Minimum value: 0
> # Maximum value: 65535
> #registry_port = 9191
> """
> Does "Possible values:\n A valid port number" add value to that help in
> any way? I've seen the same with IntOpt, where the config generator adds
> "(integer value)" and we add "Possible values:\n Valid Integer".
>
> > The *example* config files will have a lot more detail in them. Last I
>> > saw (I've stopped driving that specification) there was going to be a
>> > way to generate config files 
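The redundancy Erno describes is easy to see with a toy model of the sample-config generator. To be clear, the real tool is oslo-config-generator and the real option type is oslo.config's `cfg.PortOpt`; the `PortOpt` dataclass and `render()` function below are illustrative stand-ins, not the actual oslo.config API.

```python
# Toy model (NOT the real oslo.config API) of how the sample-config
# generator already derives the "(port value)" annotation and the
# min/max bounds from the option's type, which is why repeating
# "Possible values: a valid port number" in the help text is redundant.
from dataclasses import dataclass


@dataclass
class PortOpt:
    name: str
    default: int
    help: str
    min: int = 0       # a port option always carries these bounds
    max: int = 65535


def render(opt: PortOpt) -> str:
    """Render the commented sample-config stanza for one option."""
    return "\n".join([
        f"# {opt.help} (port value)",
        f"# Minimum value: {opt.min}",
        f"# Maximum value: {opt.max}",
        f"#{opt.name} = {opt.default}",
    ])


print(render(PortOpt("registry_port", 9191,
                     "Port the registry server is listening on.")))
```

The output matches the generator excerpt quoted in the thread: the type annotation and bounds come from the option definition, not from the help string.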

[openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-24 Thread Morgan Fainberg
I want to welcome Rodrigo Duarte (rodrigods) to the keystone core team.
Rodrigo has been a consistent contributor to keystone and has been
instrumental in the federation implementations. Over the last cycle he has
shown an understanding of the code base and contributed quality reviews.

I am super happy (as proxy for Steve) to welcome Rodrigo to the Keystone
Core team.

Cheers,
--Morgan


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Morgan Fainberg
On Tue, May 24, 2016 at 8:55 AM, Corey Bryant <corey.bry...@canonical.com>
wrote:

>
>
> On Tue, May 24, 2016 at 11:11 AM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>>
>>
>> On Tue, May 24, 2016 at 5:53 AM, Corey Bryant <corey.bry...@canonical.com
>> > wrote:
>>
>>> Hi All,
>>>
>>> Are there any plans to converge on one ldap client across projects?
>>> Some projects have moved to ldap3 and others are using pyldap (both are in
>>> global requirements).
>>>
>>> The issue we're running into in Ubuntu is that we can only have one ldap
>>> client in Ubuntu main, while the others will live in universe.
>>>
>>> --
>>> Regards,
>>> Corey
>>>
>>>
>> Out of curiosity, what drives this requirement? pyldap and ldap3 do not
>> overlap in namespace and can co-install just fine. This is no different
>> than previously having python-ldap and ldap3.
>>
>> It seems a little arbitrary to say only one of these can be in main, but
>> this is why I am asking.
>>
>>
> No problem, thanks for asking.  I agree, it's no different than
> python-ldap and ldap3 and it's not a co-installability issue.  This is just
> a policy for Ubuntu main.  If we file a Main Inclusion Request (MIR) for a
> new ldap client then we'll be asked to work on what's needed to get the
> other client package out of main, which consists of patching use of one
> client for the other.
>
>
Ah, OK, sure; it still sounds a little silly IMHO, but there is only so much
we can do on that front ;). So the reality is Keystone is considering ldap3,
but there have been concerns about ldap3's interface compared to the
relatively tried-and-true pyldap (a clean fork of python-ldap with py3
support). Long term we may move to ldap3. Short term, we wanted Python 3
support, so the drop-in replacement for python-ldap was the clear winner
(ldap3 is significantly more work to support, and even when/if we support it
there will be a period where we support both, just in different drivers).

Basically, if we add ldap3 to Keystone, we will be supporting both for a
not-insignificant time. For now we're leaning on pyldap for the foreseeable
future.


> --
> Regards,
> Corey
>


Re: [openstack-dev] Plans to converge on one ldap client?

2016-05-24 Thread Morgan Fainberg
On Tue, May 24, 2016 at 5:53 AM, Corey Bryant 
wrote:

> Hi All,
>
> Are there any plans to converge on one ldap client across projects?  Some
> projects have moved to ldap3 and others are using pyldap (both are in
> global requirements).
>
> The issue we're running into in Ubuntu is that we can only have one ldap
> client in Ubuntu main, while the others will live in universe.
>
> --
> Regards,
> Corey
>
>
Out of curiosity, what drives this requirement? pyldap and ldap3 do not
overlap in namespace and can co-install just fine. This is no different
than previously having python-ldap and ldap3.

It seems a little arbitrary to say only one of these can be in main, but
this is why I am asking.

--morgan


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-23 Thread Morgan Fainberg
On Mon, May 23, 2016 at 4:28 PM, Gregory Haynes <g...@greghaynes.net> wrote:

> On Mon, May 23, 2016, at 05:24 PM, Morgan Fainberg wrote:
>
>
>
> On Mon, May 23, 2016 at 2:57 PM, Gregory Haynes <g...@greghaynes.net>
> wrote:
>
> On Fri, May 20, 2016, at 07:48 AM, Thierry Carrez wrote:
> > John Dickinson wrote:
> > > [...]
> > >> So the real question we need to answer is... where does OpenStack
> > >> stop, and where does the wider open source community start ? If
> > >> OpenStack is purely an "integration engine", glue code for other
> > >> lower-level technologies like hypervisors, databases, or distributed
> > >> block storage, then the scope is limited, Python should be plenty
> > >> enough, and we don't need to fragment our community. If OpenStack is
> > >> "whatever it takes to reach our mission", then yes we need to add one
> > >> language to cover lower-level/native optimization, because we'll
> > >> need that... and we need to accept the community cost as a
> > >> consequence of that scope choice. Those are the only two options on
> > >> the table.
> > >>
> > >> I'm actually not sure what is the best answer. But I'm convinced we,
> > >> as a community, need to have a clear answer to that. We've been
> > >> avoiding that clear answer until now, creating tension between the
> > >> advocates of an ASF-like collection of tools and the advocates of a
> > >> tighter-integrated "openstack" product. We have created silos and
> > >> specialized areas as we got into the business of developing time-
> > >> series databases or SDNs. As a result, it's not "one community"
> > >> anymore. Should we further encourage that, or should we focus on
> > >> what the core of our mission is, what we have in common, this
> > >> integration engine that binds all those other open source projects
> > >> into one programmable infrastructure solution ?
> > >
> > > You said the answer in your question. OpenStack isn't defined as an
> > > integration engine[3]. The definition of OpenStack is whatever it
> > > takes to fulfill our mission[4][5]. I don't mean that as a tautology.
> > > I mean that we've already gone to the effort of defining OpenStack. It's
> > > our mission statement. We're all about building a cloud platform upon
> > > which people can run their apps ("cloud-native" or otherwise), so we
> > > write the software needed to do that.
> > >
> > > So where does OpenStack stop and the wider community start? OpenStack
> > > includes the projects needed to fulfill its mission.
> >
> > I'd totally agree with you if OpenStack was developed in a vacuum. But
> > there is a large number of open source projects and libraries that
> > OpenStack needs to fulfill its mission that are not in "OpenStack": they
> > are external open source projects we depend on. Python, MySQL, libvirt,
> > KVM, Ceph, OpenvSwitch, RabbitMQ... We are not asking that those should
> > be included in OpenStack, and we are not NIHing replacements for those
> > in OpenStack either.
> >
> > So it is not as clear-cut as you present it, and you can approach this
> > dependency question from two directions.
> >
> > One is community-centric: "anything produced by our community is
> > OpenStack". If we are missing a lower-level piece to achieve our mission
> > and are developing it ourselves as a result, then it is OpenStack, even
> > if it ends up being a message queue or a database.
> >
> > The other approach is product-centric: "lower-level pieces are OpenStack
> > dependencies, rather than OpenStack itself". If we are missing a
> > lower-level piece to achieve our mission and are developing it as a
> > result, it could be developed on OpenStack infrastructure by members of
> > the OpenStack community but it is not "OpenStack the product", it's an
> > OpenStack *dependency*. It is not governed by the TC, it can use any
> > language and tool deemed necessary.
> >
> > On this second approach, there is the obvious question of where
> > "lower-level" starts, which as you explained above is not really
> > clear-cut. A good litmus test for it could be whenever Python is not
> > enough. If you can't develop it effectively with the language that is
> > currently sufficient for the rest of OpenStack, then developing it as an
> > OpenStack dependency in what

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-23 Thread Morgan Fainberg
On Mon, May 23, 2016 at 2:57 PM, Gregory Haynes  wrote:

> On Fri, May 20, 2016, at 07:48 AM, Thierry Carrez wrote:
> > John Dickinson wrote:
> > > [...]
> > >> So the real question we need to answer is... where does OpenStack
> > >> stop, and where does the wider open source community start ? If
> > >> OpenStack is purely an "integration engine", glue code for other
> > >> lower-level technologies like hypervisors, databases, or distributed
> > >> block storage, then the scope is limited, Python should be plenty
> > >> enough, and we don't need to fragment our community. If OpenStack is
> > >> "whatever it takes to reach our mission", then yes we need to add one
> > >> language to cover lower-level/native optimization, because we'll
> > >> need that... and we need to accept the community cost as a
> > >> consequence of that scope choice. Those are the only two options on
> > >> the table.
> > >>
> > >> I'm actually not sure what is the best answer. But I'm convinced we,
> > >> as a community, need to have a clear answer to that. We've been
> > >> avoiding that clear answer until now, creating tension between the
> > >> advocates of an ASF-like collection of tools and the advocates of a
> > >> tighter-integrated "openstack" product. We have created silos and
> > >> specialized areas as we got into the business of developing time-
> > >> series databases or SDNs. As a result, it's not "one community"
> > >> anymore. Should we further encourage that, or should we focus on
> > >> what the core of our mission is, what we have in common, this
> > >> integration engine that binds all those other open source projects
> > >> into one programmable infrastructure solution ?
> > >
> > > You said the answer in your question. OpenStack isn't defined as an
> > > integration engine[3]. The definition of OpenStack is whatever it
> > > takes to fulfill our mission[4][5]. I don't mean that as a tautology.
> > > I mean that we've already gone to the effort of defining OpenStack. It's
> > > our mission statement. We're all about building a cloud platform upon
> > > which people can run their apps ("cloud-native" or otherwise), so we
> > > write the software needed to do that.
> > >
> > > So where does OpenStack stop and the wider community start? OpenStack
> > > includes the projects needed to fulfill its mission.
> >
> > I'd totally agree with you if OpenStack was developed in a vacuum. But
> > there is a large number of open source projects and libraries that
> > OpenStack needs to fulfill its mission that are not in "OpenStack": they
> > are external open source projects we depend on. Python, MySQL, libvirt,
> > KVM, Ceph, OpenvSwitch, RabbitMQ... We are not asking that those should
> > be included in OpenStack, and we are not NIHing replacements for those
> > in OpenStack either.
> >
> > So it is not as clear-cut as you present it, and you can approach this
> > dependency question from two directions.
> >
> > One is community-centric: "anything produced by our community is
> > OpenStack". If we are missing a lower-level piece to achieve our mission
> > and are developing it ourselves as a result, then it is OpenStack, even
> > if it ends up being a message queue or a database.
> >
> > The other approach is product-centric: "lower-level pieces are OpenStack
> > dependencies, rather than OpenStack itself". If we are missing a
> > lower-level piece to achieve our mission and are developing it as a
> > result, it could be developed on OpenStack infrastructure by members of
> > the OpenStack community but it is not "OpenStack the product", it's an
> > OpenStack *dependency*. It is not governed by the TC, it can use any
> > language and tool deemed necessary.
> >
> > On this second approach, there is the obvious question of where
> > "lower-level" starts, which as you explained above is not really
> > clear-cut. A good litmus test for it could be whenever Python is not
> > enough. If you can't develop it effectively with the language that is
> > currently sufficient for the rest of OpenStack, then developing it as an
> > OpenStack dependency in whatever language is appropriate might be the
> > solution...
> >
> > That is what I mean by 'scope': where does "OpenStack" stop, and where
> > do "OpenStack dependencies" start ? It is a lot easier and a lot less
> > community-costly to allow additional languages in OpenStack dependencies
> > (we already have plenty there).
> >
>
> I strongly agree with both of the points about what OpenStack is defined
> as. We are a set of projects attempting to fulfill our mission. In
> doing so, we try to use outside dependencies to help us as much as
> possible. Sometimes we cannot find an outside dependency to satisfy a
> need whether due to optimization needs, licensing issues, usability
> problems, or simply because an outside project doesn't exist. That is
> when things become less clear-cut and we might need to develop software
> not purely/directly related to 

Re: [openstack-dev] [Keystone] Welcome Keystone to the World of Python 3

2016-05-23 Thread Morgan Fainberg
On Mon, May 23, 2016 at 12:03 PM, Doug Hellmann 
wrote:

> Excerpts from Morgan Fainberg's message of 2016-05-23 11:55:48 -0700:
> > On Mon, May 23, 2016 at 7:54 AM, Doug Hellmann 
> > wrote:
> >
> > > Excerpts from Morgan Fainberg's message of 2016-05-20 20:58:00 -0700:
> > > > We've gone through all of our test cases and all of our code base. At this
> > > > point Keystone is no longer skipping any of the tests (which do tend to
> > > > test the entire request stack) and we are properly gating on being
> > > > Python3.4 compatible.
> > > >
> > > > I want to thank everyone who has put in effort in the last few weeks to
> > > > punt the last of the patches through the gate. It would not have been doable
> > > > without those hacking on LdapPool, doing test cleanup, and those
> > > > reviewing/trying the code out.
> > > >
> > > > If you run across issues with Keystone and Python3, please let us
> know.
> > > >
> > > > A sincere thanks to the entire Keystone team involved in this
> multicycle
> > > > effort.
> > > >
> > > > --Morgan
> > >
> > > Is this for unit or functional tests?
> > >
> > >
> > Our unit tests and functional tests cross over significantly. The
> majority
> > of keystone's unit tests are really "stand up a full keystone and test
> the
> > API as though you were a client".
> >
> > This is not DSVM and true "functional" tests (most of the restful test
> > cases will move over to "functional" once it's working) are not fully
> > set up yet.
> >
> > In short "unit tests", except our unit tests are mostly "functional" in
> > nature.
> >
> > --Morgan
>
> Ah, OK, I was wondering if you were using the python 3 features of
> devstack and how well they were working for you.
>
>
That's the next plan, of course.


> Doug
>


Re: [openstack-dev] [Keystone] Welcome Keystone to the World of Python 3

2016-05-23 Thread Morgan Fainberg
On Mon, May 23, 2016 at 7:54 AM, Doug Hellmann 
wrote:

> Excerpts from Morgan Fainberg's message of 2016-05-20 20:58:00 -0700:
> > We've gone through all of our test cases and all of our code base. At this
> > point Keystone is no longer skipping any of the tests (which do tend to
> > test the entire request stack) and we are properly gating on being
> > Python3.4 compatible.
> >
> > I want to thank everyone who has put in effort in the last few weeks to
> > punt the last of the patches through the gate. It would not have been doable
> > without those hacking on LdapPool, doing test cleanup, and those
> > reviewing/trying the code out.
> >
> > If you run across issues with Keystone and Python3, please let us know.
> >
> > A sincere thanks to the entire Keystone team involved in this multicycle
> > effort.
> >
> > --Morgan
>
> Is this for unit or functional tests?
>
>
Our unit tests and functional tests cross over significantly. The majority
of keystone's unit tests are really "stand up a full keystone and test the
API as though you were a client".

This is not DSVM and true "functional" tests (most of the restful test
cases will move over to "functional" once it's working) are not fully set up
yet.

In short "unit tests", except our unit tests are mostly "functional" in
nature.

--Morgan
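The "functional in nature" unit-test style described above can be illustrated with a small self-contained sketch: the whole WSGI application is stood up in-process and exercised through the request interface, with no network or deployed server involved. The app below is a hypothetical stand-in, not keystone's actual code.

```python
# Hypothetical sketch of a "functional-style unit test": drive a full
# WSGI app in-process, as a client would, without any network.
import json
from wsgiref.util import setup_testing_defaults


def app(environ, start_response):
    # Stand-in for the real service's WSGI application.
    body = json.dumps({"path": environ["PATH_INFO"]}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]


def request(path):
    """Issue one in-process request against the app and decode the reply."""
    environ = {}
    setup_testing_defaults(environ)  # fills in a minimal WSGI environ
    environ["PATH_INFO"] = path
    captured = {}

    def start_response(status, headers):
        captured["status"] = status

    body = b"".join(app(environ, start_response))
    return captured["status"], json.loads(body)


status, body = request("/v3/users")
print(status, body)  # 200 OK {'path': '/v3/users'}
```

A test written this way exercises routing, serialization, and status handling together, which is why such "unit" tests behave like functional tests.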


> Doug
>


[openstack-dev] [Keystone] Welcome Keystone to the World of Python 3

2016-05-20 Thread Morgan Fainberg
We've gone through all of our test cases and all of our code base. At this
point Keystone is no longer skipping any of the tests (which do tend to
test the entire request stack) and we are properly gating on being
Python3.4 compatible.

I want to thank everyone who has put in effort in the last few weeks to
punt the last of the patches through the gate. It would not have been doable
without those hacking on LdapPool, doing test cleanup, and those
reviewing/trying the code out.

If you run across issues with Keystone and Python3, please let us know.

A sincere thanks to the entire Keystone team involved in this multicycle
effort.

--Morgan
--
Morgan Fainberg (notmorgan)


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-19 Thread Morgan Fainberg
These lower-level technologies have a place in "OpenStack's big tent" but
should be evaluated carefully before inclusion; I am not advocating we include
MySQL, PGSQL, or RabbitMQ as "part of the big tent". I think object storage
is an important thing within the cloud ecosystem, and by its nature Swift
belongs as part of the big tent.

The cost for golang, in my view after having a large number of
conversations, is worth it. This conversation will continue to come up in
different forms if we nix golang here; the low-level optimizations are
relevant, and the projects (initially Swift) that are running up against
these issues are in fact a part of OpenStack today. I also want to make
sure we, as the TC and community, are not evaluating a specific project
here, but OpenStack as a whole. If we are evaluating one project
specifically and making a change based upon where it is in the ecosystem,
we should step way back and evaluate why this has only now become part of
the conversation, and address that issue first.


> I'm actually not sure what is the best answer. But I'm convinced we, as a
> community, need to have a clear answer to that. We've been avoiding that
> clear answer until now, creating tension between the advocates of an
> ASF-like collection of tools and the advocates of a tighter-integrated
> "openstack" product. We have created silos and specialized areas as we got
> into the business of developing time-series databases or SDNs. As a result,
> it's not "one community" anymore. Should we further encourage that, or
> should we focus on what the core of our mission is, what we have in common,
> this integration engine that binds all those other open source projects
> into one programmable infrastructure solution ?
>
> --
> Thierry Carrez (ttx)
>
>
I think we would do better as an ASF-like organization with a bias towards
programmable infrastructure. I do not think we should include more
time-series databases, RDBMSs, or even Message Brokers simply for the sake
of inclusion (there are examples of these in the big tent already). But we
should be careful about excluding and eliminating projects and scope as
well. We should include golang, and work on the other issues separately
(this sounds like something we should be working on setting proper
guideposts for the community, how we do that, etc, and revolves around
improving how the TC provides leadership). As part of improving the
leadership of OpenStack, we need to also work to have a clear
product-vision (and I do not mean "product" as in something specifically
sell-able). I think part of our issue and what is driving these
conversations is a lack of clear product vision which is part of the TC
providing the guideposts and improving leadership within OpenStack.

--
Morgan Fainberg (notmorgan)


Re: [openstack-dev] [keystone] Newton midcycle planning

2016-05-17 Thread Morgan Fainberg
On Tue, May 10, 2016 at 4:26 PM, Morgan Fainberg <morgan.fainb...@gmail.com>
wrote:

> On Wed, Apr 13, 2016 at 7:07 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> It is that time again, the time to plan the Keystone midcycle! Looking at
>> the schedule [1] for Newton, the weeks that make the most sense look to be
>> (not in preferential order):
>>
>> R-14 June 27-01
>> R-12 July 11-15
>> R-11 July 18-22
>>
>> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
>> on previous attendance we can expect ~30 people to attend. Based upon all
>> the information (other midcycles, other events, the US July4th holiday), I
>> am thinking that week R-12 (the week of the newton-2 milestone) would be
>> the best offering. Weeks before or after these three tend to push too close
>> to the summit or too far into the development cycle.
>>
>> I am trying to arrange for a venue in the Bay Area (most likely will be
>> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
>> have done east coast and central over the last few midcycles.
>>
>> Please let me know your thoughts / preferences. In summary:
>>
>> * Venue will be Bay Area (more info to come soon)
>>
>> * Options of weeks (in general subjective order of preference): R-12,
>> R-11, R-14
>>
>> Cheers,
>> --Morgan
>>
>> [1] http://releases.openstack.org/newton/schedule.html
>>
>
> We have an update for the midcycle planning!
>
> First of all, I want to thank Cisco for hosting us for this midcycle. The
> dates will be R-11 [1], Wed-Friday, July 20-22 (expect to be around for a
> full day on the 20th and at least a half day on the 22nd). The address will
> be 170 W Tasman Dr, San Jose, CA 95134. The exact building and room # will
> be determined soon. Expect a place (wiki, Google form, etc.) to be posted
> this week so we can collect real numbers of those who will be joining us.
>
> Thanks for being patient with the planning. We should have ~35 spots for
> this midcycle.
>
> Cheers,
> --Morgan
>
>
RSVP Form for the Keystone Newton MidCycle:
http://goo.gl/forms/NfFMpJe6MSCXSNhr2

Please RSVP early; maximum attendance is 35.

We look forward to seeing you there!


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Morgan Fainberg
On Fri, May 13, 2016 at 1:12 PM, Adam Young  wrote:

> On 05/13/2016 12:52 PM, Monty Taylor wrote:
>
>> On 05/13/2016 11:38 AM, Eric Larson wrote:
>>
>>> Monty Taylor writes:
>>>
>>> On 05/13/2016 08:23 AM, Mehdi Abaakouk wrote:

> On Fri, May 13, 2016 at 02:58:08PM +0200, Julien Danjou wrote:
>
>> What's wrong with pymemcache, that we picked for tooz and are using
>> for 2 years now?
>>
>>   https://github.com/pinterest/pymemcache
>>
> Looks like a good alternative.
>
 Honestly, nobody should be using pymemcache or python-memcached or
 pylibmc for anything caching related in OpenStack. People should be
 using oslo.cache - however, if that needs work before it's usable,
 people should be using dogpile.cache, which is what oslo.cache uses on
 the backend.

 dogpile is pluggable, so it means that the backend used for caching
 can be chosen in a much broader manner. As morgan mentions elsewhere,
 that means that people who want to use a different memcache library
 just need to write a dogpile driver.

 Please don't anybody directly use memcache libraries for caching in
 OpenStack. Please.

 Using dogpile doesn't remove the decision of what caching backend is
>>> used. Dogpile has support (I think) for all the libraries mentioned here:
>>>
>>>
>>> https://bitbucket.org/zzzeek/dogpile.cache/src/87965ada186f9b3a4eb7ff033a2e31437d5e9bc6/dogpile/cache/backends/memcached.py
>>>
>>>
>>> Oslo cache would need to be the one making decision as to what backend
>>> is used if we need to have something consistent.
>>>
>> I do not understand why oslo.cache would make a backend decision. It's a
>> config-driven thing. I could see oslo.cache having a _default_ ... but
>> having oslo.cache use dogpile.cache and then remove the ability for a
>> deployer to chose which caching backend dogpile uses seems more than
>> passing strange to me.
>>
>
> With oslo cache, you say "I want memcache" and Oslo picks the driver.
> This standardizes the implementation within OpenStack.
>
>
You can also specify pylibmc or BMemcached, or Redis, or  as well when calling
.configure() on the dogpile.cache region. The default oslo.cache uses is
python-memcached, but that could be fixed easily.

>
>
>> With that said, it is important that we understand what projects have
>>> specific requirements or have experienced issues, otherwise there is a
>>> good chance teams will hit an issue down the line and have to work
>>> around it.
>>>
>> Yup. Totally agree. I certainly don't want to imply that there aren't
>> issues with memcache libs nor that they shouldn't be fixed. Merely
>> trying to point out that individual projects programming to the
>> interface of any of the libs is a thing that should be fixed.
>>


Re: [openstack-dev] [Freezer] Replace Gnu Tar with DAR

2016-05-13 Thread Morgan Fainberg
On Fri, May 13, 2016 at 3:07 PM, Dieterly, Deklan 
wrote:

> Does anybody see any issues if Freezer used DAR instead of Gnu Tar? DAR
> seems to handle a particular use case that Freezer has while Gnu Tar does
> not.
> --
> Deklan Dieterly
>
> Senior Systems Software Engineer
> HPE
>
>

Please elaborate on what the specific use case is; knowing the reasoning
typically helps us provide guidance on whether a given tool is the best
choice. In this case, I don't know what you're running into to weigh DAR vs.
Gnu Tar.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Morgan Fainberg
On Fri, May 13, 2016 at 6:23 AM, Mehdi Abaakouk  wrote:

> On Fri, May 13, 2016 at 02:58:08PM +0200, Julien Danjou wrote:
>
>> What's wrong with pymemcache, that we picked for tooz and are using for
>> 2 years now?
>>
>>  https://github.com/pinterest/pymemcache
>>
>
> Looks like a good alternative.
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>

pymemcache is not a drop-in replacement for python-memcached, but wiring it
up to dogpile is effectively a one-time cost and would need to be done in
general for the way keystone / keystonemiddleware work. It isn't
unreasonable to go with this option.

pymemcache used to have glaring holes in capabilities, but it seems it is
mostly up to par now.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Morgan Fainberg
On May 13, 2016 05:32, "Kiall Mac Innes"  wrote:
>
> Hey Dims,
>
> From what I remember, oslo.cache seemed unnecessarily complex to use
> vs memcache's simplicity, and didn't have any usage docs[1] to help folks
> get started using it.
>
> I can see there are some docs under the module index, but at a glance they
> seem somewhat disconnected and incomplete.
>
> Are there any complete examples of its use available for us to compare
> against python-memcached and pylibmc etc?
>
> If they're comparable functionality- and perf-wise, I don't see a reason why
> projects wouldn't switch. I'll certainly make the effort for Designate if it
> looks like the right thing to do.
>
> Thanks,
> Kiall
>
> [1]: http://docs.openstack.org/developer/oslo.cache/usage.html
>
>
> On 13/05/16 12:35, Davanum Srinivas wrote:
> > Steve,
> >
> > Couple of points:
> >
> > * We can add pylibmc to g-r and phase out python-memcached over a time
period.
> > * If folks are using python-memcached, we should switch them over to
> > oslo.cache, then only oslo.cache will reference either
> > python-memcached or pylibmc which will make the situation easier to
> > handle.
> >
> > Thanks,
> > Dims
> >
> > On Fri, May 13, 2016 at 4:14 AM, Steve Martinelli
> >  wrote:
> >> /me gets the ball rolling
> >>
> >> Just when python3 support for keystone was looking like a reality,
we've hit
> >> another snag. Apparently there are several issues with
python-memcached in
> >> py3, putting it simply: it loads, but doesn't actually work. I've
included
> >> projects in the subject line that use python-memcached (based on
codesearch)
> >>
> >> Enter pylibmc; apparently it is (almost?) a drop-in replacement,
performs
> >> better, and is more actively maintained.
> >>
> >> - Has anyone had success using python-memcached in py3?
> >> - Is anyone interested in using pylibmc in their project instead of
> >> python-memcached?
> >> - Will buy-in from all projects be necessary to proceed for any single
> >> project?
> >>
> >> Open issues like this:
> >> https://github.com/linsomniac/python-memcached/issues/94 make me sad.
> >>
> >> stevemar
> >>
> >>

Oslo.cache has a higher barrier to entry, but that is based on how dogpile
works and how keystone consumed it initially to be "pluggable".

We should make oslo.cache better. It has some nice built-in features thanks
to dogpile, and once configured it doesn't strictly lock you to memcache
(it provides a simple memcache-like interface over many backends). It also
allows for easier transitions between the libraries if they are needed (a
dogpile driver is not hard to make).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Morgan Fainberg
On May 13, 2016 05:25, "Mehdi Abaakouk"  wrote:
>>>
>>> - Is anyone interested in using pylibmc in their project instead of
>>> python-memcached?
>
>
> This is not a real drop-in replacement: pylibmc.Client is not threadsafe
> like python-memcached [1]. Also, it's written in C; that shouldn't be a
> problem for keystone because you don't use eventlet anymore, but for
> projects that still use eventlet no greenlet switch will occur during
> memcached I/O.
>
> [1]
http://sendapatch.se/projects/pylibmc/misc.html#differences-from-python-memcached
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>

FYI - python-memcached is not really greenlet-safe under load. We found
that it explicitly uses thread.local, which means that enough connections
coming into a service that leans heavily on memcache can cause the memcached
server to end up with a ton of open connections and max out TCP connection
limits.

This had to be worked around in keystone under eventlet. Luckily it hasn't
hit other projects too hard. This move should consider that design issue as
well (regardless of the final choice).
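A toy illustration of the design issue (not python-memcached's actual code): when a client keeps its connection in thread.local, one logical client object means one server connection per thread, and under eventlet monkey-patching the same pattern yields one connection per greenlet.

```python
import threading

class FakeMemcacheClient:
    """Toy stand-in for a client that stores its connection in
    thread.local (illustration only, not the library's actual code)."""

    def __init__(self):
        self._local = threading.local()
        self._lock = threading.Lock()
        self.connections_opened = 0

    def _connection(self):
        # Each thread sees its own `conn` attribute, so one logical
        # client quietly opens one "socket" per thread touching it.
        if not hasattr(self._local, "conn"):
            with self._lock:
                self.connections_opened += 1
            self._local.conn = object()  # pretend this is a TCP socket
        return self._local.conn

client = FakeMemcacheClient()
workers = [threading.Thread(target=client._connection) for _ in range(50)]
for t in workers:
    t.start()
for t in workers:
    t.join()

# One logical client, 50 concurrent callers -> 50 server connections.
# With eventlet monkey-patching, greenlets behave like threads here.
print(client.connections_opened)  # 50
```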

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Morgan Fainberg
On May 13, 2016 04:36, "Davanum Srinivas"  wrote:
>
> Steve,
>
> Couple of points:
>
> * We can add pylibmc to g-r and phase out python-memcached over a time
period.
> * If folks are using python-memcached, we should switch them over to
> oslo.cache, then only oslo.cache will reference either
> python-memcached or pylibmc which will make the situation easier to
> handle.
>
> Thanks,
> Dims
>
> On Fri, May 13, 2016 at 4:14 AM, Steve Martinelli
>  wrote:
> > /me gets the ball rolling
> >
> > Just when python3 support for keystone was looking like a reality,
we've hit
> > another snag. Apparently there are several issues with python-memcached
in
> > py3, putting it simply: it loads, but doesn't actually work. I've
included
> > projects in the subject line that use python-memcached (based on
codesearch)
> >
> > Enter pylibmc; apparently it is (almost?) a drop-in replacement,
performs
> > better, and is more actively maintained.
> >
> > - Has anyone had success using python-memcached in py3?
> > - Is anyone interested in using pylibmc in their project instead of
> > python-memcached?
> > - Will buy-in from all projects be necessary to proceed for any single
> > project?
> >
> > Open issues like this:
> > https://github.com/linsomniac/python-memcached/issues/94 make me sad.
> >
> > stevemar
> >
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>

Another option is that we could fork python-memcached and maintain it
ourselves (I would prefer to use pylibmc), but for a strictly pure-Python
implementation it might make sense to do this.

Pymemcache is also another option, but it was missing basic features and
behaved significantly differently (no multi-server support, etc.) last time
I looked. We should revisit pymemcache as well if we are looking at
pylibmc.

For the record, Python-memcached has at best been sporadically maintained
for the last year. This topic has come up a few times.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Morgan Fainberg
On Thu, May 12, 2016 at 10:42 AM, Sean Dague  wrote:

> We just had to revert another v3 "fix" because it wasn't verified to
> work correctly in the gate - https://review.openstack.org/#/c/315631/
>
> While I realize project-config patches are harder to test, you can do so
> with a bogus devstack-gate change that has the same impact in some cases
> (like the case above).
>
> I think the important bit on moving forward is that every patch here
> which might be disruptive has some manual verification about it working
> posted in review by v3 team members before we approve them.
>
> I also think we need to largely stay non voting on the v3 only job until
> we're quite confident that the vast majority of things are flipped over
> (for instance there remains an issue in nova <=> ironic communication
> with v3 last time I looked). That allows us to fix things faster because
> we don't wedge some slice of the projects in a gate failure.
>
> -Sean
>
> On 05/12/2016 11:08 AM, Raildo Mascena wrote:
> > Hi folks,
> >
> > Although the Identity v2 API is deprecated as of Mitaka [1], some
> > services haven't implemented proper support to v3 yet. For instance,
> > we implemented a patch that made DevStack v3 by default that, when
> > merged, broke a lot of project gates in a few hours [2]. This
> > happened due to specific services incompatibility issues with Keystone
> > v3 API, such as hardcoded v2 usage, usage of removed keystoneclient CLI,
> > requesting v2 service tokens and the lack of keystoneauth session usage.
> >
> > To discuss those points, we did a cross-project work
> > session in the Newton Summit[3]. One point we are working on at this
> > moment is creating gates to ensure the main OpenStack services
> > can live without the Keystone v2 API. Those gates setup devstack with
> > only Identity v3 enabled and run the Tempest suite on this environment.
> >
> > We already did that for a few services, like Nova, Cinder, Glance,
> > Neutron, Swift. We are doing the same job for other services such
> > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
> >
> > In addition, we are creating jobs to run functional tests for the
> > services on this identity v3-only environment[5]. Also, we have a couple
> > of other fronts that we are doing like removing some hardcoded v2 usage
> > [6], implementing keystoneauth sessions support in clients and APIs [7].
> >
> > Our plan is to keep tackling as many items from the cross-project
> > session etherpad as we can, so we can achieve more confidence in moving
> > to a DevStack working v3-only, making sure everyone is prepared to work
> > with Keystone v3 API.
> >
> > Feedbacks and reviews are very appreciated.
> >
> > [1] https://review.openstack.org/#/c/251530/
> > [2] https://etherpad.openstack.org/p/v3-only-devstack
> > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
> > [4]
> https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
> > [5]
> https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
> > [6] https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
> > [7] https://review.openstack.org/#/q/topic:use-ksa
> >
> > Cheers,
> >
> > Raildo
> >
> >
> >
>

This also comes back to the conversation at the summit. We need to propose
the timeline to turn over to v3 (regardless of voting/non-voting today) so
that it is possible to set the timeline that is expected for everything to
get fixed (and where we are expecting/planning to stop reverting while
focusing on fixing the v3-only changes).

I am going to ask the Keystone team to set forth the timeline and commit to
getting the pieces in order so that we can make v3-only voting rather than
playing the propose/revert game we're currently doing. A proposed timeline
and gameplan will only help at this point.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Newton midycle planning

2016-05-10 Thread Morgan Fainberg
On Wed, Apr 13, 2016 at 7:07 PM, Morgan Fainberg <morgan.fainb...@gmail.com>
wrote:

> It is that time again, the time to plan the Keystone midcycle! Looking at
> the schedule [1] for Newton, the weeks that make the most sense look to be
> (not in preferential order):
>
> R-14 June 27-01
> R-12 July 11-15
> R-11 July 18-22
>
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based
> on previous attendance we can expect ~30 people to attend. Based upon all
> the information (other midcycles, other events, the US July4th holiday), I
> am thinking that week R-12 (the week of the newton-2 milestone) would be
> the best offering. Weeks before or after these three tend to push too close
> to the summit or too far into the development cycle.
>
> I am trying to arrange for a venue in the Bay Area (most likely will be
> South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we
> have done east coast and central over the last few midcycles.
>
> Please let me know your thoughts / preferences. In summary:
>
> * Venue will be Bay Area (more info to come soon)
>
> * Options of weeks (in general subjective order of preference): R-12,
> R-11, R-14
>
> Cheers,
> --Morgan
>
> [1] http://releases.openstack.org/newton/schedule.html
>

We have an update for the midcycle planning!

First of all, I want to thank Cisco for hosting us for this midcycle. The
dates will be R-11[1], Wed-Friday: July 20-22 (expect to be around for a
full day on the 20th and at least 1/2 day on the 22nd). The address will be
170 W Tasman Dr, San Jose, CA 95134. The exact building and room # will be
determined soon. Expect a place (wiki, google form, etc.) to be posted this
week so we can collect real numbers of those who will be joining us.

Thanks for being patient with the planning. We should have ~35 spots for
this midcycle.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Morgan Fainberg
On Tue, May 3, 2016 at 1:46 PM, Clint Byrum  wrote:

> Excerpts from Morgan Fainberg's message of 2016-05-03 11:13:38 -0700:
> > On Tue, May 3, 2016 at 10:28 AM, Monty Taylor 
> wrote:
> >
> > > On 05/03/2016 11:47 AM, Clint Byrum wrote:
> > >
> > >> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
> > >>
> > >>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
> > >>>
> > 
> >  Perhaps we have different perspectives. How is accepting what we
> >  previously emitted and told the user would be valid sneaky or wrong?
> >  Sounds like common sense due diligence to me.
> > 
> > >>>
> > >>> I agree - I see no reason we can't validate previously emitted
> tokens.
> > >>> But I don't agree strongly, because re-authing on invalid token is a
> > >>> thing users do hundreds of times a day. (these aren't oauth API Keys
> or
> > >>> anything)
> > >>>
> > >>>
> > >> Sure, one should definitely not be expecting everything to always work
> > >> without errors. On this we agree for sure. However, when we do decide
> to
> > >> intentionally induce errors for reasons we have not done so before, we
> > >> should weigh the cost of avoiding that with the cost of having it
> > >> happen. Consider this strawman:
> > >>
> > >> - User gets token, it says "expires_at Now+4 hours"
> > >> - User starts a brief set of automation tasks in their system
> > >>that does not use python and has not failed with invalid tokens
> thus
> > >>far.
> > >> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
> > >> - User's automation jobs fail at next OpenStack REST call
> > >> - User begins debugging, wasting hours of time figuring out that
> > >>their tokens, which they stored and show should still be valid,
> were
> > >>rejected.
> > >>
> > >
> > > Ah - I guess this is where we're missing each other, which is good and
> > > helpful.
> > >
> > > I would argue that any user that is _storing_ tokens is doing way too
> much
> > > work. If they are doing short tasks, they should just treat them as
> > > ephemeral. If they are doing longer tasks, they need to deal with
> timeouts.
> > > SO, this:
> > >
> > >
> > > - User gets token, it says "expires_at Now+4 hours"
> > > - User starts a brief set of automation tasks in their system
> > >that does not use python and has not failed with invalid tokens thus
> > >far.
> > >
> > > should be:
> > >
> > > - User starts a brief set of automation tasks in their system
> > > that does not use python and has not failed with invalid tokens thus
> > > far.
> > >
> > > "Get a token" should never be an activity that anyone ever consciously
> > > performs.
> > >
> > >
> > This is my view. Never, ever, ever assume your token is good until
> > expiration. Assume the token might be broken at any request and know how
> to
> > re-auth.
> >
> > > And now they have to refactor their app, because this may happen again,
> > >> and they have to make sure that invalid token errors can bubble up to
> the
> > >> layer that has the username/password, or accept rolling back and
> > >> retrying the whole thing.
> > >>
> > >> I'm not saying anybody has this system, I'm suggesting we're putting
> > >> undue burden on users with an unknown consequence. Falling back to
> UUID
> > >> for a while has a known cost of a little bit of code and checking junk
> > >> tokens twice.
> > >>
> > >
> > Please do not advocate "falling back" to UUID. I am actually against
> making
> > fernet the default (very, very strongly), if we have to have this
> > "fallback" code. It is the wrong kind of approach, we already have
> serious
> > issues with complex code paths that produce subtly different results. If
> > the options are:
> >
> > 1) Make Fernet Default and have "fallback" code
> >
> > or
> >
> > 2) Leave UUID default and highly recommend fernet (plus gate on fernet
> > primarily, default in devstack)
> >
> > I will jump on my soapbox and be very loudly in favor of the 2nd option.
> If
> > we communicate this is a change that will happen (hey, maybe throw an
> > error/make the config option "none" so it has to be explicit) in Newton,
> > and then move to a Fernet default in O - I'd be ok with that.
> >
> > >
> > > Totally. I have no problem with the suggestion that keystone handle
> this.
> > > But I also think that users should quite honestly stop thinking about
> > > tokens at all. Tokens are an implementation detail that if any user
> thinks
> > > about while writing their app they're setting themselves up to be
> screwed -
> > > so we should make sure we're not talking about them in a primary way
> such
> > > as to suggest that people focus a lot of energy on them.
> > >
> > > (I also frequently see users who are using python libraries even get
> > > everything horribly wrong and screw themselves because they think they
> need
> > > to think about tokens)
> > >
> >
Better communication that tokens are ephemeral and should not be assumed to
work (even until their expiry) should be the messaging we use.

Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Morgan Fainberg
On Tue, May 3, 2016 at 10:28 AM, Monty Taylor  wrote:

> On 05/03/2016 11:47 AM, Clint Byrum wrote:
>
>> Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:
>>
>>> On 05/03/2016 08:55 AM, Clint Byrum wrote:
>>>

 Perhaps we have different perspectives. How is accepting what we
 previously emitted and told the user would be valid sneaky or wrong?
 Sounds like common sense due diligence to me.

>>>
>>> I agree - I see no reason we can't validate previously emitted tokens.
>>> But I don't agree strongly, because re-authing on invalid token is a
>>> thing users do hundreds of times a day. (these aren't oauth API Keys or
>>> anything)
>>>
>>>
>> Sure, one should definitely not be expecting everything to always work
>> without errors. On this we agree for sure. However, when we do decide to
>> intentionally induce errors for reasons we have not done so before, we
>> should weigh the cost of avoiding that with the cost of having it
>> happen. Consider this strawman:
>>
>> - User gets token, it says "expires_at Now+4 hours"
>> - User starts a brief set of automation tasks in their system
>>that does not use python and has not failed with invalid tokens thus
>>far.
>> - Keystone nodes are all updated at one time (AMAZING cloud ops team)
>> - User's automation jobs fail at next OpenStack REST call
>> - User begins debugging, wasting hours of time figuring out that
>>their tokens, which they stored and show should still be valid, were
>>rejected.
>>
>
> Ah - I guess this is where we're missing each other, which is good and
> helpful.
>
> I would argue that any user that is _storing_ tokens is doing way too much
> work. If they are doing short tasks, they should just treat them as
> ephemeral. If they are doing longer tasks, they need to deal with timeouts.
> SO, this:
>
>
> - User gets token, it says "expires_at Now+4 hours"
> - User starts a brief set of automation tasks in their system
>that does not use python and has not failed with invalid tokens thus
>far.
>
> should be:
>
> - User starts a brief set of automation tasks in their system
> that does not use python and has not failed with invalid tokens thus
> far.
>
> "Get a token" should never be an activity that anyone ever consciously
> performs.
>
>
This is my view. Never, ever, ever assume your token is good until
expiration. Assume the token might be broken at any request and know how to
re-auth.


> And now they have to refactor their app, because this may happen again,
>> and they have to make sure that invalid token errors can bubble up to the
>> layer that has the username/password, or accept rolling back and
>> retrying the whole thing.
>>
>> I'm not saying anybody has this system, I'm suggesting we're putting
>> undue burden on users with an unknown consequence. Falling back to UUID
>> for a while has a known cost of a little bit of code and checking junk
>> tokens twice.
>>
>
Please do not advocate "falling back" to UUID. I am actually against making
fernet the default (very, very strongly), if we have to have this
"fallback" code. It is the wrong kind of approach, we already have serious
issues with complex code paths that produce subtly different results. If
the options are:

1) Make Fernet Default and have "fallback" code

or

2) Leave UUID default and highly recommend fernet (plus gate on fernet
primarily, default in devstack)

I will jump on my soapbox and be very loudly in favor of the 2nd option. If
we communicate this is a change that will happen (hey, maybe throw an
error/make the config option "none" so it has to be explicit) in Newton,
and then move to a Fernet default in O - I'd be ok with that.


>
> Totally. I have no problem with the suggestion that keystone handle this.
> But I also think that users should quite honestly stop thinking about
> tokens at all. Tokens are an implementation detail that if any user thinks
> about while writing their app they're setting themselves up to be screwed -
> so we should make sure we're not talking about them in a primary way such
> as to suggest that people focus a lot of energy on them.
>
> (I also frequently see users who are using python libraries even get
> everything horribly wrong and screw themselves because they think they need
> to think about tokens)
>

Better communication that tokens are ephemeral and should not be assumed to
work (even until their expiry) should be the messaging we use. It's simple:
plan to re-auth as needed and handle failures.
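A sketch of that messaging in code; `do_request` and `get_token` here are placeholders for an HTTP call and an auth call, not any real client API:

```python
def call_with_reauth(do_request, get_token, retries=1):
    """Never trust expires_at: on a 401, fetch a fresh token and retry.

    do_request(token) -> (status, body); get_token() -> a new token.
    """
    token = get_token()
    for _ in range(retries + 1):
        status, body = do_request(token)
        if status != 401:
            return status, body
        token = get_token()  # token died early (rotation, revocation, ...)
    return status, body

# Toy demo: the first token is rejected once, re-auth succeeds.
tokens = iter(["stale-token", "fresh-token"])
result = call_with_reauth(
    lambda tok: (200, "ok") if tok == "fresh-token" else (401, ""),
    lambda: next(tokens),
)
print(result)  # (200, 'ok')
```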

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Morgan Fainberg
On Mon, May 2, 2016 at 11:32 AM, Adam Young  wrote:

> On 04/26/2016 08:28 AM, Guangyu Suo wrote:
>
> Hello, oslo team
>
> For now, some sensitive options like passwords or tokens are configured as
> plaintext; anyone who has the privilege to read the configuration file can
> get the real password. This may be a security problem that is
> unacceptable for some people.
>
> So the first solution that comes to mind is to encrypt these options when
> configuring them and decrypt them when reading them in oslo.config. This is
> a bit like what apache/openldap do, but the difference is that this software
> applies a salted hash to the password; that is a one-way transformation that
> can't be decrypted, and the software can recognize the hashed value. But if
> we do this work in oslo.config, for example the admin_password in the
> keystone_middleware section, we must feed keystone the plaintext
> password, which will be hashed in keystone to compare with the stored hashed
> password; thus the encrypted value in oslo.config must be decrypted to
> plaintext. So we should encrypt these options using a symmetric or
> asymmetric method with a key, put the key in a well-secured place,
> and decrypt them using the same key when reading them.
>
> Of course, this feature should be default closed. Any ideas?
>
>
> PKI.  Each service gets a client certificate that they use, signed by a
> selfsigned CA on the controller node, and uses the Tokenless/X509 Mapping
> in Keystone to identify itself.
>
> Do not try to build a crypto system around passwords.  None of us are
> qualified to do that.
>
>
++ Rule 1 of crypto - don't roll your own system.


> We should be able to kill explicit service users and use X509 any way.
>
>
++ We should be moving towards token-less auth for service users where
possible. This doesn't remove the need to manage certs/keys properly (NSS,
devops-y things, etc.)
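For illustration, keystone's tokenless X.509 support is driven by a small config block roughly like the following (option names quoted from memory and may differ by release; verify against keystone's sample configuration before relying on them):

```ini
# keystone.conf (sketch)
[tokenless_auth]
# Issuer DN(s) of client certificates keystone will trust
trusted_issuer = CN=ExampleCA,O=Example Org
# Federation protocol used to map the certificate to a keystone identity
protocol = x509
# SSL environment variable carrying the issuer DN (set by the web server)
issuer_attribute = SSL_CLIENT_I_DN
```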


> Kerberos would work, too, for deployments that prefer that form of
> Authentication.  We can document this, but do not need to implement.
>
>
Never hurts to have alternatives.


> Certmonger can manage the certificates for us.
>
> Anchor can act as the CA for deployments that want something more than
> selfsigned, but don't want to go with a full CA.
>
>
I'd be in favor of Anchor being the default (devstack? gate? "best
practices") choice for 'service users' over a full CA, but in either case,
as long as we have an easy-to-use, low-barrier-to-entry, well-formed system,
we're on the right path.


>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Morgan Fainberg
On Tue, Apr 26, 2016 at 4:25 PM, Guangyu Suo  wrote:

> I think there is a little misunderstanding here: the key point of
> this problem is that you store your password as *plaintext* in the
> configuration file, and maybe this password is also the password of many
> other systems. You can't stop the right person from doing the right thing:
> if someone gets the encrypted password and can also get into
>

We can engineer things for best practices. If this password is the "same as
many other systems", we need to have more conversations on why that is the
case and how we can encourage people to do the correct/best-practice thing
and not share passwords. I know the exposure of a password to your
infrastructure control plane is a big thing, but shared passwords are simply
not something we should be engineering specific solutions for; instead we
should document the best practices and why they are important.


> the box, then he is the right person, just as when somebody gets your
> password through a "brute force" attack you can't stop him from doing the
> right thing. If someone gets the encrypted password but cannot get into the
> box, he can do nothing, and that is our goal. So I think splitting the
> global configuration file into "general" and "secure" files, and encrypting
> the secure file, is the right way to do this.
>
>
> 2016-04-26 16:05 GMT-05:00 Doug Hellmann :
>
>> Excerpts from Morgan Fainberg's message of 2016-04-26 10:17:30 -0500:
>> > On Tue, Apr 26, 2016 at 9:24 AM, Jordan Pittier <
>> jordan.pitt...@scality.com>
>> > wrote:
>> >
>> > >
>> > >
>> > > On Tue, Apr 26, 2016 at 3:32 PM, Daniel P. Berrange <
>> berra...@redhat.com>
>> > > wrote:
>> > >
>> > >> On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
>> > >> > Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
>> > >> > > Hello, oslo team
>> > >> > >
>> > >> > > For now, some sensitive options like password or token are
>> configured
>> > >> as
>> > >> > > plaintext, anyone who has the priviledge to read the configure
>> file
>> > >> can get
>> > >> > > the real password, this may be a security problem that can't be
>> > >> > > unacceptable for some people.
>> > >>
>> > > It's not a security problem if your config files have the proper
>> > > permissions.
>> > >
>> > >
>> > >> > >
>> > >> > > So the first solution comes to my mind is to encrypt these
>> options
>> > >> when
>> > >> > > configuring them and decrypt them when reading them in
>> oslo.config.
>> > >> This is
>> > >> > > a bit like apache/openldap did, but the difference is these
>> softwares
>> > >> do a
>> > >> > > salt hash to the password, this is a one-way encryption that
>> can't be
>> > >> > > decrypted, these softwares can recognize the hashed value. But
>> if we
>> > >> do
>> > >> > > this work in oslo.config, for example the admin_password in
>> > >> > > keystone_middleware section, we must feed the keystone with the
>> > >> plaintext
>> > >> > > password which will be hashed in keystone to compare with the
>> stored
>> > >> hashed
>> > >> > > password, thus the encryped value in oslo.config must be
>> decryped to
>> > >> > > plaintext. So we should encrypt these options using symmetrical
>> or
>> > >> > > unsymmetrical method with a key, and put the key in a well
>> secured
>> > >> place,
>> > >> > > and decrypt them using the same key when reading them.
>> > >>
>> > > The issue here is to find a "well secured place". We should not only
>> move
>> > > the problem somewhere else.
>> > >
>> > >
>> > >> > >
>> > >> > > Of course, this feature should be default closed. Any ideas?
>> > >> >
>> > >> > Managing the encryption keys has always been the issue blocking
>> > >> > implementing this feature when it has come up in the past. We
>> can't have
>> > >> > oslo.config rely on a separate OpenStack service for key
>> management,
>> > >> > because presumably that service would want to use oslo.config and
>> then
>> > >> > we have a dependency cycle.
>> > >> >
>> > >> > So, we need a design that lets us securely manage those encryption
>> keys
>> > >> > before we consider adding encryption. If we solve that, it's then
>> > >> > probably simpler to encrypt an entire config file instead of
>> worrying
>> > >> > about encrypting individual values (something like how ansible
>> vault
>> > >> > works).
>> > >>
>> > >> IMHO encrypting oslo config files is addressing the wrong problem.
>> > >> Rather than having sensitive passwords stored in the main config
>> > >> files, we should have them stored completely separately by a secure
>> > >> password manager of some kind. The config file would then merely
>> > >> contain the name or uuid of an entry in the password manager. The
>> > >> service (eg nova-compute) would then query that password manager
>> > >> to get the actual sensitive password data it requires. At this point
>> > >> oslo.config does not need to know/care about encryption of its data
>> as there's no longer sensitive data stored.
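
A minimal sketch of the indirection Daniel describes -- the config file
carries only a secret's name, and the service resolves it at startup from a
separate store. The dict here is a stand-in for a real password manager, and
all names are hypothetical:

```python
# Hypothetical sketch: the config holds only a reference to a secret;
# the service looks the real value up from a secret store at startup.
SECRET_STORE = {"nova-db-password": "s3cret"}  # stand-in for a real manager

config = {"database": {"password_ref": "nova-db-password"}}

def resolve_secret(ref, store=SECRET_STORE):
    """Return the secret named by ``ref``; fail loudly if it is missing."""
    try:
        return store[ref]
    except KeyError:
        raise RuntimeError("secret %r not found in secret store" % ref)

password = resolve_secret(config["database"]["password_ref"])
print(password)  # s3cret
```

With this shape, oslo.config never sees sensitive data at all; rotating the
secret only touches the store, not the config file.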

Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-04-26 Thread Morgan Fainberg
On Tue, Apr 26, 2016 at 10:57 AM, Joshua Harlow 
wrote:

> Daniel P. Berrange wrote:
>
>> On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
>>
>>> Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
>>>
 Hello, oslo team

 For now, some sensitive options like password or token are configured as
 plaintext, anyone who has the priviledge to read the configure file can
 get
 the real password, this may be a security problem that can't be
 unacceptable for some people.

 So the first solution comes to my mind is to encrypt these options when
 configuring them and decrypt them when reading them in oslo.config.
 This is
 a bit like apache/openldap did, but the difference is these softwares
 do a
 salt hash to the password, this is a one-way encryption that can't be
 decrypted, these softwares can recognize the hashed value. But if we do
 this work in oslo.config, for example the admin_password in
 keystone_middleware section, we must feed the keystone with the
 plaintext
 password which will be hashed in keystone to compare with the stored
 hashed
 password, thus the encryped value in oslo.config must be decryped to
 plaintext. So we should encrypt these options using symmetrical or
 unsymmetrical method with a key, and put the key in a well secured
 place,
 and decrypt them using the same key when reading them.

 Of course, this feature should be default closed. Any ideas?

>>> Managing the encryption keys has always been the issue blocking
>>> implementing this feature when it has come up in the past. We can't have
>>> oslo.config rely on a separate OpenStack service for key management,
>>> because presumably that service would want to use oslo.config and then
>>> we have a dependency cycle.
>>>
>>> So, we need a design that lets us securely manage those encryption keys
>>> before we consider adding encryption. If we solve that, it's then
>>> probably simpler to encrypt an entire config file instead of worrying
>>> about encrypting individual values (something like how ansible vault
>>> works).
>>>
>>
>> IMHO encrypting oslo config files is addressing the wrong problem.
>> Rather than having sensitive passwords stored in the main config
>> files, we should have them stored completely separately by a secure
>> password manager of some kind. The config file would then merely
>> contain the name or uuid of an entry in the password manager. The
>> service (eg nova-compute) would then query that password manager
>> to get the actual sensitive password data it requires. At this point
>> oslo.config does not need to know/care about encryption of its data
>> as there's no longer sensitive data stored.
>>
>
> That reminds me of the internals of the keyring library that some of the
> openstack clients used to use. It would integrate with mac os keychain,
> linux secret services and things like kwallet and windows secret services
> via its abstractions (code @
> https://github.com/jaraco/keyring/tree/9.0/keyring/backends); perhaps
> oslo.config could integrate with that library (and overcome the issues that
> the python-*-client libraries had with using it/ejecting support for it).
>
> There are probably other similar libraries with similar backends that
> could be used also (but that's the one I know about). A pluggable backend
> might be nice since then u can integrate with your own secret service (for
> example I know yahoo has there own similar service).
>
>
We should not integrate with keyring. It has caused a large number of
issues and has a troublesome dependency chain (dependencies that tend to
change in weird ways). We should/could use something else.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-04-26 Thread Morgan Fainberg
On Tue, Apr 26, 2016 at 9:24 AM, Jordan Pittier 
wrote:

>
>
> On Tue, Apr 26, 2016 at 3:32 PM, Daniel P. Berrange 
> wrote:
>
>> On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
>> > Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
>> > > Hello, oslo team
>> > >
>> > > For now, some sensitive options like password or token are configured
>> as
>> > > plaintext, anyone who has the priviledge to read the configure file
>> can get
>> > > the real password, this may be a security problem that can't be
>> > > unacceptable for some people.
>>
> It's not a security problem if your config files have the proper
> permissions.
>
>
>> > >
>> > > So the first solution comes to my mind is to encrypt these options
>> when
>> > > configuring them and decrypt them when reading them in oslo.config.
>> This is
>> > > a bit like apache/openldap did, but the difference is these softwares
>> do a
>> > > salt hash to the password, this is a one-way encryption that can't be
>> > > decrypted, these softwares can recognize the hashed value. But if we
>> do
>> > > this work in oslo.config, for example the admin_password in
>> > > keystone_middleware section, we must feed the keystone with the
>> plaintext
>> > > password which will be hashed in keystone to compare with the stored
>> hashed
>> > > password, thus the encryped value in oslo.config must be decryped to
>> > > plaintext. So we should encrypt these options using symmetrical or
>> > > unsymmetrical method with a key, and put the key in a well secured
>> place,
>> > > and decrypt them using the same key when reading them.
>>
> The issue here is to find a "well secured place". We should not only move
> the problem somewhere else.
>
>
>> > >
>> > > Of course, this feature should be default closed. Any ideas?
>> >
>> > Managing the encryption keys has always been the issue blocking
>> > implementing this feature when it has come up in the past. We can't have
>> > oslo.config rely on a separate OpenStack service for key management,
>> > because presumably that service would want to use oslo.config and then
>> > we have a dependency cycle.
>> >
>> > So, we need a design that lets us securely manage those encryption keys
>> > before we consider adding encryption. If we solve that, it's then
>> > probably simpler to encrypt an entire config file instead of worrying
>> > about encrypting individual values (something like how ansible vault
>> > works).
>>
>> IMHO encrypting oslo config files is addressing the wrong problem.
>> Rather than having sensitive passwords stored in the main config
>> files, we should have them stored completely separately by a secure
>> password manager of some kind. The config file would then merely
>> contain the name or uuid of an entry in the password manager. The
>> service (eg nova-compute) would then query that password manager
>> to get the actual sensitive password data it requires. At this point
>> oslo.config does not need to know/care about encryption of its data
>> as there's no longer sensitive data stored.
>>
> This looks complicated. I like text files that I can quickly view and
> edit, if I am authorized to (through good old plain Linux permissions).
>
>
>>
>> Regards,
>> Daniel
>>
>
oslo.config already supports multiple configuration files. As long as the
configuration sections are appropriately combined (they should be? if not
there is a gap), we can rely on that feature to handle the split between
"secure" options and "general options". I am strongly against encrypting
the whole file (it doesn't really solve the problem that well). There is a
history of having "secure" files and "generally viewable" config files
(prior art, such as gerrit having a "general" config and a "secure" config)
in many deployments. Decrypting the "secure" file could also be handled by
the startup scripts (systemd is *very* good at ordering operations and
waiting for signals); a ramdisk plus proper POSIX (and SELinux) attributes
can limit concerns about access to the "secure" file, and a reboot or
systemd stopping the service can also clear the sensitive data. Expect that
the data is ultimately readable via direct memory access if the standard
"best" practices are insufficient.

There was already talk of having oslo.config supporting an "external store"
(direct access to hiera or similar?) for certain options. That may be
significantly better (and ultimately more controlled) than trying to wedge
encryption into configuration files.
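
The multiple-file merge Morgan relies on can be illustrated with stdlib
`configparser` as a stand-in for oslo.config (the section and option names
below are only examples, not a real service's config):

```python
import configparser

# A "general" file and a "secure" file share a section; reading the
# secure source second merges its options into the same section, which
# is the split-file behaviour described above.
general = """\
[keystone_authtoken]
auth_url = http://keystone:5000
username = nova
"""
secure = """\
[keystone_authtoken]
password = s3cret
"""

conf = configparser.ConfigParser()
conf.read_string(general)
conf.read_string(secure)

print(conf["keystone_authtoken"]["username"])  # nova
print(conf["keystone_authtoken"]["password"])  # s3cret
```

Only the "secure" file then needs restrictive permissions (or decryption at
service start); the general file stays freely viewable.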


Re: [openstack-dev] [nova] Distributed Database

2016-04-22 Thread Morgan Fainberg
On Fri, Apr 22, 2016 at 2:57 PM, Dan Smith  wrote:

> > However I do want to point out that cells v2 is not just about dealing
> > with scale in the database. The message queue is another consideration,
> > and as far as I know there is not an analog to the "distributed
> > database" option available for the persistence layer.
>
> Yeah, it's actually *mostly* about the messaging layer. In fact, if you
> don't want to fragment the database, I'm not sure you have to. Thinking
> with my Friday brain, I don't actually think it would be a problem if
> you configured all the cells (including the cemetery cell) to use the
> same actual database, but different queues. Similarly, you should be
> able to merge and split cells pretty easily if it's convenient for
> whatever reason.
>
>
This would be a very interesting direction to explore. Focus on the pain
points of the message queue and then look at addressing the beast that is
the database layer separately. I am going to toss support in behind a lot
of what has been said in this thread already. But I really wanted to voice
my support for exploring this option if/when we have a bit of time. Not
fragmenting the DB unless it's really needed is a good approach.

With that all said... I wouldn't want to derail work simply because of a
"nice to have" option.
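
The layout Dan suggests -- one shared database, a message queue per cell --
can be sketched abstractly (the class and connection strings here are
hypothetical illustrations, not Nova's actual cells v2 API):

```python
# Abstract model of "same database, different queues": every cell points
# at the one shared DB while keeping its own transport (queue) URL.
class Cell:
    def __init__(self, name, database_connection, transport_url):
        self.name = name
        self.database_connection = database_connection
        self.transport_url = transport_url

SHARED_DB = "mysql+pymysql://nova@db/nova"

cells = [
    Cell("cell1", SHARED_DB, "rabbit://mq1:5672/"),
    Cell("cell2", SHARED_DB, "rabbit://mq2:5672/"),
]

# One distinct DB connection across all cells, but one queue per cell.
print(len({c.database_connection for c in cells}))  # 1
print(len({c.transport_url for c in cells}))        # 2
```

Merging or splitting cells in this model only means repointing transport
URLs, which matches the flexibility described above.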


> > Additionally with v1 we found that deployers have enjoyed being able to
> > group their hardware with cells. Baremetal goes in this cell, SSD filled
> > computes over here, and spinning disks over there. And beyond that
> > there's the ability to create a cell, fill it with hardware, and then
> > test it without plugging it up to the production API. Cells provides an
> > entry point for poking at things that isn't available without it.
>
> People ask me about this all the time. I think it's a bit of a false
> impression of what it's for, but the ability to stand up a chunk of new
> functionality with a temporary API and then merge it into the larger
> deployment is something people seem to like.
>
> --Dan
>
>
Interesting. I never considered cells to work like a POC environment.

--Morgan


Re: [openstack-dev] Summit Core Party after Austin

2016-04-22 Thread Morgan Fainberg
On Fri, Apr 22, 2016 at 2:58 PM, Tom Fifield  wrote:

> Hi all,
>
> On 22/04/16 16:40, Clint Byrum wrote:
>
>> But in the mean time, maybe we can just send this message
>> to party planners: Provide us with interesting spaces to converse and
>> bond in, and we will be happier.
>>
>
> Spoke with the party planners and got the inside gossip :)
>
> The good news: for the Austin summit party, those who are looking for a
> quieter space will be served as well as those who enjoy live music.
>
> ProTip: http://stackcityaustin.openstack.org/map.php
>
> For the quieter space, G'Raj Mahal, Parlor Room and L'Estelle are the ones
> you want. Within these venues, the noise will come from your voices, rather
> than an amplified band.
>
>
>
Fantastic tips! Thanks Tom!


>
> Regards,
>
>
>
> Tom
>
>


Re: [openstack-dev] [release] release hiatus

2016-04-21 Thread Morgan Fainberg
Safe travels! See you in Austin.

On Thu, Apr 21, 2016 at 4:22 PM, Tony Breeds 
wrote:

> On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> > The release team is preparing for and traveling to the summit, just as
> > many of you are. With that in mind, we are going to hold off on
> > releasing anything until 2 May, unless there is some sort of critical
> > issue or gate blockage. Please feel free to submit release requests to
> > openstack/releases, but we'll only plan on processing any that indicate
> > critical issues in the commit messages.
>
> What's you preferred way to indicating to the release team that something
> is
> urgent?
>
> There's always the post review jump on IRC and ping y'all.  Just wondering
> if
> you have a preference for something else.
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Morgan Fainberg
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck 
wrote:

> Hey everyone-
>
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent has
> drastically increased the # of cores, and the upcoming summit format change
> creates quite a bit of uncertainty on everything surrounding the summit.
>
> Furthermore, the existence of the Core party has been... contentious. Some
> believe it's exclusionary, others think it's inappropriate, yet others
> think it's a good way to thank those of use who agree to be constantly
> pestered for code reviews.
>
> I'm writing this message for two reasons - mostly, to kick off a
> discussion on whether the party is worthwhile. Secondly, to signal to other
> organizations that this promotional opportunity is available.
>
> Personally, I appreciate being thanked for my work. I do not necessarily
> need to be thanked in this fashion, however as the past venues have been
> far more subdued than the Tuesday night events (think cocktail party), it's
> a welcome mid-week respite for this overwhelmed little introvert. I don't
> want to see it go, but I will understand if it does.
>
> Some numbers, for those who like them (Thanks to Mark Atwood for providing
> them):
>
> Total repos: 1010
> Total approvers: 1085
> Repos for official teams: 566
> OpenStack repo approvers: 717
> Repos under release management: 90
> Managed release repo approvers: 281
>
> Michael
>

Personally, I am in the camp that the core party is something that should
not be continued because of its somewhat exclusionary aspects. I don't
have an alternative in mind to fill the space of the party. I often find
that given an open/free night, something a bit more subdued (even than the
core party) tends to occur, just in a number of smaller groups. Having a
further respite from the speed/intensity of the summit (an open night with
even fewer plans!) would be welcome to many of us.

Also consider the split-summit proposal and whether the core party really
has a place in the newly proposed summit/project-gathering world.

Cheers,
--Morgan


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-20 Thread Morgan Fainberg
On Wed, Apr 13, 2016 at 6:07 AM, David Stanek  wrote:

> On Wed, Apr 13, 2016 at 3:26 AM koshiya maho 
> wrote:
>
>>
>> My request to all keystone cores to give their suggestions about the same.
>>
>>
> I'll test this a little and see if I can see how it breaks.
>
> Overall I'm not really a fan of this design. It's just a hack to add
> attributes where they don't belong. Long term I think this will be hard to
> maintain.
>
>
>
If we want to return a response object we should return a response object.
Returning a magic list with attributes (or a dict with attributes, etc)
feels very, very wrong.

I'm not going to block this design, but I wish we had something a bit
better.
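
For illustration, the two shapes being debated might look roughly like this
(the class names are hypothetical, not the actual python-keystoneclient
code):

```python
# The "magic list" hack: a list subclass that smuggles request-id
# metadata onto the return value via an extra attribute.
class RequestIdList(list):
    def __init__(self, iterable=(), request_ids=None):
        super().__init__(iterable)
        self.request_ids = request_ids or []

# The explicit alternative: a response object carrying data plus metadata.
class Response:
    def __init__(self, data, request_ids):
        self.data = data
        self.request_ids = request_ids

users = RequestIdList(["alice", "bob"], request_ids=["req-123"])
resp = Response(["alice", "bob"], request_ids=["req-123"])

print(users == ["alice", "bob"])  # True -- still behaves like a list
print(users.request_ids)          # ['req-123']
```

The magic list keeps existing callers working unchanged (it still compares
and iterates like a plain list), which is its appeal; the response object is
honest about what is returned but breaks every caller that expected a list.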

--Morgan

