Re: [openstack-dev] [keystone] [keystoneauth] Debug data isn't sanitized - bug 1638978

2017-10-03 Thread Jamie Lennox
Another option: pass log=False, which we currently do for all the auth
requests. This prevents debug-printing the body at all, so the con is that
by default you can't see that message at all. But it's there because I never
wanted to get into masking individual services' secrets like this.

On 29 Sep. 2017 11:49 pm, "Lance Bragstad"  wrote:

>
>
> On 09/27/2017 06:38 AM, Bhor, Dinesh wrote:
>
> Hi Team,
>
>
>
> There are four solutions to fix the below bug:
>
> https://bugs.launchpad.net/keystoneauth/+bug/1638978
>
>
>
> 1) Carry a copy of mask_password() method to keystoneauth from oslo_utils
> [1]:
>
> *Pros:*
>
> A. keystoneauth will use an already tested and widely used version of
> mask_password.
>
>
>
> *Cons:*
>
> A. keystoneauth will have to keep its version of the mask_password() method
> in sync with the oslo_utils version.
>
>  If any new "_SANITIZE_KEYS" are added to the oslo_utils
> mask_password, they must be added to keystoneauth's mask_password as well.
>
> B. Copying "mask_password" will also require copying its supporting
> code [2], which is substantial.
>
>
>
>
> I'm having flashbacks of the oslo-incubator days...
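For reference, oslo.utils implements mask_password as a set of regular
expressions over known secret key names. A minimal stdlib-only sketch of the
idea (illustrative; the real oslo implementation covers far more key and
quoting formats):

```python
import re

# Key names to sanitize; oslo.utils' real _SANITIZE_KEYS list is much longer.
_SANITIZE_KEYS = ('adminPass', 'admin_pass', 'password', 'auth_token')

# One pattern per key: match  key': 'value'  or  key = "value"  forms.
_PATTERNS = [
    re.compile(r"(%s['\"]?\s*[:=]\s*['\"])[^'\"]*(['\"])" % key)
    for key in _SANITIZE_KEYS
]

def mask_password(message, secret='***'):
    """Replace secret values in a log message with a mask."""
    for pattern in _PATTERNS:
        message = pattern.sub(r'\g<1>%s\g<2>' % secret, message)
    return message

print(mask_password("'adminPass': 'Now you see me'"))
# 'adminPass': '***'
```

The supporting code referred to in [2] additionally handles dicts, byte
strings, and more serialization formats, which is why copying it wholesale is
unattractive.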
>
>
>
> 2) Use Oslo.utils mask_password() method in keystoneauth:
>
> *Pros:*
>
> A) No syncing issue as described in solution #1. keystoneauth will
> directly use the mask_password() method from oslo.utils.
>
>
>
> *Cons:*
>
> A) You will need oslo.utils library to use keystoneauth.
>
> Objection by community:
>
> - The keystoneauth community doesn't want any dependency on OpenStack's
> common oslo libraries.
>
> Please refer to the comment from Morgan: https://bugs.launchpad.net/
> keystoneauth/+bug/1700751/comments/3
>
>
>
>
>
> 3) Add a custom logging filter in oslo logger
>
> Please refer to POC sample here: http://paste.openstack.org/show/617093/
>
> OpenStack core services using any individual python-*client (e.g.
> python-cinderclient used in the nova service) will need to pass an
> oslo_logger object during its initialization, which will do the work of
> masking sensitive information.
>
> Note: In nova, an oslo.logger object is not passed during cinder client
> initialization (https://github.com/openstack/nova/blob/master/nova/volume/
> cinder.py#L135-L141), so in that case sensitive information will not be
> masked, as it isn't using oslo.logger.
>
>
>
> *Pros:*
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>
>
> *Cons:*
>
> A) Every log message will be scanned for certain password fields, degrading
> performance.
>
> B) If a consumer of keystoneauth doesn't use oslo_logger, then sensitive
> information will not be masked.
>
> C) Changes will be needed wherever applicable in the OpenStack core
> services to pass an oslo.logger object during python-novaclient initialization.
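The filter approach from the POC paste above can be sketched with the stdlib
alone; the names and the single regex here are illustrative, not the actual
POC code:

```python
import logging
import re

# Illustrative pattern; a real filter would reuse oslo.utils' key list.
_PASSWORD_RE = re.compile(
    r"(password['\"]?\s*[:=]\s*['\"])[^'\"]*(['\"])", re.IGNORECASE)

class MaskPasswordFilter(logging.Filter):
    """Mask password-like values in every record.

    Con A above is visible here: the regex runs on every log call,
    whether or not the message contains a secret.
    """

    def filter(self, record):
        record.msg = _PASSWORD_RE.sub(r'\g<1>***\g<2>', str(record.msg))
        return True

log = logging.getLogger('demo')
log.addFilter(MaskPasswordFilter())
log.warning("request body: {'password': 'hunter2'}")  # emitted with '***'
```

Attaching the filter to the oslo logger object is what lets services get
masking without code changes, but only for messages routed through that
logger, hence con B.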
>
>
>
>
>
> 4) Add mask_password formatter parameter in oslo_log:
>
> Add a "mask_password" formatter option to sanitize sensitive data, passed
> as a keyword argument to the log statement. Sensitive information is then
> masked at logging time only when mask_password is set.
>
> A log statement would look like this:
>
>     logger.debug("'adminPass': 'Now you see me'", mask_password=True)
>
>
>
> Please refer to the POC code here: http://paste.openstack.org/show/618019/
>
>
>
> *Pros:  *
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>
>
> *Cons:*
>
> A) If a consumer of keystoneauth doesn't use oslo_logger, then sensitive
> information will not be masked.
>
> B) If you forget to pass mask_password=True for log messages that contain
> sensitive information, those fields won't be masked with ***.
>
>  But this can be clearly documented, as suggested by Morgan and Lance.
>
> C) This solution requires adding the check below in keystoneauth to avoid
> an exception being raised when the logger is a plain Python Logger, which
> doesn't accept the mask_password keyword argument.
>
>
>
> if isinstance(logger, logging.Logger):
>     logger.debug(' '.join(string_parts))
> else:
>     logger.debug(' '.join(string_parts), mask_password=True)
>
>
>
> This check assumes the logger instance is an oslo_log logger whenever it
> is not Python's default logging.Logger.
>
> The keystoneauth community is not ready to take a dependency on any oslo.*
> library, so this solution seems to have little chance of acceptance.
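The proposed mask_password keyword can be mimicked in plain Python with a
LoggerAdapter, which also shows why the isinstance() guard above is needed;
this is a hypothetical sketch, not oslo.log's actual implementation:

```python
import logging
import re

# Illustrative single-key pattern; oslo.utils covers many more keys.
_RE = re.compile(r"(adminPass['\"]?\s*:\s*['\"])[^'\"]*(['\"])")

class MaskingAdapter(logging.LoggerAdapter):
    """Accept a mask_password kwarg on each log call (hypothetical API)."""

    def process(self, msg, kwargs):
        # Pop our custom kwarg before it reaches the stdlib Logger,
        # which would otherwise raise TypeError on an unknown keyword.
        if kwargs.pop('mask_password', False):
            msg = _RE.sub(r'\g<1>***\g<2>', msg)
        return msg, kwargs

log = MaskingAdapter(logging.getLogger('demo'), {})
log.warning("'adminPass': 'Now you see me'", mask_password=True)
```

A plain logging.Logger has no such hook, hence the branch on isinstance()
in keystoneauth.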
>
>
> Options 2, 3, and 4 all require dependencies on oslo in order to work,
> which is a non-starter according to Morgan's comment in the bug [0].
> Options 3 and 4 will require a refactor to get keystoneauth to use oslo.log
> (today it uses the logging module from Python's standard library).
>
> [0] https://bugs.launchpad.net/keystoneauth/+bug/1700751/comments/3
>
>
>
> Please let me know your opinions about the above four approaches. Which
> one should we adopt?
>
>
>
> [1] 

[openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-02 Thread Jamie Lennox
Hi All,

I'm really sad to announce that I'll be leaving the OpenStack community (at
least for a while), I've accepted a new position unrelated to OpenStack
that'll begin in a few weeks, and am going to be mostly on holiday until
then.

I want to thank everyone I've had the pleasure of working with over the
last few years - but particularly the Keystone community. I feel we as a
team and I personally grew a lot over that time, we made some amazing
achievements, and I couldn't be prouder to have worked with all of you.

Obviously I'll be around at least during the night for some of the Sydney
summit and will catch up with some of you there, and hopefully see some of
you at linux.conf.au. To everyone else, thank you and I hope we'll meet
again.


Jamie Lennox, Stacker.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Mistral][Devstack] Confusion between auth_url and auth_uri in keystone middleware

2017-06-20 Thread Jamie Lennox
On 16 June 2017 at 00:44, Mikhail Fedosin  wrote:

> Thanks György!
>
> On Thu, Jun 15, 2017 at 1:55 PM, Gyorgy Szombathelyi  doclerholding.com> wrote:
>
>> Hi Mikhail,
>>
>> (I'm not from the Keystone team, but did some patches for using
>> keystonauth1).
>>
>> >
>> > 2. Even if auth_url is set, it can't be used later, because it is not
>> registered in
>> > oslo_config [5]
>>
>> auth_url is actually a dynamic parameter and depends on the keystone auth
>> plugin used
>> (auth_type=xxx). The plugin which needs this parameter registers it.
>>
>
> Based on this http://paste.openstack.org/show/612664/ I would say that
> the plugin doesn't register it :(
> It could either be a bug, or it was done intentionally; I don't know.
>
>
>>
>> >
>> > So I would like to get an advise from keystone team and understand what
>> I
>> > should do in such cases. Official documentation doesn't add clarity on
>> the
>> > matter because it recommends to use auth_uri in some cases and auth_url
>> in
>> > others.
>> > My suggestion is to add auth_url in the list of keystone authtoken
>> > middleware config options, so that the parameter can be used by the
>> others.
>>
>> Yepp, this causes some confusion, but adding auth_url will clash with
>> most (all?) authentication plugins. auth_url can be considered an
>> 'internal' option for the keystoneauth1 modules, not used by anything
>> else (like keystonemiddleware itself). However, if there were a more
>> elegant solution, I would also like to hear about it.
>>
>> >
>> > Best,
>> > Mike
>> >
>> Br,
>> György
>
>
> My final thought that we have to use both (auth_url and auth_uri) options
> in mistral config, which looks ugly, but necessary.
>
> Best,
> Mike
>
>
>
>
Hi,

I feel like the question has been answered in the thread, but as I'm
largely responsible for this I thought I'd pipe up here.

It's annoying and unfortunate that auth_uri and auth_url look so similar.
They've actually existed for some time side by side and ended up like that
out of evolution rather than any thought. Interestingly, the first result
for auth_uri in google is [1]. I'd be happy to rename it to something else
if we can agree on what.

Regarding your paste (and the reason I popped up), I would consider this a
bug in mistral. The auth options aren't registered with oslo.config until
just before the plugin is loaded, because the options may differ depending
on what you put in for auth_type. In practice pretty much every plugin has
an auth_url, but mistral shouldn't be assuming anything about the structure
of [keystone_authtoken]. That's the sole responsibility of
keystonemiddleware, and it does change over time.
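The split described here can be seen in a sample [keystone_authtoken]
section (the values below are examples only; the option names are the real
ones, with auth_url and friends registered only once a plugin is selected
via auth_type):

```ini
[keystone_authtoken]
# Registered by keystonemiddleware itself: the public identity endpoint
# advertised to unauthenticated clients.
auth_uri = http://keystone.example.com:5000

# Selecting the plugin is what registers the remaining options...
auth_type = password

# ...including auth_url, which the middleware uses to authenticate its
# own service user.
auth_url = http://keystone.example.com:5000
username = service-user
password = example-secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```

Since the set of options after auth_type depends on the plugin, other
projects reading [keystone_authtoken] directly (as mistral's paste shows)
will find auth_url unregistered until the middleware has loaded the plugin.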

Jamie


[1] https://adam.younglogic.com/2016/06/auth_uri-vs-auth_url/


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Jamie Lennox
On 27 February 2017 at 08:56, Sean Dague  wrote:

> We recently implemented a Nova feature validating that the project_ids
> used for quotas were real in keystone. After that merged, tripleo builds
> started to fail because their undercloud did not specify the 'identity'
> service as the unversioned endpoint.
>
> https://github.com/openstack/nova/blob/8b498ce199ac4acd94eb33a7f47c05
> ee0c743c34/nova/api/openstack/identity.py#L34-L36
> - (code merged in Nova).
>
> After some debug, it was clear that '/v2.0/v3/projects/...' was what was
> being called. And after lots of conferring in the Keystone room, we
> definitely made sure that the code in question was correct. The thing I
> wanted to do was make the failure more clear.
>
> The suggestion was made to use the following code approach:
>
> https://review.openstack.org/#/c/438049/6/nova/api/openstack/identity.py
>
> resp = sess.get('/projects/%s' % project_id,
>                 endpoint_filter={
>                     'service_type': 'identity',
>                     'version': (3, 0)
>                 },
>                 raise_exc=False)
>
>
> However, I tested that manually with an identity =>
> http:///v2.0 endpoint, and it passes. Which confused me.
>
> Until I found this -
> https://github.com/openstack/keystoneauth/blob/
> 3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/discover.py#L313
>
> keystonauth is specifically coding around the keystone transition from a
> versioned /v2.0 endpoint to an unversioned one.
>
>
> While that is good for the python ecosystem using it, it's actually
> *quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
> js, php), because it means that all other facilities need the same work
> around. I actually wonder if this is one of the in-the-field reasons why
> the transition from v2 -> v3 is going slowly. That's actually going to
> potentially break a lot of software.
>
> It feels like this whole discovery version hack bit should be removed -
> https://review.openstack.org/#/c/438483/. It also feels like a migration
> path for non python software in changing the catalog entries needs to be
> figured out as well.
>
> I think on the Nova side we need to go back to looking for bogus
> endpoint because we don't want issues like this hidden from us.
>
> -Sean


So I would completely agree: I would like to see this behaviour removed.
However, it was done very intentionally, and at the time it was written it
was needed.

This is one of a number of situations where keystoneauth tried its best to
paper over inconsistencies in OpenStack APIs, with varying degrees of
effectiveness; almost all the python clients were doing this. And whilst we
have slowly pushed the documentation and standard deployment procedures
towards unversioned URLs, maintaining this hack in keystoneauth meant we
didn't have to fix it individually for every client.

Where python and keystoneauth differ from every other language is that the
services themselves are written in python and use these libraries, so
inter-service communication had to continue to work throughout the
transition. You may remember the fun we had trying to change to v3 auth and
unversioned URLs in devstack? This hack is what made it possible at all. As
you say, this is extremely difficult for other languages, but there isn't a
solution for them whilst this transition is in place.

Anyway, a few cycles later we are in a different position, and a new service
such as the placement API can decide that it shouldn't work at all if the
catalog isn't configured as OpenStack advises. This is great! We can
effectively force deployments to transition to unversioned URLs. We can't
change the default behaviour in keystoneauth, but it should be relatively
easy to give you an adapter that doesn't do this. Probably something like
[1]. I also filed it as a bug, which links to this thread [2], but it could
otherwise do with some more detail.

Long story short: sorry, but it'll have to be a new flag. Yes, keystoneauth
is supposed to be a low-level request maker, but it is also trying to paper
over a number of historical bad decisions so that at the very least the user
experience is correct and we don't have clients re-inventing it themselves.
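The discovery workaround being discussed normalizes a versioned catalog URL
to its unversioned root before discovery runs; the effect can be sketched in
plain Python (illustrative only, not the actual keystoneauth code):

```python
from urllib.parse import urlsplit, urlunsplit

def unversioned_root(url):
    """Strip a trailing /v2.0 or /v3 segment, mimicking the effect of
    the keystoneauth discovery hack described in the thread."""
    parts = urlsplit(url.rstrip('/'))
    segments = parts.path.split('/')
    if segments and segments[-1] in ('v2.0', 'v3'):
        segments = segments[:-1]
    return urlunsplit(parts._replace(path='/'.join(segments)))

# A catalog entry pointing at the versioned v2.0 endpoint still resolves
# to the unversioned root, which is why Sean's manual test passed.
print(unversioned_root('http://keystone.example.com/v2.0'))
# http://keystone.example.com
```

This is exactly the behaviour that clients in other languages would have to
reimplement, which is Sean's complaint.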

[1] https://review.openstack.org/#/c/438788/
[2] https://bugs.launchpad.net/keystoneauth/+bug/1668484


Re: [openstack-dev] [keystone] new keystone core (breton)

2016-11-01 Thread Jamie Lennox
Congrats Boris, Great to have new people on board. Well earned.

On 1 November 2016 at 15:53, Brad Topol  wrote:

> Congratulations Boris!!! Very well deserved!!!
>
> --Brad
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
> From: Steve Martinelli 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 10/31/2016 03:51 PM
> Subject: [openstack-dev] [keystone] new keystone core (breton)
> --
>
>
>
> I want to welcome Boris Bobrov (breton) to the keystone core team. Boris
> has been a contributor for some time and is well respected by the keystone
> team for bringing real-world operator experience and feedback. He has
> recently become more active in terms of code contributions and bug
> triaging. Upon interacting with Boris, you quickly realize he has a high
> standard for quality and keeps us honest.
>
> Thanks for all your hard work Boris, I'm happy to have you on the team.
>
> Steve Martinelli
> stevemar


Re: [openstack-dev] [osc] bug/design flaw: Clients not cached per region

2016-10-17 Thread Jamie Lennox
On 18 October 2016 at 12:09, Dean Troyer  wrote:

> On Mon, Oct 17, 2016 at 5:29 PM, Adrian Turjak 
> wrote:
> > What I'm wondering is can the current client cache be changed to be keyed
> > off the client_manager.region_name? That way the change is only in how
> the
> > clients are built and the code elsewhere doesn't change unless it
> actually
> > does something by manually changing region_name. This would then be a
> change
> > that would go unnoticed outside of the clientmanger and simply add a new
> > feature.
> >
> > Actually, I got distracted while writing this email and wrote a patch:
> > https://review.openstack.org/#/c/387696
> >
> > Using the test command in my first email, this works. It should simply
> work
> > with all existing cases, but the test suite should confirm that first of
> > course.
> >
> > With that change anyone can easily work exactly as before (region_name
> will
> > be set to your default region), and new features that are multi-region
> can
> > be introduced without any pain provided they know to update
> > client_manager.region_name.
>
> This is where I have a problem with this approach.  Those are
> descriptors, and make_client() is only called at first reference.  Any
> given command cannot assume whether it will be the first one to
> initialize the client.  Changing region_name like that is not going to
> reset the descriptor; that would now become a manual call.
>
> For the minimialist case of a CLI re-loading for each command issued
> (the common case it seems) this is less of an issue.  But for any
> longer-running invocation, such as interactive mode or a non-cli
> consumer of osc-lib, this quickly gets to be sub-optimal.  Keeping a
> client dict keyed off region name allows you to keep all of those
> clients (instantiated only as needed/used) around as long as you need
> them and not require re-creating them.
>
> There is also an interaction with the requests session cache that I do
> not think will be a problem, but have not yet thought through all the
> way here that we should consider.
>
> > I have been following this a little and it does sound interesting. Am
> > curious what solution is found for this. Can plugins overwrite existing
> > commands? That way if someone wanted a server create with their own
> features
> > they just make a plugin that replaces the standard command. While a bit
> of a
> > blunt solution, it does seem like the simplest, although people need to
> be
> > aware when installing plugins that they can replace/overwrite default
> > commands and be careful to install only trusted plugins.
>
> Currently there is _no_ checking done WRT the command namespaces, any
> plugin can happily stomp on any other command.  The results are
> officially undefined, mostly because the load order via setuptools is
> not deterministic or assured.  My server create plugin works, but we
> can not assure that will always be the case, which is why this is not
> released yet.
>
> The next plugin interface revision will have a notion of registering
> its commands so we can be more deliberate with the call orders and
> also collision checking.  There also needs to be some way to sanity
> check that not just any old thing that you might not be aware of is
> hooking your commands.
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>

A comment from the cheap seats: almost all clients are using a
keystoneauth1 session at this point, and that's where your authentication
information is cached. There is essentially no cost to creating a client
with an existing session, as auth happens on demand.

The region_name is not part of the authentication request; it's used to
look up the endpoint and so is passed at Client creation.

Given this, maybe there is no longer any value in a ClientCache? It was
mostly useful to stop clients doing dumb things and to share auth amongst
them. So long as the session/auth is created and saved once, a client can be
created per use/request with this information (including region) with no
real performance impact.
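The point above, that a per-region client is cheap once the session exists,
can be sketched with stand-ins (FakeSession and FakeClient are hypothetical
stand-ins for keystoneauth1's Session and a python-*client class):

```python
class FakeSession:
    """Stand-in for keystoneauth1.session.Session: holds the auth
    plugin, and caches the token the first time it is needed."""

class FakeClient:
    """Stand-in for a python-*client: region_name only affects which
    endpoint is looked up in the catalog, not authentication."""

    def __init__(self, session, region_name):
        self.session = session
        self.region_name = region_name

session = FakeSession()  # created and saved once

def make_client(region_name):
    # No client cache needed: construction does not authenticate.
    return FakeClient(session, region_name)

syd = make_client('SYD')
mel = make_client('MEL')
assert syd.session is mel.session  # one shared auth across regions
```

Keying a cache by region (as in the review above) still avoids repeated
object construction in long-running processes, but the auth round-trips the
cache originally protected against are already amortized by the session.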


Jamie


Re: [openstack-dev] [keystone] keystoneclient.client.v3.Client: extract identity endpoint

2016-10-13 Thread Jamie Lennox
On 13 October 2016 at 23:19, Johannes Grassler  wrote:

> Hello,
>
> I've got an existing keystoneclient.client.v3.Client object with an
> authenticated session. Now I'd like to get the identity URL this
> object uses for requesting things from Keystone. I want to use that
> URL in a trust's endpoint list in order to allow the user the client
> is authenticated as to talk to Keystone on the trustor's behalf.
>
> The client is authenticated as a service user and issues a GET to
>
>GET http://192.168.123.20/identity_admin/v3/OS-TRUST/trusts
>
> when the following code snippet is executed:
>
>   client.trusts.list()
>
> (`client` is my keystoneclient.client.v3.Client instance).
>
> Initially I thought I could use the auth_url from the client's
> session object, i.e.
>
>   client.session.auth.auth_url
>
> but that turned out to be a dead end because it's the internal
> endpoint:
>
>   http://192.168.123.20/identity/v3
>
> This will be useless for a trust's endpoint URL list if the
> trustee (my service user) ends up using
>
>   http://192.168.123.20/identity_admin/v3
>
> to talk to Keystone. I could look up the admin URL from the catalog
> like this...
>
>   keystone_service=client.services.list(type='identity')[0]
>   client.endpoints.list(service=keystone_service,
> interface='admin',
> region=client.region_name)
>
> ...but that feels rather dirty since it independently looks up the
> admin endpoint rather than plucking the identity endpoint from the
> keystone client instance. Is there a cleaner way to get that
> information directly from the keystoneclient.client.v3.Client
> instance?
>
> Cheers,
>
> Johannes
>

So this is one of those times where keystoneclient is really no different
from the other clients: it just uses the session you gave it to do the
right thing.

From the session you can do:

    session.get_endpoint(service_type='identity', interface='admin',
                         region_name=client.region_name)

to get the URL from the catalog.



> --
> Johannes Grassler, Cloud Developer
> SUSE Linux GmbH, HRB 21284 (AG Nürnberg)
> GF: Felix Imendörffer, Jane Smithard, Graham Norton
> Maxfeldstr. 5, 90409 Nürnberg, Germany
>


Re: [openstack-dev] requests-mock

2016-10-11 Thread Jamie Lennox
On 11 October 2016 at 16:23, Clay Gerrard  wrote:

> Greetings!
>
> Anyone have any experience to share positive or negative using
> requests-mock?  I see it's been used to replace another dependency that had
> some problems in many of the OpenStack python client libraries:
>
> Added to global requirements -> https://review.openstack.org/#/c/104067
> Added to novaclient -> https://review.openstack.org/#/c/112179/
> Added to cinderclient -> https://review.openstack.org/#/c/106665/
> Added to keystoneclient -> https://review.openstack.org/#/c/106659/
>
> But I'm not sure how folks other than Jamie are getting on with it?  When
> writing new tests do you tend to instinctively grab for requests-mock - or
> do you mostly not notice it's there?  Any unexpected behaviors ever have
> you checking it out with bzr and reading over the source?  Do you recall
> ever having any bumps or bruises in the gate or in your development
> workflow because of a new release from requests-mock?  No presumed fault on
> Jamie!  It seems like he's doing a Herculean one-man job there; but it can
> be difficult go it alone:
>
> https://bugs.launchpad.net/requests-mock/+bug/1616690
>
> It looks like the gate on this project is configured to run nova &
> keystone client tests; so that may be sufficient to catch any sort of issue
> that might come up in something that depends on it?  Presumably once he
> lands a change, he updates global-requirements and then all of
> OpenStack gets it from there?
>
> I ask of course because I really don't understand how this works [1] :D
>
> But anyway - Jamie was kind enough to offer to refactor some tests for us
> - but in the process seems to require to bring in these new dependencies -
> so I'm trying to evaluate if I can recommend requiring this code in order
> to develop on swiftclient [2].
>
> Any feedback is greatly appreciated!
>
> -Clay
>
> 1. As you may know (thanks to some recently publicity) swift & swiftclient
> joined OpenStack in the time of dinosaurs with a general policy of trying
> to keep dependencies to a *minimum* - but then one day the policy changed
> to... *always* add dependencies whenever possible?  j/k I'm not actually
> sure what the OpenStack policy is on dependency hygiene :D  Anyway, I can't
> say *exactly* where that "general policy" came from originally?  Presumably
> crieht or gholt just had some first-hand experience that the dependencies
> you choose to add have a HUGE impact on your project over its lifetime -
> or read something from Joel on Software - http://www.joelonsoftware.com/
> articles/fog07.html - or traveled into the future and read the
> "go proverbs" and google'd "npm breaks internet, again".  Of course they've
> since moved on from OpenStack but the general idea is still something that
> new contributors to swift & swiftclient get acclimated to and the circle of
> confusion continues https://github.com/openstack/swift/blob/master/
> CONTRIBUTING.rst#swift-design-principles - but hey!  maybe I can educate
> myself about the OpenStack policy/process; add this dependency and maybe
> the next one too; then someday break the cycle!?!?
>
> 2. https://review.openstack.org/#/c/360298
>
>
>
>
Clay,

So I'm not going to comment too much on the quality of the library, as I
obviously think it's good (and AFAIK we've never broken an API).

The only thing I'd like to clarify is that I never come to clients with the
intent of converting tests to requests-mock just to promote the library.
I'm probably the main person behind most of the keystoneauth work, and one
of the biggest problems I've had with trying to convert projects to work
with keystoneauth (and keystoneclient before it) is that mocking out the
requests.request function, or the whole keystoneclient Client as
swiftclient does, has made the unit tests very strictly locked to the
current way things are done. By converting the clients to a request-level
mock I've been able to change the way the clients authenticate whilst
proving there are no changes in the requests made.

The same is true of swiftclient, though it's a longer and slower process
there, as I've discussed with people in the past, and I would like to try
again at summit. Patches [1][2] exist to bring some keystoneauth support to
swiftclient (which would make the hacking I'm currently trying to do on
glance_store much easier), but ideally I would like to go further, because
keystoneauth has existing, proven support for things like re-authenticating
a token on 401, rather than the current somewhat hacky method; that would
greatly simplify the interaction between the OpenStack services and swift.

Swiftclient is 

Re: [openstack-dev] [cue][qa] Status of OpenStack CI jobs

2016-09-16 Thread Jamie Lennox
Cue failing on oslo.context was noticed by the oslo gate jobs a while ago.
These patches [1][2][3] were put up to cue on 2016/07/22 to fix the issue
and clean up the context usage, and have not seen a review.


[1] https://review.openstack.org/#/c/345693
[2] https://review.openstack.org/#/c/345694
[3] https://review.openstack.org/#/c/345695


On 17 September 2016 at 11:09, Ken'ichi Ohmichi 
wrote:

> Hi Graham,
>
> Thanks for your response :-)
> Yeah, you are right.
> That seems the same as a designate bug report:
> https://bugs.launchpad.net/designate/+bug/1603036
> Hope this helps the gate jobs.
>
> Thanks
> Ken Ohmichi
>
> ---
>
>
> 2016-09-15 6:21 GMT-07:00 Hayes, Graham :
> > On 15/09/2016 00:12, Ken'ichi Ohmichi wrote:
> >> Hi Cue-team,
> >>
> >> As http://status.openstack.org/openstack-health/#/ shows, the cue gate
> >> jobs continue failing 100%.
> >> What is the current status of development?
> >> Hopefully the job will become stable for smooth development.
> >>
> >> Thanks
> >> Ken Ohmichi
> >
> > I am not sure what the status of development is, but it looks like they
> > hit the same issue some of us did with oslo.context 2.6.0 (when the
> > output of to_dict() changed).


Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update to positional to 1.1.1

2016-09-12 Thread Jamie Lennox
On 8 September 2016 at 11:29, Tony Breeds  wrote:

> On Wed, Sep 07, 2016 at 08:10:45AM -0500, Matthew Thode wrote:
> > https://review.openstack.org/366631
> >
> > The combination of oslo.context 2.9.0 + positional 1.0.1 (which is the
> > current minimum requirement) results in various unit test failures in
> > barbican, related to parsing of request headers in generated contexts
> > for unit testing.  Updating to 1.1.1 resolves this issue.
>
> I'd really like to get to the bottom of exactly what these failures are
> and how
> they can be fixed.  I'd ask why we didn't catch them sooner but that boils
> down
> to us not actually testing our lower-bounds.
>
> https://bugs.launchpad.net/oslo.context/+bug/1620963 indicates that
> they're
> unit test failures but elsewhere I saw functional tests mentioned.  Have we
> uncovered a real issue or a testing defect?
>
> > This is specifically affecting barbican and RDO testing (from discussion
> > and the review).
> >
> > The reason I think an FFE is needed is because downstream packagers,
> > while encouraged to package based on upper-constraints sometimes don't.
> > Meaning they'd miss something like this.
>
> Yeah, it's complex.  We've stated several times that this is the contract
> we make with downstream: we test with u-c and that's our recommendation
> for packaging.  While I agree that we should test our minimums, that's
> not a thing we can do right now[1]
>
> I agree that it would be wrong to state our minimum is 1.0.1 when we
> *know* that it's 1.1.1; I'm just not convinced we know that yet.
>
> > Arguments against are that this will have knock-on effects down the line
> > (it will require re-releases and re-re-releases because of updating things
> > like keystone (this is deep in the depgraph)), so it is bad from a release
> > team work point of view.  Also, I think this just affects testing, so
> > the impact of this is more minor than something more serious (not JUST
> > breaking testing).
>
> Here's my summary of the changes needed to bump the minimum[2]
>
> Package  : positional [positional>=1.0.1] (used by 4 projects)
> Re-Release   : 4 projects
> openstack/keystoneauth[type:library]
> openstack/keystonemiddleware  [type:library]
> openstack/oslo.context[type:library]
> openstack/python-keystoneclient   [type:library]
>
> Each of those 4 libraries have stable/newton branches that only contain
> updates to .gitreview
> origin/master : keystoneauth1===2.12.1
> origin/master : keystonemiddleware===4.9.0
> origin/master : oslo.context===2.9.0
> origin/master : python-keystoneclient===3.5.0
>
> So if we take the g-r change we'd need to release
>
> keystoneauth1===2.13.0
> keystonemiddleware===4.10.0
> oslo.context===2.10.0
> python-keystoneclient===3.6.0
>
> All of which would be accepted by global-requirements
>
> I know during the requirements meeting I said I was worried about
> secondary release effects, but if I follow correctly they'll be minimal.
>
> I guess that's a long way of saying we need someone who knows about
> oslo.context, and hopefully barbican, to look at this.
>
> Yours Tony.
>
> [1] I just had an idea for a crazy hack that might kinda work to generate
> lower-constraints.txt; something to look at for Ocata :)
> [2] If we don't do this before we branch stable/newton then we *can't* do
> it.
>

So the major difference between positional 1.0 and 1.1 is that we swapped
from using a @functools.wraps decorator to a @wrapt decorator. The reason
for this is that @functools.wraps basically screws up any inspection of the
function signature.
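A minimal sketch (not positional's actual code) of the inspection problem:
a @functools.wraps wrapper copies the name and docstring, but the wrapper
itself still has the generic (*args, **kwargs) signature, which is what
naive introspection sees.

```python
import functools
import inspect


def log_call(func):
    """A typical decorator written with functools.wraps."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@log_call
def create_user(name, domain=None, enabled=True):
    return name


# The name (and docstring, module, etc.) are copied onto the wrapper...
assert create_user.__name__ == 'create_user'

# ...but the wrapper's own signature is still the generic one, which is
# what anything inspecting the function object directly ends up seeing:
assert str(inspect.signature(create_user, follow_wrapped=False)) == '(*args, **kwargs)'

# Python 3's inspect can follow __wrapped__ to recover the real signature;
# a wrapt decorator instead builds a transparent proxy, so even direct
# inspection of the decorated function reports the right parameters.
assert str(inspect.signature(create_user)) == '(name, domain=None, enabled=True)'
```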

Barbican failing on this difference means it's inspecting
oslo.context.RequestContext [1], and it looks like it's doing this so it can
tell the difference between before and after oslo.context 2.2. Given we're
at 2.9 in minimum requirements we can just remove this and all should be OK.

Patch: https://review.openstack.org/#/c/369092/


[1]
http://git.openstack.org/cgit/openstack/barbican/tree/barbican/context.py?h=3.0.0.0b3#n43




Re: [openstack-dev] [keystone] new core reviewer (rderose)

2016-09-07 Thread Jamie Lennox
Congratulations Ron. Welcome aboard.

On 3 September 2016 at 04:13, Brad Topol  wrote:

> Shh! Let them get the leg irons welded shut on him first :-). Pay no
> attention Ron... Congratulations!
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
> From: Morgan Fainberg 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 09/02/2016 12:55 PM
> Subject: Re: [openstack-dev] [keystone] new core reviewer (rderose)
>
> --
>
>
>
> On Sep 2, 2016 08:44, "Brad Topol" <*bto...@us.ibm.com*
> > wrote:
> >
> > Congratulations Ron!!! Very well deserved!!!
> >
> > --Brad
> >
> >
> > Brad Topol, Ph.D.
> > IBM Distinguished Engineer
> > OpenStack
> > (919) 543-0646
> > Internet: *bto...@us.ibm.com* 
> > Assistant: Kendra Witherspoon (919) 254-0680
> >
> >
> > From: Steve Martinelli <*s.martine...@gmail.com*
> >
> >
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> *openstack-dev@lists.openstack.org* >
> > Date: 09/01/2016 10:47 AM
> >
> > Subject: [openstack-dev] [keystone] new core reviewer (rderose)
> > 
> >
> >
> >
> > I want to welcome Ron De Rose (rderose) to the Keystone core team. In a
> short time Ron has shown a very positive impact. Ron has contributed
> feature work for shadowing LDAP and federated users, as well as enhancing
> password support for SQL users. Implementing these features and picking up
> various bugs along the way has helped Ron to understand the keystone code
> base. As a result he is able to contribute to the team with quality code
> reviews.
> >
> > Thanks for all your hard work Ron, we sincerely appreciate it.
> >
> > Steve
>
> Ahahaha! Another person to direct questions to now! ;)
>
> Congrats Ron!


Re: [openstack-dev] [oslo][nova][requirements] Getting oslo.context 2.6.0 into the gate

2016-07-30 Thread Jamie Lennox
On 27 July 2016 at 19:10, Tony Breeds  wrote:

> On Mon, Jul 18, 2016 at 04:09:54PM +0200, Markus Zoeller wrote:
> > Since yesterday, Nova uses "oslo.context" 2.6.0 [1] but the needed
> > change [2] is not yet in place, which broke "gate-nova-python27-db"[3].
> > Logstash counts 70 hits/h [4]. Most folks will be at the midcycle in
> > Portland and won't be available for the next 2h or so.
> > If you can have a look at it and merge it, that would be great.
> >
> > References:
> > [1]
> >
> https://github.com/openstack/requirements/commit/238389c4ee1bd3cc9be4931dd2639aea2dae70f1
> > [2] https://review.openstack.org/#/c/342604/1
> > [3] https://bugs.launchpad.net/nova/+bug/1603979
> > [4] logstash: http://goo.gl/79yFb9
>
> I feel like we need to make a plan to move forward, and that's going to
> require some coordination.
>
> The requirements team saw this coming in that nova's tests failed when
> 2.6.0
> was added to the upper-constraints.txt.  We had a plan[1] but then failed
> to
> execute.  The requirements team has a couple of TODOs from there but the
> biggest one is to add actual cross-project gate checks so that we have
> *very
> strong* signals that things will break.
>
> So the state we're in is
> oslo.context 2.6.0 is out and used in all projects that *do not* honor
> upper-constraints.txt
> oslo.context 2.5.0 is being used by all projects that *do* honor
> upper-constraints.txt
>
> The Path forward IMO is
>
> a) Unblock oslo.context 2.6.0
> - But leave upper-constraints.txt pointing to 2.5.0
> - https://review.openstack.org/#/c/347608/
> * We can test shims/fixes against this.
>

I think this gets easier with the release of 2.7 as we can hopefully just
bump minimum requirements to here and forget the whole 2.6 mess.


> b) Identify projects that break with > 2.5.0
> - Seems like this is (at least)
> - Trove
>

check job: https://review.openstack.org/#/c/349204/

- Nova
>

check job: https://review.openstack.org/#/c/348204/

- Designate
>

check job: https://review.openstack.org/#/c/349205/

- Others?
> c) Add shims to them to work with 2.5.0 and newer
> - Nova: https://review.openstack.org/#/c/342604/ and
> https://review.openstack.org/#/c/348057/
> d) Bump u-c to point at "the latest"
> e) Bump the minimum in g-r to 2.6.0
> f) Remove items from 'c'
>
> Notes:
>  - The requirements team will not be able to merge any change that bumps
>oslo.context in u-c until step 'd'.  The reality, due to our
>tooling/gating, is that all u-c changes will probably be paused.
>  - As stated in my preamble, we're working on testing to make this better.
>  - We almost certainly need a cross-project session during the design
>summit to discuss the API boundary for the context and how projects are
>expected/allowed to use it.
>

Thanks Tony. I think this will work well. I hope not many other projects
will need the same shims nova did as we've patched a few already.
I completely agree on the cross-project oslo.context session and offer to
take that one as there are a number of plans around improving it that have
not been properly communicated.


> Yours Tony.
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-requirements/%23openstack-requirements.2016-07-15.log.html#t2016-07-15T03:42:24
>


Re: [openstack-dev] [nova] gate "gate-nova-python27-db" is broken due to oslo.context 2.6.0

2016-07-19 Thread Jamie Lennox
On 20 July 2016 at 00:06, Joshua Harlow  wrote:

> Hayes, Graham wrote:
>
>> On 18/07/16 22:27, Ronald Bradford wrote:
>>
>>> Hi All,
>>>
>>> For Oslo libraries we ensure that APIs are backward compatible for 1+
>>> releases.
>>> When an Oslo API adds new class attributes (as in this example of
>>> is_admin_project and 4 other attributes added to Oslo Context in
>>> 2.6.0), these are added to ensure the API is also forward compatible
>>> with existing project code for any contract with the base class
>>> instantiation or manipulation.
>>>
>>
>> Which projects is this run against?
>>
>> The issue seen is presently Nova specific (as other projects can
>>> utilize 2.6.0) and it is related to projects that sub-class
>>> oslo.context, and how unit tests are written for using class
>>> parameters.  Ideally, to implement using oslo.context correctly
>>> OpenStack projects should:
>>>
>>
>> Designate also had to make a quick change to support 2.6.0.
>>
>> We were lucky as it was noticed by the RDO builds, which had pulled in
>> 2.6.0 before the requirements update was proposed, so it did not break
>> our gate.
>>
>> I just did a quick search and there is a few projects that hardcoded
>> this, like we did.
>>
>
> Ya, that's bad, nothing in the docs of the to_dict API says what to even
> compare against (or the keys produced), so I'm pretty sure anyone doing
> this is setting themselves up for future failure and fragile software.
>

Can you post that list?


>
>
>> * Not perform direct dictionary-to-dictionary comparisons with the
>>> to_dict() method, as this does not work when new attributes are
>>> added. Two patches (one to nova) address this in offending projects
>>> [5][6]
>>> * Unit tests should focus on attributes specific to the sub-classed
>>> signature, e.g. [7].  Oslo context provides an extensive set of unit
>>> tests for base class functionality. This is a wish list item for
>>> projects to implement.
>>>
>>> The to_dict() method exists as a convenience method only and is not an
>>> API contract. The resulting set of keys should be used accordingly.
>>> This is why there is no major release version.
>>>
>>
>> How are developers supposed to know that?
>>
>
> So we (in oslo) can (and ideally will) make this better, but when the API
> doesn't itself tell you what keys are produced or what the values of those
> keys are, then it should be pretty obvious to you (the library user) that
> you cannot reliably do dictionary comparisons (because how do you know what
> to compare against when the docs don't state that?). I suppose people are
> 'reverse engineering the dict' by just looking at the code, but that's also
> not so great...
>

I think the obvious, and only, thing you should expect from the to_dict
method is that it can be reversed by the from_dict method. Subclasses can
then make small modifications to those methods to add additional
information as appropriate. There is a bit of a problem here with the way
subclasses are handled, which is fixed in [1], but it does not affect any
existing code.

We realize that the to_dict method is subclassed by a lot of services and
affects RPC, and that contexts must therefore be serializable between
different versions of the library, so we will not modify existing to_dict
values. But, as mentioned, writing your tests to assume nothing will ever
be added to that dictionary sets us up for these problems.

In this case oslo.context was largely extracted from nova and so the
fragile tests make sense and should therefore be fixed - but the oslo
change does not constitute a breaking API change.


[1] https://review.openstack.org/#/c/341250/
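The round-trip contract described above can be sketched with simplified
stand-in classes (not the real oslo.context API): rely on from_dict()
reversing to_dict() and check specific attributes, never whole-dictionary
equality.

```python
class RequestContext(object):
    """Simplified stand-in for oslo.context's RequestContext."""

    def __init__(self, user=None, project=None, is_admin=False, **kwargs):
        self.user = user
        self.project = project
        self.is_admin = is_admin

    def to_dict(self):
        # New keys may appear here in any release.
        return {'user': self.user,
                'project': self.project,
                'is_admin': self.is_admin}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)


class ServiceContext(RequestContext):
    """A subclass adding one service-specific field, as services do."""

    def __init__(self, request_id=None, **kwargs):
        super(ServiceContext, self).__init__(**kwargs)
        self.request_id = request_id

    def to_dict(self):
        d = super(ServiceContext, self).to_dict()
        d['request_id'] = self.request_id
        return d


ctxt = ServiceContext(user='u1', project='p1', request_id='req-1')

# Fragile: asserting ctxt.to_dict() == {...literal dict...} breaks as soon
# as a key is added upstream.  Robust: round-trip and check attributes.
copy = ServiceContext.from_dict(ctxt.to_dict())
assert copy.user == 'u1' and copy.request_id == 'req-1'
```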


>
>
>> This kind of feels like semantics. This was an external API that changed
>> and as a result should have been a major version.
>>
>
> I think this is where it gets a little bit into as u said, semantics, but
> the semantics IMHO are important here because it affects the ability of
> oslo.context to move forward & change.
>
> I suppose we should/could just put a warning on this method like I did in
> taskflow (for something similar) @
> https://github.com/openstack/taskflow/blob/master/taskflow/engines/base.py#L71
> to denote that nothing in the dict that is returned can be guaranteed to
> always be the same.
>
>
>
>> There is a note from our discussion in Oslo to improve our
>>> documentation to describe the API use of to_dict() better and state we
>>> will not remove to_dict() keys within a release, but that may happen
>>> between releases.
>>>
>>> There is a subsequent problem with how Nova performs a warning test
>>> [8]. Additional reviews are looking at addressing this sub-class usage
>>> of from_dict() and to_dict().
>>>
>>> Regards
>>>
>>> Ronald
>>>
>>>
>>> [5] https://review.openstack.org/#/c/343694/,
>>> [6] https://review.openstack.org/#/c/342367/
>>> [7] https://review.openstack.org/#/c/342869/
>>> [8]
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/unit/test_context.py#n144
>>>
>>> On Mon, Jul 18, 2016 at 10:50 AM, 

Re: [openstack-dev] [Openstack] Naming polls - and some issues

2016-07-14 Thread Jamie Lennox
Partially because its name is Circular Quay, so it would be like calling
the S release "Street".

Having said that, there are not that many of them, and Sydney people know
what you mean when you say you are going to the Quay.


On 14 July 2016 at 21:35, Neil Jerram  wrote:

> Not sure what the problem would be with 'Quay' or 'Street' - they both
> sound like good options to me.
>
>
> On Thu, Jul 14, 2016 at 11:29 AM Eoghan Glynn  wrote:
>
>>
>>
>> > >> Hey all!
>> > >>
>> > >> The poll emails for the P and Q naming have started to go out - and
>> > >> we're experiencing some difficulties. Not sure at the moment what's
>> > >> going on ... but we'll keep working on the issues and get ballots to
>> > >> everyone as soon as we can.
>> > >
>> > > You'll need to re-send at least some emails, because the link I
>> received
>> > > is wrong - the site just reports
>> > >
>> > >   "Your voter key is invalid. You should have received a correct URL
>> by
>> > >   email."
>> >
>> > Yup. That would be a key symptom of the problems. One of the others is
>> > that I just uploaded 3000 of the emails to the Q poll and it shows 0
>> > active voters.
>> >
>> > I think maybe it needs a nap...
>>
>> Any chance we could remove "Quay" from the Q release naming poll before
>> the links are fixed and the real voting starts?
>>
>> Otherwise we risk looking a bit silly, since "Quay" for the Q release
>> would be somewhat akin to choosing "Street" for the S release ;)
>>
>> Cheers,
>> Eoghan
>>


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Jamie Lennox
On 4 July 2016 at 19:58, Julien Danjou  wrote:

> On Mon, Jul 04 2016, Denis Makogon wrote:
>
> > What would be the best place to submit spec?
>
> The cross project spec repo?
>
>   https://git.openstack.org/cgit/openstack/openstack-specs/
>
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */


The cross-project repo might be a good place to submit the spec; however,
keystoneauth is where you will likely want to implement this so that it is
available to all clients.




Re: [openstack-dev] [keystone][openid][mistral] Enabling OpenID Connect authentication w/o federation

2016-07-01 Thread Jamie Lennox
On 23 June 2016 at 21:30, Renat Akhmerov  wrote:

> Hi,
>
> I’m looking for some hints on how to enable authentication via the OpenID
> Connect protocol, particularly in Mistral. Actually, the specific protocol
> is not so important. I’m mostly interested in the conceptual vision here,
> and I’d like to ask the community if what we would like to do makes sense.
>
> *Problem statement*
>
> Whereas there are people using Mistral as an OpenStack service with proper
> Keystone authentication etc. some people want to be able to use it w/o
> OpenStack at all or in some scenarios where OpenStack is just one thing
> that Mistral workflows should interact with.
>
> In one of our cases we want to use Mistral w/o OpenStack but we want to
> make Mistral perform authentication via OIDC. I’ve done some research on
> what Keystone already has that could help us do that and I found a group of
> plugins for OIDC authentication flows under [1]. The problem I see with
> these plugins for my particular case is that I still have to properly
> install Keystone and configure it for Federation since the plugins use
> Federation. Feels like a redundant time consuming step for me. A normal
> flow for these plugins is to first get a so-called unscoped token via OIDC
> and then request a scoped token from Keystone via its Federation API. I
> think I understand why it works this way; it's well documented in the
> Keystone docs. Briefly, it's required to get user info, the list of
> available resources, etc., which the OIDC server does not provide, as it
> only works as an identity
> provider.
>
> What ideally I'd like to do is to avoid installing and configuring
> Keystone at all.
>

So, with the exception of token_endpoint (which is basically for
debugging), yes: all the plugins in keystoneauth are designed to work with
keystone. Keystone provides a whole bunch of things here, like user, role
and project management - basically the authorization that goes with OIDC's
authentication.

It also provides the auth_token middleware which reads those tokens and
provides a series of well known headers that you can use to know what
project you're in, do policy enforcement and basically all permission
management. For most projects this is what you care about. If you write
your own version of auth_token middleware for your identity provider you
can use whatever authentication you like.

You'll basically need a way of mapping the information you get from your
OIDC provider into the projects, roles and user info that make sense for
your service. And by the time it gets sufficiently complex that you have to
allow different deployers to configure this in different ways, for any
number of protocols, you'll have rebuilt keystone's federation
implementation.
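For illustration, a sketch of what a service sees behind auth_token
middleware (header names as documented for keystonemiddleware; the environ
below is simulated, not produced by a real token validation):

```python
def identity_from_environ(environ):
    """Read the identity headers auth_token injects for the service."""
    return {
        'user_id': environ.get('HTTP_X_USER_ID'),
        'project_id': environ.get('HTTP_X_PROJECT_ID'),
        'roles': [r for r in (environ.get('HTTP_X_ROLES') or '').split(',') if r],
        # auth_token marks whether the incoming token actually validated.
        'confirmed': environ.get('HTTP_X_IDENTITY_STATUS') == 'Confirmed',
    }


# Simulated WSGI environ, as auth_token would leave it after validating a
# token (values are placeholders).
environ = {
    'HTTP_X_IDENTITY_STATUS': 'Confirmed',
    'HTTP_X_USER_ID': 'u123',
    'HTTP_X_PROJECT_ID': 'p456',
    'HTTP_X_ROLES': 'member,reader',
}

ident = identity_from_environ(environ)
assert ident['confirmed'] and ident['roles'] == ['member', 'reader']
```

A replacement middleware for a different identity provider only needs to
validate its own token format and then populate these same headers.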


> *Possible solution*
>
> What I’m thinking about is: would it be OK to just create a set of new
> authentication plugins under keystoneauth project that would do the same as
> existing ones but w/o getting a Keystone scoped token? That way we could
> still take advantage of existing keystone auth plugins framework but w/o
> having to install and configure Keystone service. I realize that we’ll lose
> some capabilities that Keystone provides but for many cases it would be
> enough just to authenticate on a client and then validate token from HTTP
> headers via OIDC server on server side. Just one more necessary thing to do
> here is to fill tenant/project but that could be extracted from a token.
>

So you can use keystoneauth to implement plugins that do not hit keystone.
A plugin basically has to implement this[1] interface which has no direct
ties to keystone. There is then a standard subclass of this that handles
most of the work for interacting with keystone that the existing plugins
all use. It's fairly well documented but if you have additional questions
let us know.

I'm pretty sure from here you can use the new version of openstackclient
and anything else that uses keystoneauth.

These plugins would probably not live in the keystoneauth repository unless
there were a lot more people interested in using them - however,
keystoneauth, OSC, shade etc. all specify the plugin to use via a name
which is a setuptools entry point, so as long as the plugin is installed on
the system you can use it even though it isn't in the repo.


[1]
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/plugin.py
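As a rough sketch of what such a keystone-free plugin could look like (all
class and value names here are hypothetical; a stand-in base class mimics
the keystoneauth1.plugin.BaseAuthPlugin hooks so the sketch is
self-contained):

```python
class BaseAuthPlugin(object):
    """Stand-in for keystoneauth1.plugin.BaseAuthPlugin; the real class
    provides these same hooks to the session."""

    def get_token(self, session, **kwargs):
        return None

    def get_endpoint(self, session, **kwargs):
        return None


class OidcBearerToken(BaseAuthPlugin):
    """Hypothetical plugin: hand the session an externally obtained OIDC
    token and a fixed endpoint, with no keystone involved at all."""

    def __init__(self, endpoint, oidc_token):
        self.endpoint = endpoint
        self.oidc_token = oidc_token

    def get_token(self, session, **kwargs):
        # The session places this value in the X-Auth-Token header.
        return self.oidc_token

    def get_endpoint(self, session, **kwargs):
        # No service catalog lookup; the deployer supplies the URL.
        return self.endpoint


plugin = OidcBearerToken('https://mistral.example.com', 'eyJhbGci-example')
assert plugin.get_token(None) == 'eyJhbGci-example'
```

To make such a plugin loadable by name from other tools, its loader would
be registered (again, hypothetical names) under the keystoneauth1.plugin
entry point group in the shipping package's setup.cfg.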


>
> *Questions*
>
>
>1. Would this new plugin have a right to be part of the keystoneauth
>project despite the Keystone service not being involved at all? The
>alternative is just to teach Mistral to do authentication w/o using the
>keystone client at all. But IMO the advantage of having such a plugin
>(a group of plugins, actually) is that someone else could reuse it.
>
Not initially, but as I mentioned above, so long as it's installed on the
machine you want to use it from, that doesn't matter.

>
>1. Is there any existing code that we could reuse to solve this
>problem? Maybe what I’m describing is 

Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-06-28 Thread Jamie Lennox
> admin                                 : Overall cloud admin
> service                               : for service users only, not real humans
> {service_type}_admin                  : identity_admin, compute_admin, network_admin etc.
> {service_type}_{api_resource}_manager : identity_user_manager, compute_server_manager, network_subnet_manager
> observer                              : read-only access
> {service_type}_observer               : identity_observer, image_observer
>
>
> Jamie Lennox originally wrote the spec that got the ball rolling, and
> Dolph Matthews just took it to the next level.  It is worth a read.
>
> I think this is the way to go.  There might be details on how to get
> there, but the granularity is about right.
> If we go with that approach, we might want to rethink about how we enforce
> policy.  Specifically, I think we should split the policy enforcement up
> into two stages:
>
> 1.  Role check.  This only needs to know the service and the api
> resource.  As such, it could happen in middleware.
>
> 2. Scope check:  for user or project ownership.  This happens in the code
> where it is currently called.  Often, an object needs to be fetched from
> the database
>
> The scope check is an engineering decision:  Nova developers need to be
> able to say where to find the scope on the virtual machine, Cinder
> developers on the volume objects.
>
> Ideally, The python-*clients, Horizon and other tools would be able to
> determine what capabilities a given token would provide based on the roles
> included in the validation response. If the role check is based on the URL
> as opposed to the current keys in the policy file, the client can determine
> based on the request and the policy file whether the user would have any
> chance of succeeding in a call. As an example, to create a user in
> Keystone, the API is:
>
> POST https://hostname:port/v3/users
>
> Assuming the client has access to the appropriate policy file, if can
> determine that a token with only the role "identity_observer" would not
> have the ability to execute that command.  Horizon could then modify the
> users view to remove the "add user" form.
>
> For user management, we want to make role assignments as simple as
> possible and no simpler.  An admin should not have to assign all of the
> individual roles that a user needs.  Instead, assigning the role "Member"
> should imply all of the subordinate roles that a user needs to perform the
> standard workflows.  Expanding out the implied roles can be done either
> when issuing a token, or when evaluating the policy file, or both.
>
> I'd like to get the conversation on this started here on the mailing list,
> and lead in to a really productive set of talks at the Austin summit.
>
>
>


Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-28 Thread Jamie Lennox
On 29 June 2016 at 09:49, Steve Martinelli  wrote:

> I think we want something a bit more organized.
>
> Morgan tossed the idea of a keystone-docs repo, which could have:
>
> - The FAQ Adam is asking about
> - Install guides (moved over from openstack-manuals)
> - A spot for all those neat and unofficial blog posts we do
> - How-to guides
> - etc...
>
> I think it's a neat idea and warrants some discussion. Of course, we don't
> want to be the odd project out.
>

What would be the advantage of a new repo rather than just using the
keystone docs/ folder? My concern is that docs/ already gets stagnant, but a
new repo would end up being largely ignored, and at least theoretically you
can update docs/ when the relevant code changes.


>
> On Tue, Jun 28, 2016 at 6:00 PM, Ian Cordasco 
> wrote:
>
>> -Original Message-
>> From: Adam Young 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: June 28, 2016 at 16:47:26
>> To: OpenStack Development Mailing List > >
>> Subject:  [openstack-dev] Troubleshooting and ask.openstack.org
>>
>> > Recently, the Keystone team started brainstorming a troubleshooting
>> > document. While we could, eventually, put this into the Keystone repo,
>> > it makes sense to also be gathering troubleshooting ideas from the
>> > community at large. How do we do this?
>> >
>> > I think we've had a long enough run with the ask.openstack.org website
>> > to determine if it is really useful, and if it needs an update.
>> >
>> >
>> > I know we're getting nuked on the Wiki. What I would like to be able to
>> > generate is a Frequently Asked Questions (FAQ) page, but as a living
>> > document.
>> >
>> > I think that ask.openstack.org is the right forum for this, but we need
>> > some more help:
>> >
>> > It seems to me that keystone Core should be able to moderate Keystone
>> > questions on the site. That means that they should be able to remove
>> > old dead ones, remove things tagged as Keystone that do not apply and so
>> > on. I would assume the same is true for Nova, Glance, Trove, Mistral
>> > and all the rest.
>> >
>> > We need some better top level interface than just the tags, though.
>> > Ideally we would have a page where someone lands when troubleshooting
>> > keystone with a series of questions and links to the discussion pages
>> > for that question. Like:
>> >
>> >
>> > I get an error that says "cannot authenticate" what do I do?
>> >
>> > What is the engine behind ask.openstack.org? Does it have other tools
>> > we could use?
>>
>> The engine is linked in the footer: https://askbot.com/
>>
>> I'm not sure how much of it is reusable but it claims to be able to do
>> some of the things I think you're asking for except it doesn't
>> explicitly mention deleting comments/questions/etc.
>>
>> --
>> Ian Cordasco
>>


Re: [openstack-dev] Version header for OpenStack microversion support

2016-06-20 Thread Jamie Lennox
On 21 June 2016 at 11:41, Edward Leafe  wrote:

> On Jun 18, 2016, at 9:03 AM, Clint Byrum  wrote:
>
> > Whatever API version is used behind the compute API is none of the user's
> > business.
>
> Actually, yeah, it is.
>
> If I write an app or a tool that expects to send information in a certain
> format, and receive responses in a certain format, I don't want that to
> change when the cloud operator upgrades their system. I only want things to
> change when I specifically request that they change by specifying a new
> microversion.
>
>
I think the point there was that as a user you only get to specify the
version you use to communicate with the initial service. Talking to nova you
can ask for 2.4 (no idea on the actual numbers), but you have no control
over the version nova uses to talk to neutron or glance, because that
doesn't affect the output to you as a user.


>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-19 Thread Jamie Lennox
On 20 June 2016 at 12:33, Morgan Fainberg <morgan.fainb...@gmail.com> wrote:

>
>
> On Sun, Jun 19, 2016 at 6:51 PM, Adam Young <ayo...@redhat.com> wrote:
>
>> On 06/16/2016 02:19 AM, Jamie Lennox wrote:
>>
>> Thanks everyone for your input.
>>
>> I generally agree that there is something that doesn't quite feel right
>> about purely trusting this information to be passed from service to
>> service, this is why i was keen for outside input and I have been
>> rethinking the approach.
>>
>>
>> They really feel like a variation on Trust tokens.
>>
>> From the service perspective, they are tokens, just not the one the user
>> originally requested.
>>
>> The "reservation" as I see it is an implicit trust created by the user
>> requesting the operation on the initial service.
>>
>> When the service validates the token, it can get back the,  lets call it
>> a "reserved token" in keeping with the term reservation above.  That token
>> will have a longer life span than the one the user originally requested,
>> but (likely) fewer roles.
>>
>> When nova calls glance, and then glance calls Swift, we can again
>> transition to different reserved tokens if needs be.
>>
>>
>>
> I would really, really, really, prefer not to build in the need to
> "transition" between "reserved" tokens when jumping between services. This
> wont be "impossible", but I really don't want to start from the simpler
> proposal; It's a lot of moving parts.
>
> The big difference here is that trusts are explicit and have a LOT of
> overhead (and frankly would be clunky for this) as currently implemented,
> this is closer to an evolved version of the composite tokens we talked over
> in Paris.
>

It's kind of like each of those things - but I guess mentally I'm trying to
think of it differently, which is why I explicitly avoided calling it a
token even though there's the obvious similarity.

I'm trying to think of it as authorization for a single operation. Just by
the nature of openstack, that single operation might pass through a couple
of services to actually be completed.

I don't think we would ever transition or roll reservations. A reservation
lasts for the lifetime of one user request and represents permission for
the rest of openstack to execute this operation on behalf of the user. You
should not be able to use a reservation to fetch another token or
reservation; it is a verified bundle of state.

I also want to avoid thinking about it in terms of reducing roles. Ideally
we wouldn't even need to include roles in the reservation, because we know
up front that the policy check has succeeded (though I doubt this will be
practical). A reservation is only useful to perform the pre-authorized
operation it specifies. Roles are done with by this point; that check has
already happened.

As I'm still trying to find good analogies, I'll just try saying it in
different ways to get the point across:

An unscoped token is a user's authentication.
A scoped token is a user's authorization on some project.
A reservation is keystone's assertion that openstack services should
perform the enclosed operation for the enclosed user.

A user exchanges credentials for an unscoped token, then uses that to scope
a token to a project and then make calls. A service exchanges the scoped
token and current request information to create a reservation. Service to
service communication then is all about the reservation.
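The flow in the last two paragraphs can be reduced to a toy sketch. This is
entirely illustrative - reservations were never implemented, and none of
these function names exist in keystone - but it shows the three distinct
exchanges being described:

```python
# Illustrative sketch only: the proposed reservation concept modelled
# as plain dicts. No real keystone APIs are used here.

def authenticate(username, password):
    """Exchange credentials for an unscoped token (authentication)."""
    return {"kind": "unscoped", "user": username}

def scope_token(unscoped, project):
    """Exchange an unscoped token for a project-scoped token
    (authorization on some project)."""
    return {"kind": "scoped", "user": unscoped["user"], "project": project}

def create_reservation(scoped, operation):
    """A service exchanges the scoped token plus the current request for
    a reservation: keystone's assertion that services should perform this
    one operation on behalf of this user."""
    return {
        "kind": "reservation",
        "user": scoped["user"],
        "project": scoped["project"],
        "operation": operation,  # only this operation is pre-authorized
    }

unscoped = authenticate("alice", "secret")
scoped = scope_token(unscoped, "demo")
reservation = create_reservation(scoped, "create_server")
# Service-to-service communication then passes only the reservation;
# it cannot be exchanged for another token or reservation.
```

The key design point the sketch captures is that the chain only flows one
way: a reservation is the end of the line, not a credential for minting
further credentials.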

A reservation is operation state; it is not a token that can be used to
perform other actions.

Sorry for rambling.


>
>>
>>
>>
>> To this end i've proposed reservations (a name that doesn't feel right):
>> https://review.openstack.org/#/c/330329/
>>
>> At a gut feeling level i'm much happier with the concept. I think it will
>> allow us to handle the distinction between user->service and
>> service->service communication much better and has the added bonus of
>> potentially opening up some policy options in future.
>>
>> Please let me know of any concerns/thoughts on the new approach.
>>
>> Once again i've only written the proposal part of the spec as there will
>> be a lot of details to figure out if we go forward. It is also fairly rough
>> but it should convey the point.
>>
>>
>> Thanks
>>
>> Jamie
>>
>> On 3 June 2016 at 03:06, Shawn McKinney <smckin...@symas.com> wrote:
>>
>>>
>>> > On Jun 2, 2016, at 10:58 AM, Adam Young < <ayo...@redhat.com>
>>> ayo...@redhat.com> wrote:
>>> >
>>> > Any sensible RBAC setup would support this, but we are not using a
>>> sensible one, we are 

Re: [openstack-dev] Version header for OpenStack microversion support

2016-06-18 Thread Jamie Lennox
Quick question: why do we need the service type or name in there? You
really should know what API you're talking to already, and it just makes it
more difficult to handle all the different APIs in a common way.
On Jun 18, 2016 8:25 PM, "Steve Martinelli"  wrote:

> Looks like Manila is using the service name instead of type
> (X-OpenStack-Manila-API-Version) according to this link anyway:
> http://docs.openstack.org/developer/manila/devref/api_microversion_dev.html
>
> Keystone can follow the cross project spec and use the service type
> (Identity instead of Keystone).
> On Jun 17, 2016 12:44 PM, "Ed Leafe"  wrote:
>
>> On Jun 17, 2016, at 11:29 AM, Henry Nash  wrote:
>>
>> > We are currently in the process of implementing microversion support in
>> keystone - and are obviously trying to follow the cross-project spec for
>> this (
>> http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html
>> ).
>> >
>> > One thing I noticed was that the header specified in this spec is of
>> the form:
>> >
>> > OpenStack-API-Version: [SERVICE_TYPE] [X,Y]
>> >
>> > for example:
>> >
>> > OpenStack-API-Version: identity 3.7
>> >
>> > However, from what i can see of the current implementations I have seen
>> of microversioning in OpenStack (Nova, Manila), they use service-specific
>> headers, e.g.
>> >
>> > X-OpenStack-Nova-API-Version: 2.12
>> >
>> > My question is whether there a plan to converge on the generalized
>> header format….or are we keep with the service-specific headers? I’d
>> obviously like to implement the correct one for keystone.
>>
>> Yes, the plan is to converge on the more generic headers. The Nova
>> headers (don’t know about Manila) pre-date the API WG spec, and were the
>> motivation for development of that spec. We’ve even made it possible to
>> accept both header formats [0] until things can be migrated to the
>> recommended format.
>>
>> [0] https://review.openstack.org/#/c/300077/
>>
>> -- Ed Leafe
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-16 Thread Jamie Lennox
Thanks everyone for your input.

I generally agree that there is something that doesn't quite feel right
about purely trusting this information to be passed from service to
service; this is why I was keen for outside input, and I have been
rethinking the approach.

To this end I've proposed reservations (a name that doesn't feel right):
https://review.openstack.org/#/c/330329/

At a gut-feeling level I'm much happier with the concept. I think it will
allow us to handle the distinction between user->service and
service->service communication much better, and it has the added bonus of
potentially opening up some policy options in future.

Please let me know of any concerns/thoughts on the new approach.

Once again I've only written the proposal part of the spec, as there will be
a lot of details to figure out if we go forward. It is also fairly rough,
but it should convey the point.


Thanks

Jamie

On 3 June 2016 at 03:06, Shawn McKinney  wrote:

>
> > On Jun 2, 2016, at 10:58 AM, Adam Young  wrote:
> >
> > Any sensible RBAC setup would support this, but we are not using a
> sensible one, we are using a hand rolled one. Replacing everything with
> Fortress implies a complete rewrite of what we do now.  Nuke it from orbit
> type stuff.
> >
> > What I would rather focus on is the splitting of the current policy into
> two parts:
> >
> > 1. Scope check done in code
> > 2. Role check done in middleware
> >
> > Role check should be done based on URL, not on the policy key like
> identity:create_user
> >
> >
> > Then, yes, a Fortress style query could be done, or it could be done by
> asking the service itself.
>
> Mostly in agreement.  I prefer to focus on the model (RBAC) rather than a
> specific impl like Fortress. That is to say support the model and allow the
> impl to remain pluggable.  That way you enable many vendors to participate
> in your ecosystem and more important, one isn’t tied to a specific backend
> (ldapv3, sql, …)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][security] Service User Permissions

2016-06-01 Thread Jamie Lennox
Hi All,

I'd like to bring to the attention of the wider security groups and
OpenStack users the Service Users Permissions [1] spec currently proposed
against keystonemiddleware.

To summarize quickly: OpenStack has long had the problem of token expiry
happening in the middle of a long-running operation, failing
service-to-service requests. A number of ways around this have been
proposed, including trusts and using the service users to perform
operations.

Ideally, in a big system like this, we only want to validate a token and
policy once, on a user's first entry to the system. However, all services
only communicate via the public interfaces, so we cannot tell at validation
time whether this is the first, second, or twentieth time we are validating
a token. (If we ever do OpenStack 2.0 we should change this.)

The proposed spec provides a way to simulate the at-edge validation for
service-to-service communication. If a request has an X-Service-Token
header (an existing concept) then, instead of validating the user's token,
we should trust all the headers sent with that request (X_USER_ID,
X_PROJECT_ID, etc.). We would still validate the X-Service-Token header
itself. The effect is that one service asserts to another that it has
already validated the user's token, so the receiving service shouldn't
validate it again, which bypasses the expiry problem.
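The decision the spec proposes middleware would make can be sketched as
follows. This is a stand-in only: the token set, the `validate_token`
function, and the header handling are all invented for illustration, and
the real auth_token middleware is considerably more involved:

```python
# Sketch of the proposed middleware behaviour. `validate_token` is a
# stand-in for a real keystone validation call.

VALID_SERVICE_TOKENS = {"svc-token-123"}  # assumption for this sketch

def validate_token(token):
    """Stand-in for validating a token against keystone."""
    return token in VALID_SERVICE_TOKENS

def resolve_auth(headers):
    """If a valid X-Service-Token is present, trust the forwarded user
    headers instead of re-validating the user's token (and re-triggering
    its expiry); otherwise fall back to normal validation."""
    service_token = headers.get("X-Service-Token")
    if service_token is not None and validate_token(service_token):
        # Trust the asserting service: accept X_USER_ID, X_PROJECT_ID, etc.
        return {"user_id": headers["X_USER_ID"],
                "project_id": headers["X_PROJECT_ID"],
                "source": "service-asserted"}
    # In real middleware the user token would be validated here instead.
    raise PermissionError("user token must be validated normally")

auth = resolve_auth({"X-Service-Token": "svc-token-123",
                     "X_USER_ID": "abc123",
                     "X_PROJECT_ID": "proj1"})
```

The branch that trusts the forwarded headers is exactly where the security
concern below lives: nothing in it ties the asserted user to a real token.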

The glaring security issue here is that a user with the service role can
now emulate any request on behalf of any user by sending the expected
authenticated headers. This places an extreme level of trust in accounts
that until now have generally only been able to validate a token. There is
the concern that a malicious service could craft new requests with bogus
credentials, as well as the risk of services deciding that this gives them
the ability to create non-expiring trusts from a user, simply by replaying
the headers received on previous requests to perform future operations on
that user's behalf. This is _absolutely not_ the intended use case, but
something I expect to come up.

There is a variation of this mentioned in the spec where we pass only the
user-id, project-id, and audit information from service to service, and
middleware then recreates the token from this information, similar to how
fernet tokens work today. There is additional processing here which, in the
standard case, will simply reproduce the same headers the last service
already knew, and it still allows a large amount of emulation by the
service.

There are possibly ways we can secure this header bundle via signing;
however, the practical result is essentially a secondary expiry time and an
operational complexity that will make PKI tokens and rotating fernet keys
appear trivial, for the benefit of securing a service that we already trust
with our tokens.
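For a sense of what signing the header bundle could look like in practice -
and why it amounts to a secondary expiry - here is a rough stdlib-only
sketch. All of the names, the key handling, and the bundle shape are
invented for illustration; a real design would also need key distribution
and rotation, much like fernet:

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # would need distribution/rotation, like fernet keys

def sign_bundle(headers, lifetime=300, now=None):
    """Bundle the forwarded auth headers with an expiry and an HMAC."""
    now = time.time() if now is None else now
    payload = dict(headers, expires_at=now + lifetime)
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_bundle(bundle, now=None):
    """Reject a tampered signature or an expired bundle - effectively a
    second token expiry, which is the operational cost noted above."""
    now = time.time() if now is None else now
    blob = json.dumps(bundle["payload"], sort_keys=True).encode()
    sig = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, bundle["signature"]):
        return False
    return bundle["payload"]["expires_at"] > now

bundle = sign_bundle({"X_USER_ID": "abc"}, lifetime=300, now=1000.0)
ok = verify_bundle(bundle, now=1100.0)       # within the lifetime
expired = verify_bundle(bundle, now=2000.0)  # past the secondary expiry
```

Note that the `expires_at` field reintroduces exactly the expiry window the
whole proposal was trying to escape, just at a second layer.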

As this has such far-reaching implications throughout OpenStack, I would
like outside input on whether the risks are worth the reward in this case,
and on what we would need to do to secure a deployment like this.

Please comment here and on the spec.



Thanks,

Jamie



[1] https://review.openstack.org/#/c/317266/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: keystone federation user story

2016-05-25 Thread Jamie Lennox
On 25 May 2016 at 03:55, Alexander Makarov  wrote:

> Colleagues,
>
> here is an actual use case for shadow users assignments, let's discuss
> possible solutions: all suggestions are appreciated.
>
> -- Forwarded message --
> From: Andrey Grebennikov 
> Date: Tue, May 24, 2016 at 9:43 AM
> Subject: keystone federation user story
> To: Alexander Makarov 
>
>
> Main production usecase:
> As a system administrator I need to create assignments for federated users
> into the projects when the user has not authenticated for the first time.
>
> Two different approaches.
> 1. A user has to be assigned directly into the project with the role
> Role1. Since shadow users were implemented, Keystone database has the
> record of the user when the federated user authenticates for the first
> time. When it happens, the user gets unscoped token and Keystone registers
> the user in the database with generated ID (the result of hashing the name
> and the domain). At this point the user cannot get scoped token yet since
> the user has not been assigned to any project.
> Nonetheless there was a bug
> https://bugs.launchpad.net/keystone/+bug/1313956 which was abandoned, and
> the reporter says that currently it is possible to assign role in the
> project to non-existing user (API only, no CLI). It doesn't help much
> though since it is barely possible to predict the ID of the user if it
> doesn't exist yet.
>
> Potential solution - allow per-user project auto-creation. This will allow
> the user to get scoped token with a pre-defined role (should be either
> mentioned in config or in mapping) and execute operations right away.
>
> Disadvantages: less control and order (will potentially end up with
> infinite empty projects).
> Benefits: user is authorized right away.
>

This is something that has come up a few times as a workflow problem: for
some group of users, you should end up with your own project that doesn't
exist until the first time you log in. This is something I think we could
extend the mapper to handle. It wouldn't mean the user is authorized
immediately; it would just solve the workflow of personal projects.
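A mapper extension like that could be as simple as an idempotent
get-or-create on first login. This is purely a sketch of the idea, not
keystone code - the `projects` dict stands in for keystone's backend, and
the default role is an assumed configuration value:

```python
# Toy sketch: auto-provision a personal project the first time a
# federated user logs in. `projects` stands in for keystone's backend.

projects = {}

def get_or_create_personal_project(user_id, default_role="member"):
    """Idempotent: the first call creates the project and grants the
    pre-defined role; later calls return the same project."""
    name = "personal-%s" % user_id
    if name not in projects:
        projects[name] = {"name": name, "roles": {user_id: default_role}}
    return projects[name]

first = get_or_create_personal_project("alice")
second = get_or_create_personal_project("alice")  # no duplicate project
```

The idempotency is what addresses the "infinite empty projects" concern
only partially: projects are still created per user, just never duplicated.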


> Another potential solution - clearly describe a possibility to assign
> shadow user to a project (client should generate the ID correctly), even
> though the user has not been authenticated for the first time yet.
>
> Disadvantages: high risk of administrator's mistake when typing user's ID.
> Benefits: user doesn't have to execute first dummy authentication in order
> to be registered.
>

I would prefer not to do this. It either involves creating a user and then
somehow associating the federated information they will present later, or
allowing you to create a user with a predetermined or predictable ID. I
don't think we should add either of those APIs.


>
> 2. Operate with the groups. It means that the user is a member of the
> remote group and we propose the groups to be assigned to the projects
> instead of the users.
> There is no concept of shadow groups yet, so it still has to be
> implemented.
>
> Same problem - in order to be able to assign the group to the project
> currently it has to exist in Keystone database.
>

I'm not sure what you want for shadow groups here. Groups are always a
keystone concept; they have never been ephemeral in the way that federated
users used to be. Are you wanting to make it so that keystone groups are
auto-created?

Mapping federated users into groups has always been the way federation was
designed in keystone because, even though you can't know the actual users
that are going to log in, it is very likely they fall into something that
can fairly easily be categorized by looking at the roles that come in from
the IdP assertion. So your mapping does something like "if the user has the
admin role, put them in the federated-admin group"; the federated-admin
group has already been established and already has roles on a number of
projects. Users are then automatically granted those roles on those
projects. You could go so far as to check for user names in the mapping
here, but that's not a sustainable solution.
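That group-based mapping, reduced to a toy rule evaluator: the rule shape
below loosely imitates keystone's federation mapping idea but is not the
real mapping engine, and the rule and group names are made up:

```python
# Toy evaluation of a keystone-style federation mapping:
# "if the user has the admin role, put them in the federated-admin group".

RULES = [
    {"match_role": "admin", "group": "federated-admin"},
    {"match_role": "member", "group": "federated-users"},
]

def map_assertion_to_groups(assertion_roles):
    """Return the local keystone groups for the roles in an IdP assertion.
    The groups' role assignments on projects are configured ahead of time,
    which is how federated users inherit project access."""
    return [rule["group"] for rule in RULES
            if rule["match_role"] in assertion_roles]

groups = map_assertion_to_groups(["admin", "member"])
```

The point of the indirection is that the mapping only ever deals in
categories from the assertion, never in individual user names.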


> It should be either allowed to pre-create the project for a group (based
> on some specific flags in mappings),
>

Maybe - if you created the groups, why don't you already know the projects
they are going to be in?


> or it should be allowed to assign non-existing groups into the projects.
>

I'm still not sure about this non-existing-groups concept.


>
> I'd personally prefer to allow some special attribute to be specified in
> either the config or mapping which will allow project auto-creation.
> For example, user is added to the group "openstack" in the backend. In
> this case this group is the part of SAML assertions (in case when SAML2 is
> used as the protocol), and Keystone should recognize this group through the
> mapping. When user makes login attempt, Keystone should pre-create the
> project 

Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-15 Thread Jamie Lennox
On 13 May 2016 at 04:15, Sean Dague  wrote:

> On 05/12/2016 01:47 PM, Morgan Fainberg wrote:
> > This  also comes back to the conversation at the summit. We need to
> > propose the timeline to turn over for V3 (regardless of
> > voting/non-voting today) so that it is possible to set the timeline that
> > is expected for everything to get fixed (and where we are
> > expecting/planning to stop reverting while focusing on fixing the
> > v3-only changes).
> >
> > I am going to ask the Keystone team to set forth the timeline and commit
> > to getting the pieces in order so that we can make v3-only voting rather
> > than playing the propose/revert game we're currently doing. A proposed
> > timeline and gameplan will only help at this point.
>
> A timeline would be good (proposed below), but there are also other bits
> of the approach we should consider.
>

That was my job to get sent to the TC. I'll get on it.


>
> I would expect, for instance,
> gate-tempest-dsvm-neutron-identity-v3-only-full to be on keystone, and
> it does not appear to be. Is there a reason why?
>

To test that keystone works with keystone v3? Otherwise, what you're doing
is making keystone's gate break every time neutron does something that's
not v3 compatible, which brings it to our attention but otherwise just gets
in the way. The hope was to push the check-job failure back to the service,
so that it's not purely keystone's job to run around and fix all the other
services when an incompatible change is discovered.


>
> With that on keystone, devstack-gate, devstack, tempest the integrated
> space should be pretty well covered. There really is no need to also go
> stick this on glance, nova, cinder, neutron, swift I don't think,
> because they only really use keystone through pretty defined interfaces.
>

Initially I would have agreed, and there has been a voting job on devstack
with keystone v3 only that proves all of these services can work together,
for at least a cycle. Where we got stung was all the plugins and
configuration options used in these services that don't get tested by that
integrated gate job. The hope was that by pushing these jobs out to the
services we would get more coverage of the service-specific configurations
- but I can see that might not be working.


> Then some strategic use of nv jobs on things we know would have some
> additional interactions here (because we know they are currently broken
> or they do interesting things) like ironic, heat, trove, would probably
> be pretty useful.
>
> That starts building up the list of known breaks the keystone folks are
> tracking, which should get a drum beat every week in email about
> outstanding issues, and issues fixed.
>
> The goal of gate-tempest-dsvm-neutron-identity-v3-only-full should not
> be for that to be voting, ever. It should be to use that as a good
> indicator that changing the default in devstack (and thus in the
> majority of upstream jobs) to not ever enable v2.
>
> Because of how v3 support exists in projects (largely hidden behind
> keystoneauth), it is really unlikely to rando regress once fixed. There
> just aren't that many knobs a project has that would make that happen.
> So I think we can make forward progress without a voting backstop until
> we get to a point where we can just throw the big red switch (with
> warning) on a Monday (probably early in the Ocata cycle) and say there
> you go. It's now the project job to handle it. And they'll all get fair
> warning for the month prior to the big red switch.
>

I agree. Very early in the Ocata cycle is also the timeframe we had
discussed at summit, so it looks like there is a good consensus there, and
I'll get that proposal to the TC this week.

For now we maintain the v3-only jobs as non-voting and we continue to push
the changes particularly to projects that are not tested in the default
devstack integrated gate test.

P.S. I assume I'm right that it's just impossible/infeasible to have
project-config changes determine all the jobs that are affected by a change
and run those as the project-config gate. It seems like one of the last few
places where we can commit something that breaks everyone and never notice.


-Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Nova] Any Code Examples of Other Services Using Keystone Policy?

2016-05-09 Thread Jamie Lennox
I don't see that we need a new role for this, because it would need to be
added to all the admin users. I was thinking just admin or
target.project_id==token.project_id. Hopefully in future this will get
better, because we can have nova send the service token as well and enforce
that the request came from another service.
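Spelled out as a policy.json fragment, the check I'm suggesting would look
roughly like the following (untested, and using keystone's existing rule
name for the project check):

```json
{
    "identity:get_project": "rule:admin_required or project_id:%(target.project.id)s"
}
```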

On 6 May 2016 at 07:54, Dolph Mathews  wrote:

> My understanding from the summit session was that we should have a
> specific role defined in keystone's policy.json here:
>
>
> https://github.com/openstack/keystone/blob/a16287af5b7761c8453b2a8e278d78652497377c/etc/policy.json#L37
>
> Which grants access to nothing in keystone beyond that check. So, the new
> rule could be revised to something as generic as:
>
>   "identity:get_project": "rule:admin_required or project_id:%(
> target.project.id)s or role:identity_get_project",
>
> Where the new role name I appended at the end exactly matches the policy
> rule name.
>
> However, unlike the summit discussion, which specified only providing
> access to HEAD /v3/projects/{project_id}, keystone's usage of policy
> unfortunately wraps both HEAD and GET with the same policy check.
>
> On Thu, May 5, 2016 at 3:05 PM Augustina Ragwitz 
> wrote:
>
>> I'm currently working on the spec for Project ID Validation in Nova
>> using Keystone. The outcome of the Design Summit Session was that the
>> Nova service user would use the Keystone policy to establish whether the
>> requester had access to the project at all to verify the id. I was
>> wondering if there were any code examples of a non-Keystone service
>> using the Keystone policy in this way?
>>
>> Also if I misunderstood something, please feel free to correct me or to
>> clarify!
>>
>> Here is the etherpad from the session:
>> https://etherpad.openstack.org/p/newton-nova-keystone
>> And here is the current spec: https://review.openstack.org/#/c/294337
>>
>>
>> --
>> Augustina Ragwitz
>> Sr Systems Software Engineer, HPE Cloud
>> Hewlett Packard Enterprise
>> ---
>> irc: auggy
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> -Dolph
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone commands

2016-04-20 Thread Jamie Lennox
On 20 April 2016 at 14:02, Dolph Mathews  wrote:

> On Tue, Apr 19, 2016 at 10:40 PM, Kenny Ji-work 
> wrote:
>
>> Hi all,
>>
>> I have installed openstack mitaka, when I execute any keystone's commands
>> with the result displayed below:
>> But I execute `openstack role list`, the result is succeed.
>>
>> *[root@devstack scripts]# keystone --debug role-list*
>>
>
You're mixing up two concepts here. `openstack role list` uses
python-openstackclient and is the correct call. `keystone --debug
role-list` uses the keystoneclient CLI, which is deprecated and was
recently removed from upstream. It will not support v3 - check where this
v3 configuration is being set.

Keep using the openstack CLI; you will find it more compatible with
deployments.



/usr/lib/python2.7/site-packages/keystoneclient/shell.py:64:
>> DeprecationWarning: The keystone CLI is deprecated in favor of
>> python-openstackclient. For a Python library, continue using
>> python-keystoneclient.
>>   'python-keystoneclient.', DeprecationWarning)
>> WARNING: unsupported identity-api-version 3, falling back to 2.0
>> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145:
>> DeprecationWarning: Constructing an instance of the
>> keystoneclient.v2_0.client.Client class without a session is deprecated as
>> of the 1.7.0 release and may be removed in the 2.0.0 release.
>>   'the 2.0.0 release.', DeprecationWarning)
>> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147:
>> DeprecationWarning: Using the 'tenant_name' argument is deprecated in
>> version '1.7.0' and will be removed in version '2.0.0', please use the
>> 'project_name' argument instead
>>   super(Client, self).__init__(**kwargs)
>> /usr/lib/python2.7/site-packages/debtcollector/renames.py:45:
>> DeprecationWarning: Using the 'tenant_id' argument is deprecated in version
>> '1.7.0' and will be removed in version '2.0.0', please use the 'project_id'
>> argument instead
>>   return f(*args, **kwargs)
>> /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371:
>> DeprecationWarning: Constructing an HTTPClient instance without using a
>> session is deprecated as of the 1.7.0 release and may be removed in the
>> 2.0.0 release.
>>   'the 2.0.0 release.', DeprecationWarning)
>> /usr/lib/python2.7/site-packages/keystoneclient/session.py:140:
>> DeprecationWarning: keystoneclient.session.Session is deprecated as of the
>> 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed
>> in future releases.
>>   DeprecationWarning)
>> /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56:
>> DeprecationWarning: keystoneclient auth plugins are deprecated as of the
>> 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in
>> future releases.
>>   'in future releases.', DeprecationWarning)
>> DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
>> http://10.240.227.233:35357/v3/tokens
>>
>
> This line shows a v2 client making a request to a v3 endpoint. Do you have
> a versioned (v3) endpoint configured or hardcoded somewhere?
>
>
>> INFO:requests.packages.urllib3.connectionpool:Starting new HTTP
>> connection (1): 10.240.227.233
>> DEBUG:requests.packages.urllib3.connectionpool:"POST /v3/tokens HTTP/1.1"
>> 404 93
>> DEBUG:keystoneclient.session:Request returned failure status: 404
>> Authorization Failed: The resource could not be found. (HTTP 404)
>> (Request-ID: req-c71a1014-1c8d-47c0-bed5-cba6ad91881f)
>>
>> Thank you for answering!
>>
>> Sincerely!
>> Kenny Ji
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Jamie Lennox
On 20 April 2016 at 04:17, Monty Taylor  wrote:

> On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:
>
>> On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
>>
>>> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young  wrote:
>>>
>>>> I wonder how much of that is Token caching.  In a typical CLI use
>>>> pattern, a new token is created each time a client is called, with no
>>>> passing of a token between services.  Using a session can greatly
>>>> decrease the number of round trips to Keystone.


>>> Not as much as you think (or hope?).  Persistent token caching to disk
>>> will help some, at other expenses though.  Using --timing on OSC will show
>>> how much time the Identity auth round trip cost.
>>>
>>> I don't have current numbers; the last time I instrumented OSC there were
>>> significant load times for some modules, so we went a good distance to
>>> lazy-load as much as possible.
>>>
>>> What Dan sees WRT a persistent client process, though, is a combination of
>>> those two things: saving the Python loading and the Keystone round trips.
>>>
>>
>> The 1.5 sec overhead I eliminated doesn't actually have anything to do
>> with network round trips at all. Even if you turn off all network
>> services and just run 'openstack ' and let it fail due
>> to inability to connect it'll still have that 1.5 sec overhead. It
>> is all related to python runtime loading and work done during module
>> importing.
>>
>> eg run 'unstack.sh' and then compare the main openstack client:
>>
>> $ time /usr/bin/openstack server list
>> Discovering versions from the identity service failed when creating the
>> password plugin. Attempting to determine version from URL.
>> Unable to establish connection to http://192.168.122.156:5000/v2.0/tokens
>>
>> real    0m1.555s
>> user    0m1.407s
>> sys     0m0.147s
>>
>> Against my client-as-a-service version:
>>
>> $ time $HOME/bin/openstack server list
>> [Errno 111] Connection refused
>>
>> real    0m0.045s
>> user    0m0.029s
>> sys     0m0.016s
>>
>>
>> I'm sure there is scope for also optimizing network traffic / round
>> trips, but I didn't investigate that at all.
>>
>>> I have (had!) a version of DevStack that put OSC into a subprocess and
>>> called it via pipes to do essentially what Dan suggests.  It saves some
>>> time, at the expense of complexity that may or may not be worth the
>>> effort.
>>>
>>
>> devstack doesn't actually really need any significant changes beyond
>> making sure $PATH pointed to the replacement client programs and that
>> the server was running - the latter could be automated as a launch on
>> demand thing which would limit devstack changes.
>>
>> It actually doesn't technically need any devstack change - these
>> replacement clients could simply be put in some 3rd party git repo
>> and let developers who want the speed benefit simply put them in
>> their $PATH before running devstack.
>>
>>> One thing missing is any sort of transactional control in the I/O with the
>>> subprocess, i.e., an EOT marker.  I planned to add a -0 option (think
>>> xargs) to handle that but it's still down a few slots on my priority list.
>>> Error handling is another problem, and at this point (for DevStack purposes
>>> anyway) I stopped the investigation, concluding that reliability trumped a
>>> few seconds saved here.
>>>
>>
>> For I/O I simply replaced stdout + stderr with a new StringIO handle to
>> capture the data when running each command, and for error handling I
>> ensured the exit status was fed back & likewise stderr printed.
>>
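The capture-and-return-status loop Dan describes just above can be sketched with nothing but the stdlib. This is an illustrative stand-in, not the actual replacement-client code; the `run_captured` helper and its dispatch are hypothetical:

```python
# Sketch of running one command with stdout/stderr swapped for
# StringIO handles, feeding the captured text and exit status
# back to the caller (in the real setup, back over the pipe).
import contextlib
import io


def run_captured(func, *args):
    out, err = io.StringIO(), io.StringIO()
    status = 0
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        try:
            func(*args)
        except SystemExit as e:  # CLIs typically exit via sys.exit()
            status = e.code or 0
    return status, out.getvalue(), err.getvalue()


status, out, err = run_captured(print, 'server list ok')
```

The persistent process would loop over commands read from the pipe, calling something like this for each one.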
>> It is more than just a few seconds saved - almost 4 minutes, or
>> nearly 20% of entire time to run stack.sh on my machine
>>
>>
>>> Ultimately, this is one of the two giant nails in the coffin of continuing
>>> to pursue CLIs in Python.  The other is co-installability. (See that
>>> current thread on the ML for pain points).  Both are easily solved with
>>> native-code-generating languages.  Go and Rust are at the top of my
>>> personal list here...
>>>
>>
> Using entrypoints and plugins in python is slow, so loading them is slow,
> as is loading all of the dependent libraries. Those were choices made for
> good reason back in the day, but I'm not convinced either are great anymore.
>
> A pluginless CLI that simply used REST calls rather than the
> python-clientlibs should be able to launch and get to the business of doing
> work in 0.2 seconds - counting time to load and parse clouds.yaml. That
> time could be reduced - the time spent in occ parsing vendor json files is
> not strictly necessary and certainly could go faster. It's not as fast as
> 0.004 seconds, but with very little effort it's 6x faster.
>
> Rather than ditching python for something like go, I'd rather put together
> a CLI with no plugins and that only depended on keystoneauth and
> os-client-config as libraries. No?
>
>
I can feel Dean banging his head from here :)

If you extend this because 

Re: [openstack-dev] [oslo] oslo.context and name fields

2016-04-05 Thread Jamie Lennox
from_environ was mine; it's reasonably new, and at the time I was blocked
on getting a release before pushing it out to services. Since then I've
been distracted with other things. The intent at the time was exactly this:
to standardize the values on the context object, though in my case I was
particularly interested in how we could handle authentication plugins.

The problems I hit were specifically around how we could separate values
that were relevant to things like policy from values that were relevant for
RPC etc., rather than the big to_dict that is used for everything at the
moment.

There were a number of problems with this; however, nothing that would
prevent more standardization of the base attributes and using from_environ
now.
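The gist of what from_environ standardizes can be sketched with a stdlib-only stand-in. The real implementation is oslo_context's RequestContext.from_environ classmethod; the `context_from_environ` helper below is hypothetical, and the header names follow the usual auth_token middleware conventions:

```python
# Illustrative sketch only: map the WSGI environ keys set by
# auth_token middleware to the attribute names a context object
# would carry, which is roughly what from_environ centralizes
# instead of each project hand-parsing the environ.

def context_from_environ(environ):
    mapping = {
        'auth_token': 'HTTP_X_AUTH_TOKEN',
        'user_id': 'HTTP_X_USER_ID',
        'project_id': 'HTTP_X_PROJECT_ID',
        'user_name': 'HTTP_X_USER_NAME',
        'project_name': 'HTTP_X_PROJECT_NAME',
        'request_id': 'openstack.request_id',
    }
    return {attr: environ.get(key) for attr, key in mapping.items()}


environ = {
    'HTTP_X_AUTH_TOKEN': 'gAAAA-example',
    'HTTP_X_USER_ID': 'u123',
    'HTTP_X_PROJECT_ID': 'p456',
    'HTTP_X_USER_NAME': 'demo',
    'HTTP_X_PROJECT_NAME': 'demo-project',
}
ctx = context_from_environ(environ)
```

Standardizing this mapping in one place is also what makes user_name/project_name cheap to expose to logging, as discussed below.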


On 6 April 2016 at 07:39, Ronald Bradford  wrote:

> I have created a version that uses constructor arguments. [5]
> I will review in more detail across projects the use of keystone
> middleware to see if we can utilize a constructor environment attribute to
> simplify constructor usage.
>
> [5] https://review.openstack.org/301918
>
> Ronald Bradford
>
> Web Site: http://ronaldbradford.com
> LinkedIn:  http://www.linkedin.com/in/ronaldbradford
> Twitter:@RonaldBradford 
> Skype: RonaldBradford
> GTalk: Ronald.Bradford
> IRC: rbradfor
>
>
> On Tue, Apr 5, 2016 at 3:49 PM, Sean Dague  wrote:
>
>> Cool. Great.
>>
>> In looking at this code a bit more I think we're missing out on some
>> commonality by the fact that this nice bit of common parsing -
>>
>> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L138-L161
>> is actually hidden behind a factory pattern, and not used by anyone in
>> OpenStack -
>> http://codesearch.openstack.org/?q=from_environ&i=nope&files=&repos=
>>
>> If instead of standardizing the args to the context constructor, we
>> could remove a bunch of them and extract that data from a passed
>> environment during the constructor that should remove a bunch of parsing
>> code in every project. It would also mean that we could easily add
>> things like project_name and user_name in, and they would be available
>> to all consumers.
>>
>> -Sean
>>
>> On 04/05/2016 03:39 PM, Ronald Bradford wrote:
>> > Sean,
>> >
>> > I cannot speak to why historically they were not there, but I am
>> > working through the app-agnostic-logging-parameters blueprint [1] right
>> > now and it's very related to this.  As part of this work I would be
>> > reviewing attributes that are more commonly used in subclassed context
>> > objects for inclusion into the base oslo.context class, a step before a
>> > more kwargs init() approach that many subclassed context objects utilize
>> now.
>> >
>> > I am also proposing the standardization of context arguments [2]
>> > (specifically ids), and names are not mentioned but I would like to
>> > follow the proposed convention.
>> >
>> > However, as you point out in the middleware [3], if the information is
>> > already available I see no reason not to let the base oslo.context
>> > class consume this for subsequent use by logging.  FYI the
>> > get_logging_values() work in [4] is specifically to add logging-only
>> > values and this can be the first use case.
>> >
>> > While devstack uses these logging format string options, the defaults
>> > (which I presume are operator-centric) do not.  One of my goals of the
>> > Austin Ops summit is to get to talk with actual operators and find out
>> > what is really in use.   Regardless, the capacity to choose should be
>> > available when possible if the information is already identified without
>> > subsequent lookup.
>> >
>> >
>> > Ronald
>> >
>> >
>> > [1]
>> https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-parameters
>> > [2] https://review.openstack.org/#/c/290907/
>> > [3]
>> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>> > [4] https://review.openstack.org/#/c/274186/
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Apr 5, 2016 at 2:31 PM, Sean Dague wrote:
>> > I was trying to clean up the divergent logging definitions in
>> devstack
>> > as part of scrubbing out 'tenant' references -
>> > https://review.openstack.org/#/c/301801/ and in doing so stumbled
>> over
>> > the fact that the extremely useful project_name and user_name
>> fields are
>> > not in base oslo.context.
>> >
>> >
>> https://github.com/openstack/oslo.context/blob/c63a359094907bc50cc5e1be716508ddee825dfa/oslo_context/context.py#L148-L159
>> >
>> > These are always available to be set -
>> >
>> http://docs.openstack.org/developer/keystonemiddleware/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service
>> >
>> > And they 

Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-04-03 Thread Jamie Lennox
On 2 April 2016 at 09:21, Rodrigo Duarte  wrote:

>
>
> On Thu, Mar 31, 2016 at 1:11 PM, Matthew Treinish 
> wrote:
>
>> On Thu, Mar 31, 2016 at 11:38:55AM -0400, Minying Lu wrote:
>> > Hi all,
>> >
>> > I'm working on resource federation at the Massachusetts Open Cloud. We
>> want
>> > to implement functional test on the k2k federation, which requires
>> > authentication with both a local keystone and a remote keystone (in a
>> > different cloud installation). It also requires a K2K/SAML assertion
>> > exchange with the local and remote keystones. These functions are not
>> > implemented in the current tempest.lib.service library, so I'm adding
>> code
>> > to the service library.
>> >
>> > My question is, is it possible to adapt keystoneauth python clients? Or
>> do
>> > you prefer implementing it with http requests.
>>
>> So tempest's clients have to be completely independent. That's part of
>> tempest's
>> design points about testing APIs, not client implementations. If you need
>> to add
>> additional functionality to the tempest clients that's fine, but pulling
>> in
>> keystoneauth isn't really an option.
>>
>
> ++
>
>
>>
>> >
>> > And since this test requires a lot of environment set up including: 2
>> > separate cloud installations, shibboleth, creating mapping and
>> protocols on
>> > remote cloud, etc. Would it be within the scope of tempest's mission?
>>
>> From the tempest perspective it expects the environment to be setup and
>> already
>> exist by the time you run the test. If it's a valid use of the API, which
>> I'd
>> say this is and an important one too, then I feel it's fair game to have
>> tests
>> for this live in tempest. We'll just have to make the configuration
>> options
>> around how tempest will do this very explicit to make sure the necessary
>> environment exists before the tests are executed.
>>
>
> Another option is to add those tests to keystone itself (if you are not
> including tests that triggers other components APIs). See
> https://blueprints.launchpad.net/keystone/+spec/keystone-tempest-plugin-tests
>
>

Again, though, the problem is not where the tests live but where we run
them. To practically run these tests we need to either add K2K testing
support to devstack (not sure this is appropriate) or come up with a new
test environment that deploys 2 keystones and federation support that we
can CI against in the gate. This is doable, but I think it's something we
need support from infra with before worrying about tempest.



>
>> The fly in the ointment for this case will be CI though. For tests to
>> live in
>> tempest they need to be verified by a CI system before they can land. So
>> to
>> land the additional testing in tempest you'll have to also ensure there
>> is a
>> CI job setup in infra to configure the necessary environment. While I
>> think
>> this is a good thing to have in the long run, it's not necessarily a small
>> undertaking.
>>
>
>> -Matt Treinish
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [horizon] [qa] keystone versionless endpoints and v3

2016-03-08 Thread Jamie Lennox
So with the merge of [1] the devstack change [2] to use unversioned
endpoints passes gate. From previous experience this should not be
considered an extensive test, however the only real way to find out what
doesn't work is to merge it and see what fails.


[1] https://review.openstack.org/#/c/287532/
[2] https://review.openstack.org/#/c/285879/

On 9 March 2016 at 10:21, Matt Fischer <m...@mattfischer.com> wrote:

> On Tue, Feb 23, 2016 at 8:49 PM, Jamie Lennox <jamielen...@gmail.com>
> wrote:
>
>>
>>
>> On 18 February 2016 at 10:50, Matt Fischer <m...@mattfischer.com> wrote:
>>
>>> I've been having some issues with keystone v3 and versionless endpoints
>>> and I'd like to know what's expected to work exactly in Liberty and beyond.
>>> I thought with v3 we used versionless endpoints but it seems to cause some
>>> breakages and some disagreement as to what should work.
>>>
>>
>> Excellent! I'm really glad someone is looking into this beyond the simple
>> cases.
>>
>>
>>> Here's what I've found:
>>>
>>> Using versionless endpoints:
>>>  - horizon project selector doesn't work (v3 api configured in horizon
>>> local_settings) [1]
>>>  - keystone client doesn't work (expected v3 I think)
>>>  - nova/neutron etc seem ok with a few exceptions [2]
>>>
>>> Adding /v3 to my endpoints:
>>>  - openstackclient seems to double up the /v3 reference which fails [3],
>>> this breaks puppet-openstack, in addition to general CLI usage.
>>>
>>> Adding /v2.0 to my endpoints:
>>>  - things seem to work the best this way
>>>  - this matches the install docs too
>>>  - its not very "v3-onic"
>>>
>>>
>>> My goal is to be as v3 as possible, but everything needs to work 100%.
>>> Given that...
>>>
>>> What's the correct and supported way to setup endpoints such that
>>> Keystone v3 works?
>>>
>>
>> So the problem with switching to v3 is that a lot of services and clients
>> were designed to assume you would have a /v2.0 on your URL. To work with v3
>> they therefore inspect the url and essentially s/v2.0/v3 before making
>> calls. Any of the services using the keystoneclient/keystoneauth session
>> stuff correctly shouldn't have this problem - but that is certainly not
>> everyone.
>>
>> It does however explain why you see problems with /v3 where /v2.0 seems
>> to work even for the v3 API.
>>
>>
>>> Are services expected to handle versionless keystone endpoints properly?
>>>
>>
>> Services should never need to manipulate the catalog. This is what's
>> causing the problem. If they leave it up to the client to do this then it
>> will handle the unversioned endpoint.
>>
>>
>>>
>>>
>>> Can I ignore that keystoneclient doesn't work with versionless? Does this
>>> imply that services that use the python library (like Horizon) will also be
>>> broken?
>>>
>>
>> This I'm surprised by. Do you mean the keystone CLI utility that ships
>> with keystoneclient? If so the decision was made it should never support v3
>> and to use openstackclient instead. I haven't actually looked at this in a
>> long time but we should probably fix it even though it's been deprecated
>> for a long time now.
>>
>>
>>> Do I need/Should I have both v2.0 and v3 endpoints in my catalog?
>>>
>>
>> No. And particularly with the new catalog formats that went through the
>> cross project working group recently we made the decision that these
>> endpoints should not contain a version number at all. This is not ready yet
>> but we are working towards that goal.
>>
>>
>>> [1] its making curl calls without a version on the endpoint, causing it
>>> to fail. I will file a bug pending the outcome of this discussion.
>>>
>>> [2] specifically neutron_admin_auth_url in nova.conf doesn't seem to
>>> work without a Keystone API version on it. For
>>> cinder keymgr_encryption_auth_url also seems to need it. I assume I'll
>>> eventually also hit some of these:
>>> https://etherpad.openstack.org/p/v3-only-devstack
>>>
>>
>> Can you file bugs for both of these? I've worked on both these sections
>> before so should be able to have a look into it.
>>
>> I was going to finish by saying that we have unversioned endpoints in
>> devstack - but looking again now and we don't :( There have been various
>> reverted patches in the v3 transition and th

Re: [openstack-dev] [openstack][keystone] What is the difference between auth_url and auth_uri?

2016-03-07 Thread Jamie Lennox
This is an unfortunate naming scheme that is a long story. Simple version
is:

auth_url is what the auth plugin is using, so where the process will
authenticate to before it authenticates tokens, probably an internal url.
auth_uri is what ends up in the WWW-Authenticate: keystone-uri= header and
so should be the unversioned public endpoint.
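In config terms this usually looks like the following `[keystone_authtoken]` fragment (an illustrative sketch only; the hostnames are placeholders, and whether you see `auth_type`/`username` or the older plugin-specific options depends on the keystonemiddleware release in use):

```ini
[keystone_authtoken]
# Advertised to clients in WWW-Authenticate; unversioned, public.
auth_uri = https://keystone.public.example.com:5000
# Where this service itself authenticates; often an internal URL.
auth_url = https://keystone.internal.example.com:35357
auth_type = password
username = nova
password = SECRET
project_name = service
user_domain_name = Default
project_domain_name = Default
```

So the two values can legitimately differ in any deployment that separates internal and public Keystone endpoints.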

Sorry for the confusion.

On 29 February 2016 at 22:41, Qiao, Liyong  wrote:

> URI and URL are different, but sometimes they might be the same.
>
>
>
> Well, you can see it from http://www.ietf.org/rfc/rfc3986.txt
>
> BR, Eli(Li Yong)Qiao
>
>
>
> *From:* 王华 [mailto:wanghua.hum...@gmail.com]
> *Sent:* Monday, February 29, 2016 7:04 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [openstack][keystone] What is the difference
> between auth_url and auth_uri?
>
>
>
> Hi all,
>
>
>
> There are two config parameters (auth_uri and auth_url) in
> keystone_authtoken group. I want to know what is the difference between
> them. Can I use only one of them?
>
>
>
>
>
> Best Regards,
>
> Wanghua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient region setting

2016-02-24 Thread Jamie Lennox
On 23 February 2016 at 01:02, Monty Taylor  wrote:

> On 02/21/2016 11:40 PM, Andrey Kurilin wrote:
>
>> Hi!
>> `novaclient.client.Client` entry-point supports almost the same
>> arguments as `novaclient.v2.client.Client`. The difference is only in
>> api_version, so you can set up region via `novaclient.client.Client` in
>> the same way as `novaclient.v2.client.Client`.
>>
>
> The easiest way to get a properly constructed nova Client is with
> os-client-config:
>
> import os_client_config
>
> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
> OS_PASSWORD="REDACTED"
> OS_AUTH_URL="http://auth.vexxhost.net"
> OS_REGION_NAME="ca-ymq-1"
>
> client = os_client_config.make_client(
>     'compute',
>     auth_url=OS_AUTH_URL, username=OS_USERNAME,
>     password=OS_PASSWORD, project_name=OS_PROJECT_NAME,
>     region_name=OS_REGION_NAME)
>
> The upside is that the constructor interface is the same for all of the
> rest of the client libs too (just change the first argument) - and it will
> also read in OS_ env vars or named clouds from clouds.yaml if you have them
> set.
>
> (The 'simplest' way is to put your auth and region information into a
> clouds.yaml file like this:
>
>
> http://docs.openstack.org/developer/os-client-config/#site-specific-file-locations
>
> Such as:
>
> # ~/.config/openstack/clouds.yaml
> clouds:
>   vexxhost:
>  profile: vexxhost
>  auth:
>project_name: d8af8a8f-a573-48e6-898a-af333b970a2d
>username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1
>password: REDACTED
>  region_name: ca-ymq-1
>
>
> And do:
>
> client = os_client_config.make_client('compute', cloud='vexxhost')
>
>
> If you don't want to do that for some reason but you'd like to construct a
> novaclient Client object by hand:
>
>
> from keystoneauth1 import loading
> from keystoneauth1 import session as ksa_session
> from novaclient import client as nova_client
>
> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
> OS_PASSWORD="REDACTED"
> OS_AUTH_URL="http://auth.vexxhost.net"
> OS_REGION_NAME="ca-ymq-1"
>
> # Get the auth loader for the password auth plugin
> loader = loading.get_plugin_loader('password')
> # Construct the auth plugin
> auth_plugin = loader.load_from_options(
>     auth_url=OS_AUTH_URL, username=OS_USERNAME, password=OS_PASSWORD,
>     project_name=OS_PROJECT_NAME)
>

Note that loaders here are generally things like load-from-config-file or
load-from-argparse-arguments. If you are doing everything in a script like
this you would probably just use keystoneauth1.identity.Password directly, like:

from keystoneauth1 import identity

auth_plugin = identity.Password(
    auth_url=OS_AUTH_URL, username=OS_USERNAME, password=OS_PASSWORD,
    project_name=OS_PROJECT_NAME)



> # Construct a keystone session
> # Other arguments that are potentially useful here are:
> #  verify - bool, whether or not to verify SSL connection validity
> #  cert - SSL cert information
> #  timeout - time in seconds to use for connection level TCP timeouts
> session = ksa_session.Session(auth_plugin)
>
> # Now make the client
> # Other arguments you may be interested in:
> #  service_name - if you need to specify a service name for finding the
> # right service in the catalog
> #  service_type - if the cloud in question has given a different
> # service type (should be 'compute' for nova - but
> # novaclient sets it, so it's safe to omit in most cases
> #  endpoint_override - if you want to tell it to use a different URL
> #  than what the keystone catalog returns
> #  endpoint_type - if you need to specify admin or internal
> #  endpoints rather than the default 'public'
> #  Note that in glance and barbican, this key is called
> #  'interface'
> client = nova_client.Client(
>     version='2.0', # or set the specific microversion you want
>     session=session, region_name=OS_REGION_NAME)
>
> It might be clear why I prefer the os_client_config factory function
> instead - but what I prefer and what you prefer might not be the same
> thing. :)
>
> On Mon, Feb 22, 2016 at 6:11 AM, Xav Paice wrote:
>>
>> Hi,
>>
>> In http://docs.openstack.org/developer/python-novaclient/api.html
>> it's got some pretty clear instructions not to
>> use novaclient.v2.client.Client but I can't see another way to
>> specify the region - there's more than one in my installation, and
>> no param for region in novaclient.client.Client
>>
>> Shall I hunt down/write a blueprint for that?
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [keystone] [horizon] [qa] keystone versionless endpoints and v3

2016-02-23 Thread Jamie Lennox
On 18 February 2016 at 10:50, Matt Fischer  wrote:

> I've been having some issues with keystone v3 and versionless endpoints
> and I'd like to know what's expected to work exactly in Liberty and beyond.
> I thought with v3 we used versionless endpoints but it seems to cause some
> breakages and some disagreement as to what should work.
>

Excellent! I'm really glad someone is looking into this beyond the simple
cases.


> Here's what I've found:
>
> Using versionless endpoints:
>  - horizon project selector doesn't work (v3 api configured in horizon
> local_settings) [1]
>  - keystone client doesn't work (expected v3 I think)
>  - nova/neutron etc seem ok with a few exceptions [2]
>
> Adding /v3 to my endpoints:
>  - openstackclient seems to double up the /v3 reference which fails [3],
> this breaks puppet-openstack, in addition to general CLI usage.
>
> Adding /v2.0 to my endpoints:
>  - things seem to work the best this way
>  - this matches the install docs too
>  - its not very "v3-onic"
>
>
> My goal is to be as v3 as possible, but everything needs to work 100%.
> Given that...
>
> What's the correct and supported way to setup endpoints such that Keystone
> v3 works?
>

So the problem with switching to v3 is that a lot of services and clients
were designed to assume you would have a /v2.0 on your URL. To work with v3
they therefore inspect the url and essentially s/v2.0/v3 before making
calls. Any of the services using the keystoneclient/keystoneauth session
stuff correctly shouldn't have this problem - but that is certainly not
everyone.

It does however explain why you see problems with /v3 where /v2.0 seems to
work even for the v3 API.


> Are services expected to handle versionless keystone endpoints properly?
>

Services should never need to manipulate the catalog. This is what's
causing the problem. If they leave it up to the client to do this then it
will handle the unversioned endpoint.
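The URL hacking described above can be sketched in a few lines (illustrative only; `naive_v3_url` is a hypothetical stand-in for the string surgery various services did, not any real client function):

```python
# Sketch of the historical s/v2.0/v3 hack and why it misbehaves
# once endpoints lose their version suffix.

def naive_v3_url(catalog_url):
    # the old trick: rewrite a versioned v2.0 URL to v3 in place
    return catalog_url.replace('/v2.0', '/v3')


# Works when the catalog happens to carry /v2.0:
versioned = naive_v3_url('http://keystone:5000/v2.0')

# On a versionless endpoint nothing is rewritten, so code that
# assumed a version suffix ends up requesting unversioned paths:
unversioned = naive_v3_url('http://keystone:5000')

# And clients that instead blindly append a version to an
# already-versioned endpoint produce the /v3/v3 doubling seen in
# footnote [3] of this thread:
doubled = 'http://keystone:5000/v3' + '/v3/auth/tokens'
```

Proper version discovery in the client (as the keystoneauth session does) avoids all of this by asking the endpoint what versions it serves.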


>
>
> Can I ignore that keystoneclient doesn't work with versionless? Does this
> imply that services that use the python library (like Horizon) will also be
> broken?
>

This I'm surprised by. Do you mean the keystone CLI utility that ships with
keystoneclient? If so the decision was made it should never support v3 and
to use openstackclient instead. I haven't actually looked at this in a long
time but we should probably fix it even though it's been deprecated for a
long time now.


> Do I need/Should I have both v2.0 and v3 endpoints in my catalog?
>

No. And particularly with the new catalog formats that went through the
cross project working group recently we made the decision that these
endpoints should not contain a version number at all. This is not ready yet
but we are working towards that goal.


> [1] its making curl calls without a version on the endpoint, causing it to
> fail. I will file a bug pending the outcome of this discussion.
>
> [2] specifically neutron_admin_auth_url in nova.conf doesn't seem to work
> without a Keystone API version on it. For cinder keymgr_encryption_auth_url
> also seems to need it. I assume I'll eventually also hit some of these:
> https://etherpad.openstack.org/p/v3-only-devstack
>

Can you file bugs for both of these? I've worked on both these sections
before so should be able to have a look into it.

I was going to finish by saying that we have unversioned endpoints in
devstack - but looking again now and we don't :( There have been various
reverted patches in the v3 transition and this must have been one of them.

For now i would suggest keeping the endpoints with the /v2.0 prefix as even
things using v3 API know how to work around this. The goal is to go
versionless everywhere (including other services, long goal but the others
will be easier than keystone) and anything you find that isn't working
isn't using the clients correctly so file a bug and add me to it.


Jamie



> [3] "Making authentication request to
> http://127.0.0.1:5000/v3/v3/auth/tokens"
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Addressing issue of keysone token expiry during long running operations

2015-12-20 Thread Jamie Lennox
Hey Paul,

At the Tokyo summit we discussed a general way to make it so that user
tokens were only expiration tested once. When the token hits nova for
example we can say it was validated, then when nova talks to glance it
sends both the user token (or enough data to represent the user token) and
an X-Service-Token which is the token nova validated with and we say the
presence of the X-Service-Token means we should trust that the previous
service already did enough validation to just trust it.

This is a big effort because it's going to require changing how service to
service communication works at all places.

At the moment I don't have a blueprint for this. The biggest change is
going to be making service->service communication rely on keystoneauth auth
plugins, so that we can have the auth plugin control what data is
communicated rather than hack this in to every location. So far this has
required updates to middleware, and in future to oslo.context and others,
to make this easy for services to consume. This work has been ongoing by
myself, mordred and morgan (if you see reviews to switch your service to
keystoneauth plugins, please review, as it will make the rest of this work
easier in future).

I certainly don't expect to see this pulled off in Mitaka time frame.

In the meantime, more and more services are relying on trusts, which is an
unfortunate but workable solution.

Jamie

On 18 December 2015 at 22:13, Paul Carlton  wrote:

> Jamie
>
> John Garbutt suggested I follow up this issue with you.  I understand you
> may be leading the
> effort to address the issue of token expiry during a long running
> operation.  Nova
> encounter this scenario during image snapshots and live migrations.
>
> Is there a keystone blueprint for this issue?
>
> Thanks
>
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Mobile:+44 (0)7768 994283
> Email:mailto:paul.carlt...@hpe.com
> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks
> RG12 1HN Registered No: 690597 England.
> The contents of this message and any attachments to it are confidential
> and may be legally privileged. If you have received this message in error,
> you should delete it from your system immediately and advise the sender. To
> any recipient of this message within HP, unless otherwise stated you should
> consider this message and attachments as "HP CONFIDENTIAL".
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron][keystone] how to reauth the token

2015-12-20 Thread Jamie Lennox
On 17 December 2015 at 02:59, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> Hi all,
>
> I'd like to start discussion on how Ironic is using Neutron when Keystone
> is involved.
>
> Recently the patch [0] was merged in Ironic to fix a bug when the token
> with which to create the neutronclient is expired. For that Ironic now
> passes both username/password of its own service user and the token from
> the request to the client. But that IMO is a wrong thing to do.
>
> When a token is given but happens to be expired, neutronclient will
> reauthenticate [1] using provided credentials for service tenant and user
> - but in fact the original token might have come from completely different
> tenant. Thus the action neutron is performing might look for / change
> resources in the service tenant instead of the tenant for which the
> original token was issued.
>
> Ironic by default is an admin-only service, so the token that is accepted is
> admin-scoped, but still it might be coming from different tenants (e.g.
> service tenant or actual admin tenant, or some other tenant that admin is
> logged into). And even in the case of admin-scoped token I'm not sure how
> this will work for domain-separated tenants in Keystone v3. Does
> admin-scoped neutronclient show all ports including those created by
> tenants in domains other than the domain of admin tenant?
>
> If I understand it right, the best we could do is use keystoneauth *token
> auth plugins that can reauth when the token is about to expire (but of
> course not when it is already expired).
>
>
I'm not familiar with ironic as to what token is being passed around there.

If it's the user's token there's really nothing we can do. You can't
refresh a token a user gave you (that would be a big security issue), and
using authentication plugins there really isn't going to help. In this case
it's weird to pass both the token and the user/pass because, assuming
neutronclient allows that at all, you're not going to know whether you
performed an operation as the user or as the service.

If it's the token of the ironic service user (which seems possible because
in that patch you've removed the else statement to always use the ironic
service user) then yes if you were to use authentication plugins the token
would be refreshed for you automatically because we have the username and
password available to get a new token.
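The refresh logic the plugins use boils down to an expiry check with a small slack window, so a token is renewed shortly before (never after) it expires. A rough stdlib sketch, with illustrative names rather than keystoneauth's actual API:

```python
from datetime import datetime, timedelta

# Illustrative slack window; the real value is an implementation detail
# of the auth plugin.
STALE_WINDOW = timedelta(seconds=30)

def will_expire_soon(expires_at, now=None):
    """Return True if the token expires within the stale window.

    Refreshing *before* expiry is the only safe point to re-auth:
    once the token has actually expired it cannot be used at all.
    """
    now = now or datetime.utcnow()
    return expires_at - STALE_WINDOW <= now

issued = datetime(2015, 12, 17, 12, 0, 0)
expires = issued + timedelta(hours=1)

# A token valid for an hour is fine at issue time...
print(will_expire_soon(expires, now=issued))  # False
# ...but ten seconds before expiry it should be refreshed.
print(will_expire_soon(expires, now=issued + timedelta(minutes=59, seconds=50)))  # True
```

A plugin holding the username and password can simply re-authenticate whenever this check trips; a plugin holding only a bare token cannot.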

The only real option at the moment for extending the life of the user token
is to establish a trust with keystone immediately on receiving the user
token that delegates permission from the user to the service. You then use
the service token (refreshable) to perform operations before returning to
the user. This is what heat and recently glance (and others) have done to
get around this problem.
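For illustration, the delegation a trust encodes maps onto keystone's v3 OS-TRUST API roughly as below; the IDs and role names are made up, and the exact body shape should be checked against the keystone version in use:

```python
import json

def build_trust_request(trustor_user_id, trustee_user_id, project_id, roles):
    """Build the body for POST /v3/OS-TRUST/trusts (approximate v3 shape)."""
    return {
        "trust": {
            "trustor_user_id": trustor_user_id,     # the end user delegating
            "trustee_user_id": trustee_user_id,     # the service user
            "project_id": project_id,               # scope of the delegation
            "impersonation": True,                  # act *as* the trustor
            "roles": [{"name": r} for r in roles],  # only these roles transfer
        }
    }

body = build_trust_request("user123", "ironic-svc", "proj456", ["member"])
print(json.dumps(body, indent=2))
# The trust_id from keystone's response is then combined with the
# (refreshable) service-user credentials to get trust-scoped tokens.
```

The key property is that the delegation is project- and role-restricted, unlike handing the service a raw bearer token.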

There is ongoing work to solve this in a better way for all services but
there is a lot to be done (change service->service communication
everywhere) before this is available so if you are experiencing problems i
wouldn't wait for it.

As a last aside, please create a separate section for the service user. You
can use the same credentials, but consider the keystone_authtoken section
off limits. The options you are reading from there are old, not used in
recent configurations (including devstack), and will mean that the
auth_token middleware in ironic can't be configured with v3, let alone
cert based auth or any of the new things we are introducing there.
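Such a split might look roughly like this in ironic.conf; the [service_user] section name and the exact option set are illustrative and depend on the keystoneauth version deployed:

```ini
[keystone_authtoken]
# Used only by auth_token middleware to validate incoming tokens.
auth_plugin = password
auth_url = http://keystone.example:35357
username = ironic
password = secret
project_name = service

[service_user]
# Ironic's own outgoing service credentials live in their own section,
# even if the values happen to be the same as above.
auth_plugin = password
auth_url = http://keystone.example:35357
username = ironic
password = secret
project_name = service
```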



[0] https://review.openstack.org/#/c/255885
> [1]
> https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L173
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-07 Thread Jamie Lennox
On 8 December 2015 at 07:53, Thomas Goirand  wrote:

> On 12/01/2015 07:57 AM, Steve Martinelli wrote:
> > Trying to summarize here...
> >
> > - There isn't much interest in keeping eventlet around.
> > - Folks are OK with running keystone in a WSGI server, but feel they are
> > constrained by Apache.
> > - uWSGI could help to support multiple web servers.
> >
> > My opinion:
> >
> > - Adding support for uWSGI definitely sounds like it's worth
> > investigating, but not achievable in this release (unless someone
> > already has something cooked up).
> > - I'm tempted to let eventlet stick around another release, since it's
> > causing pain for some of our operators.
> > - Other folks have managed to run keystone in a web server (and
> > hopefully not feel pain when doing so!), so it might be worth getting
> > technical details on just how it was accomplished. If we get an OK from
> > the operator community later on in mitaka, I'd still be OK with removing
> > eventlet, but I don't want to break folks.
> >
> > stevemar
> >
> > From: John Dewey 
> > 100% agree.
> >
> > We should look at uwsgi as the reference architecture. Nginx/Apache/etc
> > should be interchangeable, and up to the operator which they choose to
> > use. Hell, with tcp load balancing now in opensource Nginx, I could get
> > rid of Apache and HAProxy by utilizing uwsgi.
> >
> > John
>
> The main problem I see with running Keystone (or any other service) in a
> web server is that *I* (as a package maintainer) will lose control
> over when the service is started. Let me explain why that is important
> for me.
>
> In Debian, many services/daemons are run, then their API is used by the
> package. In the case of Keystone, for example, it is possible to ask,
> via Debconf, that Keystone registers itself in the service catalog. If
> we get Keystone within Apache, it becomes at least harder to do so.
>

I was going to leave this up to others to comment on here, but IMO -
excellent. Anyone that is doing an even semi-serious deployment of
OpenStack is going to require puppet/chef/ansible or some form of
orchestration layer for deployment. Even for test deployments it seems to
me that it's crazy for this sort of functionality to be handled from debconf.
The deployers of the system are going to understand whether they want to use
eventlet or apache and should therefore understand what restarting apache
on a system implies.


>
> The other issue is that if all services are sharing the same web server,
> restarting the web server restarts all services. Or, said otherwise: if
> I need to change a configuration value of any of the services served by
> Apache, I will need to restart them all, which is very annoying: I very
> much prefer to just restart *ONE* service if I need.
>
> Also, something which we learned the hard way at Mirantis: it is *very*
> annoying that Apache restarts every Sunday morning by default in
> distributions like Ubuntu and Debian (I'm not sure for the other
> distros). No, the default config of logrotate and Apache can't be
> changed in distros just to satisfy OpenStack users: there's other users
> of Apache in these distros.
>

:O


>
> Then, yes, uWSGI becomes a nice option. I used it for the Barbican
> package, and it worked well. Though the uwsgi package in Debian isn't
> very well maintained, and multiple times, Barbican could have been
> removed from Debian testing because of RC bugs against uWSGI.
>
> So, all together, I'm a bit reluctant to see the Eventlet based servers
> going away. If it's done, then yes, I'll work around it. Though I'd
> prefer if it didn't.
>
> It is also my view that it's up to the deployers to decide how they want
> to implement things. For many small use cases, Eventlet performs well
> enough.
>
> Finally, one thing which I never understood: if Eventlet is bad as an
> HTTP server, can't we use anything else written in Python? Isn't it
> possible to write a decent HTTP server in Python? Why are we forced into
> just Eventlet for doing the job? I haven't searched around, but there
> must be loads of alternatives, no?
>
> Cheers,
>
> Thomas Goirand (zigo)
>

So I'd be ok with keeping eventlet around until after we can figure out
something for multiple virtual envs (I think you'd replace virtualenvs with
containers), but I don't think the packaging should have anything to do
with this.


> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [openstack-dev] [keystone] Case for renewability of tokens, increasing expiration time

2015-11-21 Thread Jamie Lennox
I realize this has been mostly closed up, but just a few additions.

On 19 November 2015 at 08:06, Dolph Mathews  wrote:


> On Tue, Nov 17, 2015 at 2:56 PM, Lindsay Pallickal 
> wrote:
>
>>
>>
>> On Tue, Nov 17, 2015 at 5:31 AM, Dolph Mathews 
>> wrote:
>>
>>>
>>>
>>> On Tuesday, November 17, 2015, Lindsay Pallickal 
>>> wrote:
>>>

 It looks like expiration is done this way to limit the damage from
 stolen tokens, but if a token were stolen, it could be stolen from the same
 location a username and password will now have to sit, to get around having
 to re-supply them manually once an hour. Yes, forcing re-auth hourly with a
 username and password, at the Keystone level, can limit damage if any of
 the core Openstack services are compromised for the mass theft of tokens.
 But limited damage will depend just as much on how quickly the compromise
 is detected. The more time an attacker has in the system, the less his
 actions will be hampered by a lack of token renewals. VMs can be
 compromised, and data exported or destroyed given just a small window.

>>>
>>> The first part of this is only a "maybe", not a completely true
>>> assertion. There are *many* more opportunities for individual tokens to be
>>> exposed than for the credentials used to create them. In the case of a mass
>>> token compromise, which I consider to be a completely different scenario,
>>> token expiration probably isn't going to be any help because there's
>>> probably always a fresher token available to the attacker, anyway, until
>>> the exploit is closed. Instead, keystone has several means of quickly
>>> revoking tokens in bulk (revocation events, truncating the UUID token
>>> table, restarting the memcache token backend, or rotating all your Fernet
>>> keys away... all depending on your deployment).
>>>
>>
>> The token does get sent a lot more often than the username/password
>> credentials, but as long as both are sent via SSL/TLS, shouldn't the
>> opportunity for exposure be similar? Although, I could see it being easier
>> to accidentally send a token in the clear, or in a way vulnerable to a MITM
>> (ignoring SSL cert issues), as there are frequently more bits of code in
>> the client that deal with using a token, versus just a few bits to secure
>> when dealing with logging in. Tokens get passed around to far more services
>> with potential vulnerabilities as well, but I see that as a separate issue.
>> I agree with your comment that token expiration will not really help in a
>> mass compromise scenario.
>>
>
> Similar, yes, but with a couple significant differences. Off the top of my
> head:
>
> In keystone's v2 API, bearer tokens are a part of the URL in token
> validation calls and token revocation calls. Of course, access logs are
> everywhere and many deployments do these operations over HTTP! The v3 API
> relegates them to headers, at least. I wonder if it would be possible to
> have keystoneauth / keystoneclient log a client-side warning when it's
> asked to send a bearer token over HTTP?
>
> And certainly, the number of possible attack vectors on a bearer token
> being passed around to a bunch of services is increased by an order of
> magnitude.
>

OpenStack is also fairly free with its tokens. You send it to every service,
and every service that needs to do work elsewhere forwards it to someone
else. At least with the token system, the only service your
username/password is ever exposed to is keystone. An attacker able to
compromise another service still has a lot of power due to the bearer-ness
of tokens, but they didn't get your user/pass.
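Dolph's idea above of a client-side warning for bearer tokens sent over plain HTTP could be sketched along these lines (illustrative only; keystoneauth does not necessarily implement this):

```python
import logging
from urllib.parse import urlsplit

logging.basicConfig(level=logging.WARNING)
LOG = logging.getLogger(__name__)

def check_bearer_transport(url):
    """Warn before sending an X-Auth-Token to a non-TLS endpoint.

    Sketch of the client-side warning suggested in the thread; the
    function name and behavior are made up for illustration.
    """
    if urlsplit(url).scheme != "https":
        LOG.warning("sending bearer token over insecure transport: %s", url)
        return False
    return True

print(check_bearer_transport("https://keystone.example:5000/v3"))  # True
print(check_bearer_transport("http://keystone.example:5000/v3"))   # False
```

A client could call this once per endpoint before attaching the token header, keeping the warning cheap and non-fatal.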


>>
>>>

 On the other hand, if a user facing application, or a long running
 service supporting one, needs the API and is to remain convenient, they
 have to store usernames and passwords. Sometimes that long running service
 is connected to the outside world and faces security threats, such as when
 relaying Openstack API requests to their respective internal network
 endpoints - similar to what Horizon does. I wonder if I use Horizon
 continuously, whether it will force me to manually re-authenticate hourly,
 and if not, how it is getting around this problem, with or without storing
 the username and password.

>>>
The solution that was developed for this situation in heat was trusts. With
a trust you delegate certain roles from the original user to the service user
to act on your behalf. This lets those service users perform actions on
behalf of users after the initial request has expired. Now there are
similarly all sorts of security implications to trusts (the ability to
steal trust ids, who sets them up, what's delegated), however if done right
they are a much more secure alternative to simply storing user/pass, as at
least they are already project and role restricted.



>
>>> Bearer tokens are 

Re: [openstack-dev] [All] Use of self signed certs in endpoints

2015-11-15 Thread Jamie Lennox
On 14 November 2015 at 19:09, Xav Paice  wrote:

> Hi,
>
> I'm sure I'm not the only one that likes to use SSL everywhere possible,
> and doesn't like to pay for 'real' ssl certs for dev environments.
> Figuring out how to get requests to allow connection to the self signed
> cert would have paid for a real cert many times over.
>
> When I use an SSL cert with a CA not in the Mozilla bundle, and use
> keystonemiddleware to access Keystone endpoints, the ssl verification
> rightly fails.  It turns out requests doesn't use the system ca cert
> bundle, but has it's own.  It's also got a nice easy config option to set
> which ca cert bundle you want to use -
> http://docs.python-requests.org/en/latest/user/advanced/?highlight=ca_bundle#ssl-cert-verification
>
> How do people feel about having that as a config option set somewhere, so
> we can specify a ca cert in, say, heat.conf, so that we can continue with
> the self signed certs of cheapness without needing to hack up the
> cacert.pem that comes with requests, or find a way to pass in environment
> variables?
>
> Am I barking up the wrong tree here?  How would I go about writing a
> blueprint for this, and for which project?  I guess it's something that
> would need to be added to all the projects in the keystone_authtoken
> section?  Or is there a central place where common configs like this can
> live?
>

So this is an area that requests upstream and distros have fought about for
a while now, that and bundling. Typically the distros patch the requests
package so that it correctly reflects the system environment, so if you yum
install or apt-get install python-requests then it will work with the
system CAs. If you are running from a virtualenv or pip-installed
python-requests it won't.

Ideally we are moving everything to using keystoneclient/keystoneauth
sessions. These have support for cafile from the built in options loader,
so in future there should be config options that will allow you to always
specify a CA file to use if you're willing to chase down all the config
values.

For now the easiest way i know to do this is using the REQUESTS_CA_BUNDLE
environment variable. If found (and nothing else specified) this will be
used as the default CA bundle file instead of the inbuilt one. It also
respects the CURL_CA_BUNDLE variable.

I'm not sure if people would mind if we did some OS discovery and just
overrode the requests defaults to always find the system CA. It doesn't
bother me. But we could really easily add our own OS_CA_BUNDLE env variable
to do the same thing as requests and override a system location.
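The fallback order being discussed, including the hypothetical OS_CA_BUNDLE variable floated above, amounts to a simple environment lookup; requests' real logic lives in its own code, so this is only a sketch:

```python
import os

def default_ca_bundle(environ=os.environ):
    """Mimic the env-var fallback requests uses for the CA bundle.

    requests honors REQUESTS_CA_BUNDLE first, then CURL_CA_BUNDLE.
    OS_CA_BUNDLE is the hypothetical addition from the thread, not a
    real variable, slotted in the same way.  The final system path is
    an illustration and varies by distro.
    """
    for var in ("REQUESTS_CA_BUNDLE", "CURL_CA_BUNDLE", "OS_CA_BUNDLE"):
        path = environ.get(var)
        if path:
            return path
    return "/etc/ssl/certs/ca-certificates.crt"  # distro-dependent default

print(default_ca_bundle({"REQUESTS_CA_BUNDLE": "/home/me/dev-ca.pem"}))
# -> /home/me/dev-ca.pem
```

Exporting the variable once in the service's environment is enough for every requests call in the process to pick up the self-signed CA.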



>
>


Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-12 Thread Jamie Lennox
On 12 November 2015 at 15:09, Clint Byrum  wrote:

> Excerpts from Clint Byrum's message of 2015-11-11 10:57:26 -0800:
> > Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> > > On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> > > >
> > > > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand 
> wrote:
> > > > >
> > > > > > Hi there!
> > > > > >
> > > > > > All of Liberty would be migrating from Sid to Testing (which is
> the
> > > > > > pre-condition for an upload to offical Debian backports) if I
> didn't
> > > > > > have a really annoying situation with the repoze.{what,who}
> packages.
> > > I
> > > > > > feel like I could get some help from the Python export folks
> here.
> > > > > >
> > > > > > What is it about?
> > > > > > =
> > > > > >
> > > > > > Here's the dependency chain:
> > > > > >
> > > > > > - Keystone depends on pysaml2.
> > > > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded to
> Sid.
> > > > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > > > >
> > > > > > Unfortunately, python-repoze.who doesn't migrate to Debian
> Testing
> > > > > > because it would make python-repoze.what broken.
> > > > > >
> > > > > > To make the situation worse, python-repoze.what build-depends on
> > > > > > python-repoze.who-testutil, which itself doesn't work with
> > > > > > python-repoze.who >= 2.
> > > > > >
> > > > > > Note: repoze.who-testutil is within the package
> > > > > > python-repoze.who-plugins who also contains 4 other plugins
> which are
> > > > > > all broken with repoze.who >= 2, but the others could be dropped
> from
> > > > > > Debian easily). We can't drop repoze.what completely, because
> there's
> > > > > > turbogears2 and another package who needs it.
> > > > > >
> > > > > > There's no hope from upstream, as all of these seem to be
> abandoned
> > > > > > projects.
> > > > > >
> > > > > > So I'm a bit stuck here, helpless, and I don't know how to fix
> the
> > > > > > situation... :(
> > > > > >
> > > > > > What to fix?
> > > > > > 
> > > > > > Make repoze.what and repoze.who-testutil work with repoze.who >=
> 2.
> > > > > >
> > > > > > Call for help
> > > > > > =
> > > > > > I'm a fairly experienced package maintainer, but I still consider
> > > myself
> > > > > > a poor Python coder (probably because I spend all my time
> packaging
> > > > > > rather than programming in Python: I know a way better other
> > > programing
> > > > > > languages).
> > > > > >
> > > > > > So I would enjoy a lot having some help here, also because my
> time is
> > > > > > very limited and probably better invested working on packages to
> > > assist
> > > > > > the whole OpenStack project, rather than upstream code on some
> weirdo
> > > > > > dependencies that I don't fully understand.
> > > > > >
> > > > > > So, would anyone be able to invest a bit of time, and help me
> fix the
> > > > > > problems with repoze.what / repoze.who in Debian? If you can
> help,
> > > > > > please ping me on IRC.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Thomas Goirand (zigo)
> > > > > >
> > > > > >
> > > > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > > > >
> > > > > I admit I haven't tested it, but the requirements declaration
> doesn't
> > > seem
> > > > > to enforce the need for > 2. If that is in-fact the case that > 2
> is
> > > > > needed, we are a somewhat of an impass with dead/abandonware
> holding us
> > > > > ransom. I'm not sure what the proper handling of that ends up
> being in
> > > the
> > > > > debian world.
> > > >
> > > > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> > > >
> > > > https://github.com/repoze/repoze.who/commits/master
> > > >
> > > > who's just not being released (does anybody else smell a Laurel and
> > > > Hardy skit coming on?)
> > >
> > > Seriously!
> > >
> > > >
> > > > Also, this may have been something temporary, that then got left
> around
> > > > because nobody bothered to try the released versions:
> > > >
> > > >
> > >
> https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> > > >
> > > > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> > > >
> > > >
> > >
> > > Def something to try out. If this is still an outstanding issue next
> week
> > > (when I have a bit more time) I'll see what I can do to test out the
> > > variations.
> >
> > FYI, I tried 2.0 and it definitely broke repoze.what's test suite. The
> API
> > is simply incompatible (shake your fists at whoever did that please). For
> > those not following along: please make a _NEW_ module when you break
> > your API.
> >
> > On the off chance it could just be dropped, I looked at turbogears2, and
> > this seems to be the only line 

Re: [openstack-dev] [keystone] [puppet] [neutron] Deprecating nova_admin_ options in puppet-neutron

2015-10-14 Thread Jamie Lennox
On re-reading the original email i've been thinking of auth_token
middleware rather than nova_admin_* options (i haven't done much
neutron config). I think everything still applies though and hopefully
this can be a mechanism that's reused across modules.

On 15 October 2015 at 10:37, Jamie Lennox <jamielen...@redhat.com> wrote:
> TL;DR: you can fairly easily convert existing puppet params into auth
> plugin format for now, but we're going to need the hash based config
> soon.
>
> I think as a first step it is a good idea to replace the deprecated
> options and use the password, v2password or v3password plugins*
> because you will need to maintain compatibility with the existing
> auth_user, auth_tenant etc options.
>
> However I would like to suggest all the puppet projects start to look
> at some way of passing around a hash of this information. We are
> currently at the stage where we have both kerberos and x509 auth_token
> middleware authentication working and IMO this will become the
> preferred deployment mechanism for service authentication in
> environments that have this infrastructure. (SAML and other auth
> mechanisms have also been proven to work here but are less likely to
> be used for service users). Note that this will not only apply to
> auth_token service users, but any service who has admin credentials
> configured in their conf file.
>
> I don't think it's necessary for puppet to validate the contents of
> these hashes, but I think it will be a losing battle to add all the
> options required for all upcoming authentication types to each
> service.
>
> I'm not sure if this makes it easier for you or not, but for
> situations exactly like this loading auth plugins from a config file
> take an auth_section option so you can do:
>
> [keystone_authtoken]
> auth_section = my_authentication_conf
>
> [my_authentication_conf]
> auth_plugin = password
> ...
>
> and essentially dump that hash straight into config without fear of
> having them conflict with existing options. It would also let you
> share credentials if you configure for example the nova service user
> in multiple places in the same config file, you can point multiple
> locations to the same auth_section.
>
>
>
> * The difference is that password queries keystone for available
> versions and uses the discovered urls for the correct endpoint, and so
> expects the auth_url to be 'http://keystone.url:5000/'. v2password and
> v3password are v2- and v3-specific, and so expect the URL to be of the
> /v2.0 or /v3 form. Password will work with /v2.0 or /v3 urls because those
> endpoints return only the current url as part of discovery, and so
> password is preferred. For the smallest possible change v2password is
> closer to what the old options provide, but then you'll have a bigger
> step to get to v3 auth - which we want fast.
>
> On 14 October 2015 at 22:20, Sergey Kolekonov <skoleko...@mirantis.com> wrote:
>> Hi folks,
>>
>> Currently puppet-neutron module sets nova_admin_* options in neutron.conf
>> which are deprecated since Kilo release. I propose to replace them, but we
>> need to discuss how to do it better. I raised this question at
>> puppet-openstack weekly meeting yesterday [0]. So the main concern here is
>> that we need to switch to Keystone auth plugins to get rid of these options
>> [1] [2], but there's a possibility to create a custom plugin, so all
>> required parameters are unknown in general case.
>>
>> It seems reasonable to support only the basic plugin (password), or also
>> token, as the most common cases; otherwise an ability to pass all required
>> parameters as a hash should be added, which looks like a bit of an overkill.
>>
>> What do you think?
>>
>> Thanks.
>>
>> [0]
>> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-13-15.00.log.html
>> [1] https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L783
>> [2]
>> http://docs.openstack.org/developer/python-keystoneclient/authentication-plugins.html
>> --
>> Regards,
>> Sergey Kolekonov
>>


Re: [openstack-dev] [keystone] [puppet] [neutron] Deprecating nova_admin_ options in puppet-neutron

2015-10-14 Thread Jamie Lennox
TL;DR: you can fairly easily convert existing puppet params into auth
plugin format for now, but we're going to need the hash based config
soon.

I think as a first step it is a good idea to replace the deprecated
options and use the password, v2password or v3password plugins*
because you will need to maintain compatibility with the existing
auth_user, auth_tenant etc options.

However I would like to suggest all the puppet projects start to look
at some way of passing around a hash of this information. We are
currently at the stage where we have both kerberos and x509 auth_token
middleware authentication working and IMO this will become the
preferred deployment mechanism for service authentication in
environments that have this infrastructure. (SAML and other auth
mechanisms have also been proven to work here but are less likely to
be used for service users). Note that this will not only apply to
auth_token service users, but any service who has admin credentials
configured in their conf file.

I don't think it's necessary for puppet to validate the contents of
these hashes, but I think it will be a losing battle to add all the
options required for all upcoming authentication types to each
service.

I'm not sure if this makes it easier for you or not, but for
situations exactly like this loading auth plugins from a config file
take an auth_section option so you can do:

[keystone_authtoken]
auth_section = my_authentication_conf

[my_authentication_conf]
auth_plugin = password
...

and essentially dump that hash straight into config without fear of
having them conflict with existing options. It would also let you
share credentials if you configure for example the nova service user
in multiple places in the same config file, you can point multiple
locations to the same auth_section.
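The auth_section indirection above can be mimicked with plain configparser to show how the lookup resolves (a simplified sketch; oslo.config handles this for real deployments):

```python
import configparser

CONF = """
[keystone_authtoken]
auth_section = my_authentication_conf

[my_authentication_conf]
auth_plugin = password
username = nova
"""

def resolve_auth_options(parser, section):
    """Follow auth_section indirection before reading plugin options."""
    if parser.has_option(section, "auth_section"):
        section = parser.get(section, "auth_section")
    return dict(parser.items(section))

parser = configparser.ConfigParser()
parser.read_string(CONF)
opts = resolve_auth_options(parser, "keystone_authtoken")
print(opts["auth_plugin"], opts["username"])  # password nova
```

Multiple consumers in the same file can point their auth_section at the one shared block, which is the credential-sharing benefit described above.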



* The difference is that password queries keystone for available
versions and uses the discovered urls for the correct endpoint, and so
expects the auth_url to be 'http://keystone.url:5000/'. v2password and
v3password are v2- and v3-specific, and so expect the URL to be of the
/v2.0 or /v3 form. Password will work with /v2.0 or /v3 urls because those
endpoints return only the current url as part of discovery, and so
password is preferred. For the smallest possible change v2password is
closer to what the old options provide, but then you'll have a bigger
step to get to v3 auth - which we want fast.

On 14 October 2015 at 22:20, Sergey Kolekonov  wrote:
> Hi folks,
>
> Currently puppet-neutron module sets nova_admin_* options in neutron.conf
> which are deprecated since Kilo release. I propose to replace them, but we
> need to discuss how to do it better. I raised this question at
> puppet-openstack weekly meeting yesterday [0]. So the main concern here is
> that we need to switch to Keystone auth plugins to get rid of these options
> [1] [2], but there's a possibility to create a custom plugin, so all
> required parameters are unknown in general case.
>
> It seems reasonable to support only the basic plugin (password), or also
> token, as the most common cases; otherwise an ability to pass all required
> parameters as a hash should be added, which looks like a bit of an overkill.
>
> What do you think?
>
> Thanks.
>
> [0]
> http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-10-13-15.00.log.html
> [1] https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L783
> [2]
> http://docs.openstack.org/developer/python-keystoneclient/authentication-plugins.html
> --
> Regards,
> Sergey Kolekonov
>


Re: [openstack-dev] [all] Consistent support for SSL termination proxies across all API services

2015-09-23 Thread Jamie Lennox
So this is a long thread and I may have missed something in it,
however this exact topic came up as a blocker on a devstack patch to
get TLS testing in the gate with HAProxy.

The long term solution we had come up with (but granted not proposed
anywhere public) is that we should transition services to use relative
links.

As far as I'm aware this is only a problem within the services
themselves, as the URL they receive is not what was actually requested
if it went via HAProxy. It is not a problem with interservice requests
because they should get URLs from the service catalog (or otherwise
not display them to the user). Which means that this generally affects
the version discovery page, and "links" from resources, like a next,
prev, and base url.

Is there a reason we can't transition this to use a relative URL
possibly with a django style WEBROOT so that a discovery response
returned /v2.0 and /v3 rather than the fully qualified URL and the
clients be smart enough to figure this out?
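A relative-href discovery document would let clients resolve links against whatever base URL they actually used, e.g. the proxy's. A sketch of the client side, with a made-up discovery payload:

```python
from urllib.parse import urljoin

# Hypothetical discovery document using relative hrefs (with a
# django-style WEBROOT of /identity) instead of fully qualified URLs.
discovery = {"versions": [{"id": "v2.0", "href": "/identity/v2.0"},
                          {"id": "v3", "href": "/identity/v3"}]}

# The base is whatever the client actually hit -- here the proxy, not
# the backend keystone host.
requested_base = "https://haproxy.example/"

resolved = [urljoin(requested_base, v["href"]) for v in discovery["versions"]]
for url in resolved:
    print(url)
# https://haproxy.example/identity/v2.0
# https://haproxy.example/identity/v3
```

Because resolution happens client-side, the backend never needs to know the proxy's hostname at all.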



On 24 September 2015 at 07:51, Julien Danjou  wrote:
> On Wed, Sep 23 2015, Sean Dague wrote:
>
>> Ok, how exactly does that work? Because it seems like
>> oslo_middleware.ssl is only changing the protocol if the proxy sets it.
>>
>> But the host in the urls will still be the individual host, which isn't
>> the proxy hostname/ip. Sorry if I'm being daft here, just want to
>> understand how that flow ends up working.
>
> No problem, you're not supposed to know everything. :)
>
> As ZZelle said too, we can set the correct host and port expected by
> honoring X-Forwarded-Host and X-Forwarded-Port, which are set by HTTP
> proxies when they act as reverse-proxies and forward requests.
> That will make the WSGI application unaware that there is a reverse
> proxy in front of it. Magic!
>
> We could do that in the SSL middleware (and maybe rename it?) or in
> another middleware, and enable them by default. So we'd have that
> working by default, which would be great IMHO.
>
>> Will that cover the case of webob's request.application_uri? If so I
>> think that covers the REST documents in at least Nova (one good data
>> point, and one that I know has been copied around). At least as far as
>> the protocol is concerned, it's still got a potential url issue.
>
> That should work with any WSGI request, so I'd say yes.
>
>>> The {public,admin}_endpoint are only useful in the case where you map
>>> http://myproxy/identity -> http://mykeystone/ using a proxy
>>>
>>> Because the prefix is not passed to Keystone. If you map 1:1 the path
>>> part, we could also leverage X-Forwarded-Host and X-Forwarded-Port to
>>> avoid having {public,admin}_endpoint options.
>>
>> It also looks like there are new standards for Forwarded headers, so the
>> middleware should probably support those as well.
>> http://tools.ietf.org/html/rfc7239.
>
> Good point, we should update the middleware as needed.
>
> Though they still don't cover the use case where the base URL differs
> between the proxy and the application. I don't think it's a
> widely used case, but still, there are at least two ways to support it:
> 1. Having config option (like Keystone currently has)
> 2. Having a special e.g. X-Forwarded-BaseURL header set by the proxy
>that we would catch in our middleware and would prepend to
>environment['SCRIPT_NAME'].
>
> The 2 options are even compatible, though I'd say 2. is probably simpler
> in the long run and more… "unified".
>
> I'm willing to clear that out and come with specs and patches if that
> can help. :)
>
> --
> Julien Danjou
> # Free Software hacker
> # http://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
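The X-Forwarded-Host/-Port/-Proto handling Julien describes could be
sketched as a small WSGI middleware along these lines. This is
illustrative only, not the actual oslo_middleware code, which would also
need to handle the RFC 7239 Forwarded header and decide which proxies to
trust:

```python
class ForwardedHostMiddleware(object):
    """Toy WSGI middleware honoring X-Forwarded-* headers so the
    application reconstructs the URL the client actually requested."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Overwrite the backend's own host/port/scheme with whatever
        # the reverse proxy says the client used.
        host = environ.pop("HTTP_X_FORWARDED_HOST", None)
        if host:
            environ["HTTP_HOST"] = host
        port = environ.pop("HTTP_X_FORWARDED_PORT", None)
        if port:
            environ["SERVER_PORT"] = port
        proto = environ.pop("HTTP_X_FORWARDED_PROTO", None)
        if proto:
            environ["wsgi.url_scheme"] = proto
        return self.app(environ, start_response)


def _echo_app(environ, start_response):
    # Tiny demo app that echoes the URL base it believes it is at.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["wsgi.url_scheme"].encode() + b"://" +
            environ["HTTP_HOST"].encode()]


app = ForwardedHostMiddleware(_echo_app)
```

With something like this in front of the application, webob's
request.application_uri and friends would see the proxy's host and
scheme rather than the backend's own.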

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] keystone pluggable model

2015-09-09 Thread Jamie Lennox


- Original Message -
> From: "Murali Allada" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, 10 September, 2015 6:41:40 AM
> Subject: [openstack-dev] [magnum]  keystone pluggable model
> 
> 
> 
> ​ Hi All,
> 
> 
> 
> 
> 
> In the IRC meeting yesterday, I brought up this new blueprint I opened.
> 
> 
> 
> 
> 
> https://blueprints.launchpad.net/magnum/+spec/pluggable-keystone-model ​
> 
> 
> 
> 
> 
> The goal of this blueprint is to allow magnum operators to integrate with
> their version of keystone easily with downstream patches.
> 
> 
> 
> 
> 
> The goal is NOT to implement support for keystone version 2 upstream, but to
> make it easy for operators to integrate with V2 if they need to.
> 
> 
> 
> 
> 
> Most of the work required for this is already done in this patch.
> 
> 
> 
> 
> 
> https://review.openstack.org/#/c/218699
> 
> 
> 
> 
> 
> However, we didn't want to address this change in the same review.
> 
> 
> 
> 
> 
> We just need to refactor the code a little further and isolate all version
> specific keystone code to one file.
> 
> 
> 
> 
> 
> See my comments in the following review for details on what this change
> entails.
> 
> 
> 
> 
> 
> https://review.openstack.org/#/c/218699/5/magnum/common/clients.py
> 
> 
> 
> 
> 
> https://review.openstack.org/#/c/218699/5/magnum/common/keystone.py
> 
> 
> 
> 
> 
> Thanks,
> 
> 
> Murali

Hi, 

My keystone filter picked this thread up from the title, so I don't really know 
anything specific about magnum here, but can you explain a little more what you 
are looking for in terms of abstraction? 

Looking at the review, the only thing magnum is doing with the keystone API 
(not auth) is trust creation, and this is a v3-only feature, so there's not 
much value in a v2 client there. I know heat has run into this problem and took 
a similar approach, with a contrib v2 module that short-circuits some functions 
and leaves things somewhat broken. I don't think they would recommend it.

The other thing is auth. A version-independent auth mechanism is something that 
keystoneclient has supplied for a while now. Here are two blog posts that show 
how to use sessions and auth plugins[1][2] from keystoneclient, such that the 
type (service passwords must die) or version of authentication used is purely a 
deployment configuration choice. All the clients I know of, with the exception 
of swift, support sessions and plugins, so this would seem like an ideal time 
for magnum to adopt them rather than reinvent auth version abstraction; you'll 
get some wins, like not having to hack already-authenticated tokens into each 
client.

From the general design around client management it looks like you've taken 
some inspiration from heat, so you might be interested in the recently merged 
patches there that convert it to using auth plugins. 

If you need any help with this please ping me on IRC. 


Jamie


[1] 
http://www.jamielennox.net/blog/2014/09/15/how-to-use-keystoneclient-sessions/
[2] http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]how to get service_catalog

2015-09-07 Thread Jamie Lennox
- Original Message -

> From: "王华" <wanghua.hum...@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Sent: Tuesday, 8 September, 2015 11:36:15 AM
> Subject: Re: [openstack-dev] [keystone]how to get service_catalog

> Hi Jamie,

> We want to reuse the user token in magnum, but there is no convenient way to
> reuse it. It would be better if we could use ENV['keystone.token_auth'] to
> init keystoneclient directly. Currently we need to construct an auth_ref,
> which is a parameter of the keystoneclient init function, from
> ENV['keystone.token_auth']. I think it is a common case that a service wants
> to reuse the user token to do something like getting the service_catalog.
> Can keystoneclient provide this feature?

> Regards,
> Wanghua

Yes, that's exactly what ENV['keystone.token_auth'] is for. 

You would need to create a session object [1], which can live for the whole 
process (long lived). Then when you create a client you pass 
Client(session=sess, auth=ENV['keystone.token_auth']). This works for all the 
clients I know of, with the exception of swift. It will reuse the user's auth 
token and the user's service catalog. 

For keystone there is an issue with doing this with the client.Client() object, 
as it still wants a URL passed (there is a bug for this that I can find if 
you're interested). If you are able to, I recommend using client.v3.Client() 
directly. 

Jamie 

[1] 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/session.py#L845
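To make the pattern concrete, here is a toy model of what the auth
plugin in ENV['keystone.token_auth'] carries and why passing it to a
client avoids re-authentication. These are simplified stand-ins, not
keystonemiddleware's or keystoneclient's actual classes:

```python
class TokenAuthPlugin(object):
    # Stand-in for the plugin keystonemiddleware injects: it already
    # holds the validated user token and the user's service catalog.
    def __init__(self, token, catalog):
        self._token = token
        self._catalog = catalog

    def get_token(self):
        return self._token

    def get_endpoint(self, service_type):
        return self._catalog[service_type]


class ServiceClient(object):
    # Stand-in for a python-*client: with a plugin supplied, it never
    # needs to re-authenticate or be handed a raw token/URL pair.
    def __init__(self, session, auth):
        self.session = session
        self.auth = auth

    def request_headers(self):
        return {"X-Auth-Token": self.auth.get_token()}


plugin = TokenAuthPlugin("user-token-id",
                         {"identity": "http://keystone:5000/v3"})
client = ServiceClient(session=object(), auth=plugin)
```

The point of the real plugin interface is the same as in this toy: the
token and catalog travel together behind one object, so every client
that accepts auth= gets both for free.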
 

> On Mon, Sep 7, 2015 at 12:54 PM, Jamie Lennox < jamielen...@redhat.com >
> wrote:

> > - Original Message -
> 
> > > From: "王华" < wanghua.hum...@gmail.com >
> 
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > openstack-dev@lists.openstack.org >
> 
> > > Sent: Monday, 7 September, 2015 12:00:43 PM
> 
> > > Subject: [openstack-dev] [keystone]how to get service_catalog
> 
> > >
> 
> > > Hi all,
> 
> > >
> 
> > > When I use a token to init a keystoneclient and try to get
> > > service_catalog
> > > by
> 
> > > it, error occurs. I find that keystone doesn't return service_catalog
> > > when
> 
> > > we use a token. Is there a way to get service_catalog by token? In
> > > magnum,
> 
> > > we now make a trick. We init a keystoneclient with service_catalog which
> > > is
> 
> > > contained in the token_info returned by keystonemiddleware in auth_ref
> 
> > > parameter.
> 
> > >
> 
> > > I want a way to get service_catalog by token. Or can we init a
> > > keystoneclient
> 
> > > by the token_info return by keystonemiddleware directly?
> 
> > >
> 
> > > Regards,
> 
> > > Wanghua
> 

> > Sort of.
> 

> > The problem you are hitting is that a token is just a string, an identifier
> > for some information stored in keystone. Given a token at __init__ time the
> > client doesn't try to validate it in any way; it just assumes you know what
> > you are doing. You can do a variation of this, though, in which you use an
> > existing token to fetch a new token with the same rights (the expiry etc.
> > will be the same) and then you will get a fresh service catalog. Using auth
> > plugins, that's the Token family of plugins.
> 

> > However i don't _think_ that's exactly what you're looking for in magnum.
> > What token are you trying to reuse?
> 

> > If it's the users token then auth_token passes down an auth plugin in the
> > ENV['keystone.token_auth'] variable[1] and you can pass that to a client to
> > reuse the token and service catalog. If you are loading up magnum specific
> > auth then again have a look at using keystoneclient's auth plugins and
> > reusing it across multiple requests.
> 

> > Trying to pass around a bundle of token id and service catalog is pretty
> > much
> > exactly what an auth plugin does and you should be able to do something
> > there.
> 

> > Jamie
> 

> > [1]
> > https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
> 
> > > __
> 
> > > OpenStack Development Mailing List (not for usage questions)
> 
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> > > http://lists.openstack.org/cgi-bin/mailma

Re: [openstack-dev] [keystone]how to get service_catalog

2015-09-06 Thread Jamie Lennox


- Original Message -
> From: "王华" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, 7 September, 2015 12:00:43 PM
> Subject: [openstack-dev] [keystone]how to get service_catalog
> 
> Hi all,
> 
> When I use a token to init a keystoneclient and try to get the
> service_catalog from it, an error occurs. I find that keystone doesn't return
> the service_catalog when we use a token. Is there a way to get the
> service_catalog by token? In magnum, we currently use a trick: we init a
> keystoneclient with the service_catalog contained in the token_info returned
> by keystonemiddleware, via the auth_ref parameter.
> 
> I want a way to get the service_catalog by token. Or can we init a
> keystoneclient from the token_info returned by keystonemiddleware directly?
> 
> Regards,
> Wanghua

Sort of. 

The problem you are hitting is that a token is just a string, an identifier for 
some information stored in keystone. Given a token at __init__ time the client 
doesn't try to validate it in any way; it just assumes you know what you are 
doing. You can do a variation of this, though, in which you use an existing 
token to fetch a new token with the same rights (the expiry etc. will be the 
same) and then you will get a fresh service catalog. Using auth plugins, that's 
the Token family of plugins.

However I don't _think_ that's exactly what you're looking for in magnum. What 
token are you trying to reuse? 

If it's the user's token, then auth_token passes down an auth plugin in the 
ENV['keystone.token_auth'] variable[1] and you can pass that to a client to 
reuse the token and service catalog. If you are loading magnum-specific auth, 
then again have a look at keystoneclient's auth plugins and reuse one across 
multiple requests.

Trying to pass around a bundle of token ID and service catalog is pretty much 
exactly what an auth plugin does, so you should be able to do something there. 


Jamie

[1] 
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple keystone endpoints

2015-09-06 Thread Jamie Lennox
:35357/auth/tokens. Retrying in
> 0.5s.
> 2015-09-06 07:50:52.576 194 INFO keystoneclient.session [-] Failure: Unable
> to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in
> 1.0s.
> 2015-09-06 07:50:55.576 194 INFO keystoneclient.session [-] Failure: Unable
> to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in
> 2.0s.
> 2015-09-06 07:50:58.576 194 WARNING keystonemiddleware.auth_token [-]
> Authorization failed for token
> 
> 
> Best Regards
> Chaoyi Huang ( Joe Huang )
> 
> 
> -Original Message-
> From: Hans Feldt [mailto:hans.fe...@ericsson.com]
> Sent: Tuesday, August 25, 2015 5:06 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> keystone endpoints
> 
> 
> 
> On 2015-08-25 09:37, Jamie Lennox wrote:
> >
> >
> > - Original Message -
> >> From: "Hans Feldt" <hans.fe...@ericsson.com>
> >> To: openstack-dev@lists.openstack.org
> >> Sent: Thursday, August 20, 2015 10:40:28 PM
> >> Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> >>keystone endpoints
> >>
> >> How do you configure/use keystonemiddleware for a specific identity
> >> endpoint among several?
> >>
> >> In an OPNFV multi region prototype I have keystone endpoints per
> >> region. I would like keystonemiddleware (in context of glance-api) to
> >> use the local keystone for performing user token validation. Instead
> >> keystonemiddleware seems to use the first listed keystone endpoint in
> >> the service catalog (which could be wrong/non-optimal in most
> >> regions).
> >>
> >> I found this closed, related bug:
> >> https://bugs.launchpad.net/python-keystoneclient/+bug/1147530
> >
> > Hey,
> >
> > There's two points to this.
> >
> > * If you are using an auth plugin then you're right it will just pick the
> > first endpoint. You can look at project specific endpoints[1] so that
> > there is only one keystone endpoint returned for the services project.
> > I've also just added a review for this feature[2].
> 
> I am not.
> 
> > * If you're not using an auth plugin (so the admin_X options) then keystone
> > will always use the endpoint that is configured in the options
> > (identity_uri).
> 
> Yes for getting its own admin/service token. But for later user token
> validation it seems to pick the first identity service in the stored (?)
> service catalog.
> 
> By patching keystonemiddleware, _create_identity_server and the call to
> Adapter constructor with an endpoint_override parameter I can get it to use
> the local keystone for token validation. I am looking for an official way of
> achieving the same.
> 
> Thanks,
> Hans
> 
> >
> > Hope that helps,
> >
> > Jamie
> >
> >
> > [1]
> > https://github.com/openstack/keystone-specs/blob/master/specs/juno/end
> > point-group-filter.rst [2] https://review.openstack.org/#/c/216579
> >
> >> Thanks,
> >> Hans
> >>
> >> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican] FYI: Defaulting to Keystone v3 API

2015-09-06 Thread Jamie Lennox
- Original Message -

> From: "Steve Martinelli" <steve...@ca.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Sent: Monday, 7 September, 2015 9:27:55 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
> FYI: Defaulting to Keystone v3 API

> So, digging into this a bit more, the failures that Barbican and Ironic saw
> were very different. The barbican team managed to fix their issues by
> replacing their `keystone endpoint-create` and such commands with the
> `openstack endpoint create` alternates.

> Looking into why Ironic failed led me down a rabbit hole I wish I hadn't
> gone down. There are no plugins managed by ironic, like there were in the
> barbican case, so there were no easy commands to replace. Instead it was
> failing on a few swift-related commands, as evidenced in the log that
> Lucas copied:

> http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
> 2015-09-04 09:04:55.527 | + swift post -m 'Temp-URL-Key: secretkey'
> 2015-09-04 09:04:55.994 | Authorization Failure. Authorization Failed: The
> resource could not be found. (HTTP 404)

> Jamie's patch sets everything to v3, so why is it failing now? I tried this in
> my own environment to make sure:
> steve@steve-vm:~/devstack$ export OS_IDENTITY_API_VERSION=3
> steve@steve-vm:~/devstack$ export OS_AUTH_URL='http://10.0.2.15:5000/v3'
> steve@steve-vm:~/devstack$ swift stat
> Authorization Failure. Authorization Failed: The resource could not be found.
> (HTTP 404) (Request-ID: req-63bee2a6-ca9d-49f4-baad-b6a1eef916df)
> steve@steve-vm:~/devstack$ swift --debug stat
> DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
> http://10.0.2.15:5000/v3/tokens

> And saw that swiftclient was creating a v2 client instance with a v3
> endpoint, which is no bueno.

> As I continued to dig into this, it seems like swiftclient doesn't honor the
> OS_IDENTITY_API_VERSION flag that was set, instead it relies on
> --auth-version or OS_AUTH_VERSION
> steve@steve-vm:~/devstack$ export OS_AUTH_VERSION=3
> steve@steve-vm:~/devstack$ swift --debug stat
> DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
> http://10.0.2.15:5000/v3/auth/tokens
> INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection
> (1): 10.0.2.15
>  it continued and was happy

> So, we could easily propose Jamie's patch again, but this time also set
> OS_AUTH_VERSION, or we could fix swiftclient to honor the
> OS_IDENTITY_API_VERSION flag. I'd prefer doing the former first to get
> Jamie's patch back in, and the latter as the long-term plan, but looking at
> the code, there doesn't seem to be a plan to deprecate OS_AUTH_VERSION.

> Thanks,
Thanks for looking into this Steve; as evidenced by how well I phrased my 
response, I shouldn't answer emails late at night. 

So I would like to use this as a renewed call for deprecating ALL the 
project-specific CLIs in favour of the openstack client. The mix and match of 
different parameters accepted by different clients is not just a problem for 
our users; developers who actually interact with this code can't keep them all 
straight. Does anyone know what the path would be to get this actually agreed 
on by all the projects? Cross-project blueprint? TC? 

Does OSC have the required commands so that we can remove use of the swift CLI? 

I can understand having this reverted whilst feature freeze is underway, and I 
didn't do a sufficient job of alerting people to the possibility of a breaking 
change (partially as I wasn't expecting it to merge). However, to get this back 
in, I think the easiest thing to do is just set OS_AUTH_VERSION in devstack 
with a comment about "needed for swift". I think we should consider 
OS_IDENTITY_API_VERSION an OSC flag rather than something we want all the CLIs 
to copy (as API version shouldn't necessarily imply auth version/method). 
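The mismatch Steve hit boils down to the two CLIs reading different
environment variables. Roughly, as a toy model (the defaults shown are
illustrative, not the clients' actual code):

```python
def swift_auth_version(env):
    # swiftclient keyed off OS_AUTH_VERSION (or --auth-version) and
    # ignored OS_IDENTITY_API_VERSION entirely; the fallback here just
    # illustrates "something other than v3".
    return env.get("OS_AUTH_VERSION", "2.0")


def osc_identity_version(env):
    # openstackclient keyed off OS_IDENTITY_API_VERSION instead.
    return env.get("OS_IDENTITY_API_VERSION", "2.0")


env = {"OS_IDENTITY_API_VERSION": "3",
       "OS_AUTH_URL": "http://10.0.2.15:5000/v3"}
# Same environment, different answers: swift still attempts v2-style
# auth against a /v3 URL, which matches the 404 seen in Steve's log.
```

Exporting OS_AUTH_VERSION=3 alongside OS_IDENTITY_API_VERSION=3 makes
both resolution paths agree, which is the short-term fix suggested
above.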

Jamie 

> Steve Martinelli
> OpenStack Keystone Core

> Jamie Lennox ---2015/09/06 04:41:23 AM---Note that this fixing this does not
> mean ironic has to support keystone v3 (but please fix that too)

> From: Jamie Lennox <jamielen...@redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: 2015/09/06 04:41 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
> FYI: Defaulting to Keystone v3 API

> Note that this fixing this does not mean ironic has to support keystone v3
> (but please fix that too). It just means that somewhere in ironic's gate it
> is doing like an "openst

Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican] FYI: Defaulting to Keystone v3 API

2015-09-06 Thread Jamie Lennox
Note that fixing this does not mean ironic has to support keystone v3 (but 
please fix that too). It just means that somewhere in ironic's gate it is doing 
something like an "openstack user create" or a role assignment directly with 
the OSC tool, assuming v2, rather than using the helpers that devstack 
provides, like get_or_create_user. Keystone v2 still exists and is running; we 
just changed the default API for devstack OSC commands.

I'm kind of annoyed we reverted this patch (though I was surprised to see it 
merge recently, as it's been around for a while), as it was known to possibly 
break people, which is why it was on the agenda for the QA meetings. However, 
given that devstack has plugins and there is third-party CI, there is 
absolutely no way we can make sure that everyone has fixed this; we just need 
to make a breaking change. Granted, coinciding with freeze is unfortunate. 
Luckily this doesn't affect most people, because they use the devstack helper 
functions, and for those that don't it's an almost trivial fix to start using 
them.

Jamie

- Original Message -
> From: "Steve Martinelli" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Saturday, September 5, 2015 2:38:27 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican] 
> FYI: Defaulting to Keystone v3 API
> 
> 
> 
> This change affected Barbican too, but they quickly tossed up a patch to
> resolve the gate failures [1]. As much as I would like DevStack and
> OpenStackClient to default to Keystone's v3 API, we should, considering how
> close we are in the schedule, revert the initial patch (which I see sdague
> already did). We need to determine which projects are hosting their own
> devstack plugin scripts and update those first before bringing back the
> original patch.
> 
> https://review.openstack.org/#/c/220396/
> 
> Thanks,
> 
> Steve Martinelli
> OpenStack Keystone Core
> 
> Lucas Alvares Gomes ---2015/09/04 10:07:51 AM---Hi, This is email is just a
> FYI: Recently the patch [1] got merged in
> 
> From: Lucas Alvares Gomes 
> To: OpenStack Development Mailing List 
> Date: 2015/09/04 10:07 AM
> Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit] FYI: Defaulting
> to Keystone v3 API
> 
> 
> 
> 
> Hi,
> 
> This email is just an FYI: recently the patch [1] got merged in
> DevStack and broke the Ironic gate [2]. I haven't had time to dig into
> the problem yet, so I reverted the patch [3] to unblock our gate.
> 
> The work to convert to v3 seems close, but it is not quite there yet,
> so I just want to bring broader attention to it with this email.
> 
> Also, the Ironic job currently running in the DevStack gate is not
> testing Ironic with the Swift module; there's a patch [4] changing
> that, so I hope we will be able to identify the problem before we
> break things next time.
> 
> [1] https://review.openstack.org/#/c/186684/
> [2]
> http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
> [3] https://review.openstack.org/220532
> [4] https://review.openstack.org/#/c/220516/
> 
> Cheers,
> Lucas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple keystone endpoints

2015-08-25 Thread Jamie Lennox


- Original Message -
> From: "Hans Feldt" <hans.fe...@ericsson.com>
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, August 20, 2015 10:40:28 PM
> Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> keystone endpoints
> 
> How do you configure/use keystonemiddleware for a specific identity endpoint
> among several?
> 
> In an OPNFV multi region prototype I have keystone endpoints per region. I
> would like keystonemiddleware (in the context of glance-api) to use the local
> keystone for performing user token validation. Instead keystonemiddleware
> seems to use the first listed keystone endpoint in the service catalog (which
> could be wrong/non-optimal in most regions).
> 
> I found this closed, related bug:
> https://bugs.launchpad.net/python-keystoneclient/+bug/1147530

Hey, 

There's two points to this. 

* If you are using an auth plugin then you're right it will just pick the first 
endpoint. You can look at project specific endpoints[1] so that there is only 
one keystone endpoint returned for the services project. I've also just added a 
review for this feature[2].
* If you're not using an auth plugin (so the admin_X options) then keystone 
will always use the endpoint that is configured in the options (identity_uri).

Hope that helps,

Jamie


[1] 
https://github.com/openstack/keystone-specs/blob/master/specs/juno/endpoint-group-filter.rst
[2] https://review.openstack.org/#/c/216579
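The catalog-selection behavior described above, and the
endpoint_override escape hatch Hans found by patching
keystonemiddleware, can be modeled roughly as follows (a simplified
sketch with illustrative names, not keystonemiddleware's actual code):

```python
def identity_endpoint(catalog, region=None, override=None):
    # Pick the identity endpoint a client would use for token
    # validation. Without an override (or region filtering), an auth
    # plugin simply takes the first matching catalog entry, which in a
    # multi-region cloud may not be the local keystone.
    if override:
        return override
    matches = [e for e in catalog
               if e["type"] == "identity"
               and (region is None or e["region"] == region)]
    return matches[0]["url"] if matches else None


catalog = [
    {"type": "identity", "region": "r1", "url": "http://ks-r1:35357"},
    {"type": "identity", "region": "r2", "url": "http://ks-r2:35357"},
]
```

Project-scoped endpoint filtering (ref [1] above) attacks the problem by
shrinking the catalog itself, whereas an override bypasses the catalog
entirely for one client.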

> Thanks,
> Hans
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][devstack][keystone] Updating devstack to Identity V3

2015-08-14 Thread Jamie Lennox
Hi all, 

I've been pushing for a while now to convert devstack to completely use
the identity v3 API as we try to deprecate the v2.0 API. Currently all
the functions in functions-common consume the v3 API by setting
--os-identity-api-version 3 on each command to override the v2 default.
https://review.openstack.org/#/c/186684/ changes this by exporting
OS_IDENTITY_API_VERSION=3 at the beginning, meaning that all keystone
commands will now default to the v3 API. 

As we can see from the tests passing this works for the standard gate
job. However as this involves a command and argument change introducing
this has the potential to break anyone doing a custom CI or possibly
the devstack plugins if they do keystone operations directly. 

I'm interested to hear from the community how we should advertise this
change and what will be required to merge it.

Thanks, 

Jamie 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 13 August, 2015 3:06:46 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 On 11/08/2015 01:46, Jamie Lennox wrote:
  
  
  - Original Message -
  From: Jamie Lennox jamielen...@redhat.com To: OpenStack
  Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org Sent: Tuesday, 11 August, 2015
  10:09:33 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
  Federated Login
  
  
  
  - Original Message -
  From: David Chadwick d.w.chadw...@kent.ac.uk To:
  openstack-dev@lists.openstack.org Sent: Tuesday, 11 August, 2015
  12:50:21 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
  Federated Login
  
  
  
  On 10/08/2015 01:53, Jamie Lennox wrote:
  
  
  - Original Message -
  From: David Chadwick d.w.chadw...@kent.ac.uk To:
  openstack-dev@lists.openstack.org Sent: Sunday, August 9,
  2015 12:29:49 AM Subject: Re: [openstack-dev] [Keystone]
  [Horizon] Federated Login
  
  Hi Jamie
  
  nice presentation, thanks for sharing it. I have forwarded it
  to my students working on federation aspects of Horizon.
  
  About public federated cloud access, the way you envisage it,
  i.e. that every user will have his own tailored (subdomain)
  URL to the SP is not how it works in the real world today.
  SPs typically provide one URL, which everyone from every IdP
  uses, so that no matter which browser you are using, from
  wherever you are in the world, you can access the SP (via
  your IdP). The only thing the user needs to know, is the name
  of his IdP, in order to correctly choose it.
  
  So discovery of all available IdPs is needed. You are correct
  in saying that Shib supports a separate discovery service
  (WAYF), but Horizon can also play this role, by listing the
  IdPs for the user. This is the mod that my student is making
  to Horizon, by adding type ahead searching.
  
  So my point at the moment is that unless there's something i'm
  missing in the way shib/mellon discovery works is that horizon
  can't. Because we forward to a common websso entry point there
  is no way (i know) for the users selection in horizon to be
  forwarded to keystone. You would still need a custom select
  your idp discovery page in front of keystone. I'm not sure if
  this addition is part of your students work, it just hasn't
  been mentioned yet.
  
  About your proposed discovery mod, surely this seems to be
  going in the wrong direction. A common entry point to
  Keystone for all IdPs, as we have now with WebSSO, seems to
  be preferable to separate entry points per IdP. Which high
  street shop has separate doors for each user? Or have I
  misunderstood the purpose of your mod?
  
  The purpose of the mod is purely to bypass the need to have a
  shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2.
  This page is currently required to allow a user to select their
  idp (presumably from the ones supported by keystone) and
  redirect to that IDPs specific login page.
  
  There are two functionalities that are required: a) Horizon
  finding the redirection login URL of the IdP chosen by the user
  b) Keystone finding which IdP was used for login.
  
  The second is already done by Apache telling Keystone in the
  header field.
  
  The first is part of the metadata of the IdP, and Keystone should
  make this available to Horizon via an API call. Ideally when
  Horizon calls Keystone for the list of trusted IdPs, then the
  user friendly name of the IdP (to be displayed to the user) and
  the login page URL should be returned. Then Horizon can present
  the user friendly list to the user, get the login URL that
  matches this, then redirect the user to the IdP telling the IdP
  the common callback URL of Keystone.
  
  So my understanding was that this wasn't possible. Because we want
  to have keystone be the registered service provider and receive the
  returned SAML assertions, the login redirect must be issued from
  keystone and not horizon. Is it possible to issue a login request
  from horizon that returns the response to keystone? This seems
  dodgy to me but may be possible if all the trust relationships are
  set up.
  
  Note also that currently this metadata, including the login URL, is
  not known by keystone. It's controlled by apache in the metadata xml
  files, so we would have to add this information to keystone.
  Obviously this is doable, just extra config setup that would require
  double handling of this URL.
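
A minimal sketch of the listing David describes, i.e. Horizon building a user-friendly IdP list from keystone's GET /v3/OS-FEDERATION/identity_providers. Note the "login_url" field is his proposal, not an existing attribute of the API, so it is shown as a hypothetical key on a sample response:

```python
# Hedged sketch: turn a v3 OS-FEDERATION identity_providers listing into
# (display name, login URL) choices for a Horizon login form. The
# "login_url" key is hypothetical -- it is the extra IdP metadata David
# proposes keystone expose; today that URL lives in apache's metadata xml.
SAMPLE_RESPONSE = {
    "identity_providers": [
        {"id": "kent", "enabled": True,
         "login_url": "https://idp.kent.ac.uk/sso"},   # hypothetical field
        {"id": "old-idp", "enabled": False,
         "login_url": "https://example.org/sso"},      # hypothetical field
    ]
}

def idp_choices(body):
    """Return (display name, login URL) pairs for enabled IdPs."""
    return [(idp["id"], idp["login_url"])
            for idp in body["identity_providers"] if idp["enabled"]]
```

Horizon would render these pairs (with type-ahead searching, per David's student's mod) and redirect the user to the chosen URL.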
 
 My idea is to use Horizon as the WAYF/Discovery service, approximately
 as follows
 
 1. The user goes to Horizon to login locally or to discover which
 federated IdP to use
 2. Horizon dynamically populates the list of IDPs by querying Keystone
 3. The user chooses the IdP and Horizon redirects the user to
 Apache/Keystone, telling it the IdP to use
 4. Apache creates

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 13 August, 2015 7:46:54 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Hi Jamie
 
 I have been thinking some more about your Coke and Pepsi use case
 example, and I think it is a somewhat spurious example, for the
 following reasons:
 
 1. If Coke and Pepsi are members of the SAME federation, then they trust
 each other (by definition). Therefore they would not and could not
 object to being listed as alternative IdPs in this federation.
 
 2. If Coke and Pepsi are in different federations because they don't
 trust each other, but they have the same service provider, then their
 service provider would be a member of both federations. In this case,
 the SP would provide different access points to the different
 federations, and neither Coke nor Pepsi would be aware of each other.
 
 regards
 
 David

So yes, my point here relates to number 2 and to providing multitenancy in a
way that you can't see who else is available. In talking with some of the
keystone people this is essentially what we've come to (and I think I
mentioned earlier?): you would need to provide a different access point to
different companies to keep this information private. It has the side
advantage for the public cloud folks of providing whitelabelling for horizon.

The question then, once you have multiple access points per customer (not
user), is how to list the IdPs that are associated with that customer. The
example I had earlier was tagging: you could tag a horizon instance (it
probably doesn't need to be a whole instance, just a login page) with a value
like COKE, and when you list IdPs from keystone you say "list with tag=COKE"
to find out what should show in horizon. This would allow common idps like
google to be reused.
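
To make the tagging idea concrete, here is a purely hypothetical sketch of the horizon-side request construction. No such "tag" filter exists in the OS-FEDERATION API; this only illustrates the shape of the call being discussed:

```python
# Hypothetical sketch of the tag-filtered IdP listing floated above.
# The "tag" query parameter is NOT part of the real OS-FEDERATION API;
# it is the tagging system being debated in this thread.
from urllib.parse import urlencode

def list_idps_url(keystone_endpoint, tag=None):
    # e.g. keystone_endpoint = "https://keystone.example.com/v3"
    url = keystone_endpoint.rstrip("/") + "/OS-FEDERATION/identity_providers"
    if tag:
        url += "?" + urlencode({"tag": tag})
    return url
```

A COKE-branded horizon would then only ever ask for `tag=COKE` and never see PEPSI.COM's providers.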

This is why I was saying that public/private may not be fine grained enough.
It may also not be a realistic concern. If we are talking about a portal per
customer, does the cost of rebooting horizon to statically add a new idp to
the local_config matter? This is presumably a rare operation.

I think the answer has been for a while that idp listing is going to need to
be configurable from horizon, because we already have cases for "list
nothing", "list everything", and "use this static list"; so if in future we
find we need to add something more complex like tagging, it's another option
we can consider then.
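
For reference, the "use this static list" option corresponds to the WEBSSO_* settings that landed in Kilo-era horizon. A minimal local_settings.py fragment might look like the following (exact keys may differ by release):

```python
# Sketch of horizon's static websso configuration (Kilo-era settings).
# The tuple statically enumerates the login choices shown on the login
# page; changing it requires editing local_settings and restarting
# horizon, which is the cost discussed above.
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "credentials"
WEBSSO_CHOICES = (
    ("credentials", "Keystone Credentials"),
    ("saml2", "Security Assertion Markup Language"),
)
```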

 
 On 06/08/2015 00:54, Jamie Lennox wrote:
  
  
  - Original Message -
  From: David Lyle dkly...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Thursday, August 6, 2015 5:52:40 AM
  Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
  Forcing Horizon to duplicate Keystone settings just makes everything much
  harder to configure and much more fragile. Exposing whitelisted, or all,
  IdPs makes much more sense.
 
  On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews  dolph.math...@gmail.com 
  wrote:
 
 
 
  On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli  steve...@ca.ibm.com 
  wrote:
 
 
 
 
 
  Some folks said that they'd prefer not to list all associated idps, which
  i
  can understand.
  Why?
  
  So the case I heard, and I think is fairly reasonable, is providing
  corporate logins to a public cloud. Taking the canonical coke/pepsi
  example: if I'm Coke, I get asked to log in to this public cloud, I
  then have to scroll through all the providers to find the COKE.COM
  domain, and I can see for example that PEPSI.COM is also providing
  logins to this cloud. Ignoring the corporate privacy implications,
  this list has the potential to get long. Think about, for example,
  how you can do a corporate login to gmail: you certainly don't pick
  from a list of auth providers for gmail - there would be thousands.
  
  My understanding of the usage then would be that coke would have been
  provided a (possibly branded) dedicated horizon that backed onto a
  public cloud, and that I could then from horizon say that it's only
  allowed access to the COKE.COM domain (because the UX for inputting a
  domain at login is not great, so per-customer dashboards I think make
  sense), and that for this instance of horizon I want to show the 3 or
  4 login providers that COKE.COM is going to allow.
  
  Any way you want to list or whitelist that in keystone is going to
  involve some form of IdP tagging system where we have to say which
  set of idps we want in this case, and I don't think we should.
  
  @David - when you add a new IdP to the university network are you having to
  provide a new mapping each time? I know the CERN answer to this with
  websso was to essentially group many IdPs behind the same keystone idp
  because they will all produce the same assertion values and consume the
  same mapping.
  
  Maybe the answer here is to provide the option

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-10 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, 11 August, 2015 12:50:21 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 On 10/08/2015 01:53, Jamie Lennox wrote:
  
  
  - Original Message -
  From: David Chadwick d.w.chadw...@kent.ac.uk To:
  openstack-dev@lists.openstack.org Sent: Sunday, August 9, 2015
  12:29:49 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
  Federated Login
  
  Hi Jamie
  
  nice presentation, thanks for sharing it. I have forwarded it to
  my students working on federation aspects of Horizon.
  
  About public federated cloud access, the way you envisage it, i.e.
  that every user will have his own tailored (subdomain) URL to the
  SP is not how it works in the real world today. SPs typically
  provide one URL, which everyone from every IdP uses, so that no
  matter which browser you are using, from wherever you are in the
  world, you can access the SP (via your IdP). The only thing the
  user needs to know, is the name of his IdP, in order to correctly
  choose it.
  
  So discovery of all available IdPs is needed. You are correct in
  saying that Shib supports a separate discovery service (WAYF), but
  Horizon can also play this role, by listing the IdPs for the user.
  This is the mod that my student is making to Horizon, by adding
  type ahead searching.
  
  So my point at the moment is that, unless there's something I'm
  missing in the way shib/mellon discovery works, Horizon can't
  play this role. Because we forward to a common websso entry point
  there is no way (that I know of) for the user's selection in
  horizon to be forwarded to keystone. You would still need a custom
  "select your idp" discovery page in front of keystone. I'm not
  sure if this addition is part of your student's work, it just
  hasn't been mentioned yet.
  
  About your proposed discovery mod, surely this seems to be going in
  the wrong direction. A common entry point to Keystone for all IdPs,
  as we have now with WebSSO, seems to be preferable to separate
  entry points per IdP. Which high street shop has separate doors for
  each user? Or have I misunderstood the purpose of your mod?
  
  The purpose of the mod is purely to bypass the need to have a
  shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2. This
  page is currently required to allow a user to select their idp
  (presumably from the ones supported by keystone) and redirect to that
  IDPs specific login page.
 
 There are two functionalities that are required:
 a) Horizon finding the redirection login URL of the IdP chosen by the user
 b) Keystone finding which IdP was used for login.
 
 The second is already done by Apache telling Keystone in the header field.
 
 The first is part of the metadata of the IdP, and Keystone should make
 this available to Horizon via an API call. Ideally when Horizon calls
 Keystone for the list of trusted IdPs, then the user friendly name of
 the IdP (to be displayed to the user) and the login page URL should be
 returned. Then Horizon can present the user friendly list to the user,
 get the login URL that matches this, then redirect the user to the IdP
 telling the IdP the common callback URL of Keystone.

So my understanding was that this wasn't possible. Because we want to have 
keystone be the registered service provider and receive the returned SAML 
assertions, the login redirect must be issued from keystone and not horizon. 
Is it possible to issue a login request from horizon that returns the 
response to keystone? This seems dodgy to me but may be possible if all the 
trust relationships are set up.

In a way this is exactly what my proposal was: a way for horizon to determine 
a unique websso login page for each idp; my understanding is just that this 
request must be bounced through keystone.

  When the response comes back from that
  login it returns to that websso page and we look at remote_ids to
  determine which keystone idp is handling the response from that
  site.
 
 This seems the more secure way of determining the IdP to me, since
 Apache determines the IdP then tells Keystone via the header field. If
 you rely on the IdP to contact the right endpoint, then doesn't this
 allow the IdP to return to a different URL and thereby trick Keystone
 into thinking it was a different IdP?

To me the difference is that if we all return to a common 
/OS-FEDERATION/websso/saml2 endpoint then apache has to let all SAML 
responses through for all registered idps, and then keystone must determine 
which is the real idp. By returning to an IdP-specific websso route, apache 
can assert that this response may only have come from the provider configured 
for that idp. This is really a secondary concern. I don't see there being 
much security difference, just that in this way you offload some additional 
responsibility for validating a SAML assertion to apache and we remove some (not all

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-10 Thread Jamie Lennox


- Original Message -
 From: Jamie Lennox jamielen...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 11 August, 2015 10:09:33 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 - Original Message -
  From: David Chadwick d.w.chadw...@kent.ac.uk
  To: openstack-dev@lists.openstack.org
  Sent: Tuesday, 11 August, 2015 12:50:21 AM
  Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
  
  
  
  On 10/08/2015 01:53, Jamie Lennox wrote:
   
   
   - Original Message -
   From: David Chadwick d.w.chadw...@kent.ac.uk To:
   openstack-dev@lists.openstack.org Sent: Sunday, August 9, 2015
   12:29:49 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
   Federated Login
   
   Hi Jamie
   
   nice presentation, thanks for sharing it. I have forwarded it to
   my students working on federation aspects of Horizon.
   
   About public federated cloud access, the way you envisage it, i.e.
   that every user will have his own tailored (subdomain) URL to the
   SP is not how it works in the real world today. SPs typically
   provide one URL, which everyone from every IdP uses, so that no
   matter which browser you are using, from wherever you are in the
   world, you can access the SP (via your IdP). The only thing the
   user needs to know, is the name of his IdP, in order to correctly
   choose it.
   
   So discovery of all available IdPs is needed. You are correct in
   saying that Shib supports a separate discovery service (WAYF), but
   Horizon can also play this role, by listing the IdPs for the user.
   This is the mod that my student is making to Horizon, by adding
   type ahead searching.
   
   So my point at the moment is that, unless there's something I'm
   missing in the way shib/mellon discovery works, Horizon can't
   play this role. Because we forward to a common websso entry point
   there is no way (that I know of) for the user's selection in
   horizon to be forwarded to keystone. You would still need a custom
   "select your idp" discovery page in front of keystone. I'm not
   sure if this addition is part of your student's work, it just
   hasn't been mentioned yet.
   
   About your proposed discovery mod, surely this seems to be going in
   the wrong direction. A common entry point to Keystone for all IdPs,
   as we have now with WebSSO, seems to be preferable to separate
   entry points per IdP. Which high street shop has separate doors for
   each user? Or have I misunderstood the purpose of your mod?
   
   The purpose of the mod is purely to bypass the need to have a
   shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2. This
   page is currently required to allow a user to select their idp
   (presumably from the ones supported by keystone) and redirect to that
   IDPs specific login page.
  
  There are two functionalities that are required:
  a) Horizon finding the redirection login URL of the IdP chosen by the user
  b) Keystone finding which IdP was used for login.
  
  The second is already done by Apache telling Keystone in the header field.
  
  The first is part of the metadata of the IdP, and Keystone should make
  this available to Horizon via an API call. Ideally when Horizon calls
  Keystone for the list of trusted IdPs, then the user friendly name of
  the IdP (to be displayed to the user) and the login page URL should be
  returned. Then Horizon can present the user friendly list to the user,
  get the login URL that matches this, then redirect the user to the IdP
  telling the IdP the common callback URL of Keystone.
 
 So my understanding was that this wasn't possible. Because we want to have
 keystone be the registered service provider and receive the returned SAML
 assertions the login redirect must be issued from keystone and not horizon.
 Is it possible to issue a login request from horizon that returns the
 response to keystone? This seems dodgy to me but may be possible if all the
 trust relationships are set up.

Note also that currently this metadata, including the login URL, is not known 
by keystone. It's controlled by apache in the metadata xml files, so we would 
have to add this information to keystone. Obviously this is doable, just 
extra config setup that would require double handling of this URL.

 In a way this is exactly what my proposal was. A way for horizon to determine
 a unique websso login page for each idp, just my understanding is that this
 request must be bounced through keystone.
 
   When the response comes back from that
   login it returns to that websso page and we look at remote_ids to
   determine which keystone idp is handling the response from that
   site.
  
  This seems the more secure way of determining the IdP to me, since
  Apache determines the IdP then tells Keystone via the header field. If
  you rely on the IdP to contact the right endpoint, then doesn't this
  allow the IdP to return to a different

Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-09 Thread Jamie Lennox


- Original Message -
 From: Mehdi Abaakouk sil...@sileht.net
 To: openstack-dev@lists.openstack.org
 Sent: Friday, August 7, 2015 1:57:54 AM
 Subject: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares
 
 Hi,
 
 I want to share with you some problems I have recently encountered with
 openstack middlewares and oslo.config.
 
 The issues
 --
 
 In the Gnocchi project, I wanted to use oslo.middleware.cors; I expected
 to just put the name of the middleware in the wsgi pipeline, but I can't.
 The middleware only works if you pass the oslo_config.cfg.ConfigOpts()
 object or go via 'paste-deploy'... Gnocchi doesn't use paste-deploy, so
 I have to modify the code to load it.
 (For keystonemiddleware, Gnocchi already has a special
 handling/hack to load it [1] and [2].)
 I don't want to write the same hack for every openstack middleware.
 
 
 In the Aodh project (ceilometer-alarm), we recently hit an issue with
 keystonemiddleware when we removed the usage of the global object
 oslo_config.cfg.CONF: the middleware doesn't load its options from the
 config file of aodh anymore, so our authentication is broken.
 We can still pass them through the paste-deploy configuration, but this
 looks like a method of the past. I still don't want to write a hack for
 each openstack middleware.
 
 
 Then I dug into other middlewares and applications to see how they
 handle their conf.
 
 oslo_middleware.sizelimit and oslo_middleware.ssl take options only
 via the global oslo_config.cfg.CONF, so they are unusable for
 applications that don't use this global object.
 
 oslo_middleware.healthcheck takes options as a dict, like any other
 python middleware. This is suitable for 'paste-deploy', but it doesn't
 allow configuration via oslo.config and doesn't have strong config
 option type checking and so on.
 
 Zaqar seems to have the same kind of issue with keystonemiddleware, and
 just wrote a hack to work around it (monkeypatching the cfg.CONF of
 keystonemiddleware with their local version of the object [3] and then
 transforming the loaded options into a dict to pass them via the legacy
 middleware dict options [4]).
 
 Most applications just still use the global object for the
 configuration and don't, yet, see these issues.
 
 
 All of that is really not consistent.
 
 It is confusing for developers that some middlewares need pre-setup and
 force them to rely on a global python object, while others don't.
 It is confusing for deployers that they can't configure middlewares in
 the same way for each middleware and each project.
 
 But keystonemiddleware, oslo.middleware.cors, ... are supposed to be wsgi
 middlewares, something that is independent of the app.
 And this is not really the case.
 
 From my point of view, and from what wsgi generally looks like in python,
 the middleware object should be just MyMiddleware(app, options_as_dict);
 if the middleware wants to rely on another configuration system it should
 do the setup/initialisation itself.
 
 
 
 So, how to solve that ?
 
 
 Do you agree that:
 
 * all openstack middlewares should load their options with oslo.config?
   This permits type checking and all the other features it provides;
   configuration in the paste-deploy conf is a thing of the past.
 
 * we must support local AND global oslo.config objects?
   This is an application choice, not something enforced by the middleware.
   The deployer experience should be the same in both cases.
 
 * the middleware must be responsible for its section name in oslo.config?
   The Gnocchi/Zaqar hacks have to hardcode the section name in their code,
   which doesn't look good.
 
 * we must support the legacy python signature for WSGI objects,
   MyMiddleware(app, options_as_dict), to be able to use paste for
   applications/deployers that want it and not break already deployed
   things?
 
 
 I really think all our middlewares should be consistent:
 
 * to be usable by all applications without forcing them to write crap
   around them,
 * and to make the deployer's life easier.
 
 
 Possible solution:
 --
 
 I have already started to work on something that does all of that for all
 middlewares [5], [6].
 
 The idea is that the middleware should create an oslo_config.cfg.ConfigOpts()
 (instead of relying on the global one) and load the application's
 configuration file into it. oslo.config will discover the file location
 from just the name of the application, as usual.
 
 So the middleware can now be loaded like this:
 
 code example:
 
app = MyMiddleware(app, {oslo_config_project: aodh})
 
 paste-deploy example:
 
[filter:foobar]
paste.filter_factory = foobar:MyMiddleware.filter_factory
oslo_config_project = aodh
 
 oslo_config.cfg.ConfigOpts() will easily find /etc/aodh/aodh.conf.
 This cuts the hidden link between the middleware and the application
 (through the global object).
 
 And of course, if oslo_config_project is not provided, the middleware
 falls back to the global 
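
The pattern Mehdi describes can be sketched roughly as follows. This is a simplified stand-in, not his actual patch: the middleware name and the ConfigOpts construction (stubbed out in a comment so the sketch stays dependency-free) are illustrative only.

```python
# Rough sketch of the proposed pattern: the middleware accepts an
# "oslo_config_project" option and would build its own private
# oslo_config.cfg.ConfigOpts() for that project instead of touching
# the global CONF. Names here are hypothetical.

class ExampleMiddleware(object):
    def __init__(self, app, conf=None):
        self.app = app
        conf = conf or {}
        self.oslo_config_project = conf.get("oslo_config_project")
        # Real implementation (sketch):
        #   self.oslo_conf = oslo_config.cfg.ConfigOpts()
        #   self.oslo_conf([], project=self.oslo_config_project)
        # which lets oslo.config discover e.g. /etc/aodh/aodh.conf.

    @classmethod
    def filter_factory(cls, global_conf, **local_conf):
        # paste-deploy entry point: local_conf carries the options from
        # the [filter:foobar] section, including oslo_config_project.
        def _filter(app):
            return cls(app, local_conf)
        return _filter

# Works identically whether called directly or via paste-deploy:
mw = ExampleMiddleware.filter_factory({}, oslo_config_project="aodh")(None)
```

Either entry point ends up in the same `__init__`, which is what makes the deployer experience uniform.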

Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-09 Thread Jamie Lennox


- Original Message -
 From: Jamie Lennox jamielen...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, August 10, 2015 12:36:14 PM
 Subject: Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares
 
 
 
 - Original Message -
  From: Mehdi Abaakouk sil...@sileht.net
  To: openstack-dev@lists.openstack.org
  Sent: Friday, August 7, 2015 1:57:54 AM
  Subject: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares
  
  Hi,
  
  I want to share with you some problems I have recently encountered with
  openstack middlewares and oslo.config.
  
  The issues
  --
  
  In project Gnocchi, I would use oslo.middleware.cors, I have expected to
  just put the name of the middleware to the wsgi pipeline, but I can't.
  The middlewares only works if you pass the oslo_config.cfg.ConfigOpts()
  object or via 'paste-deploy'... Gnocchi doesn't use paste-deploy, so
  I have to modify the code to load it...
  (For the keystonemiddleware, Gnocchi already have a special
  handling/hack to load it [1] and [2]).
  I don't want to write the same hack for each openstack middlewares.
  
  
  In project Aodh (ceilometer-alarm), we recently got an issue with
  keystonemiddleware since we remove the usage of the global object
  oslo_config.cfg.CONF. The middleware doesn't load its options from the
  config file of aodh anymore. Our authentication is broken.
  We can still pass them through paste-deploy configuration but this looks
  a method of the past. I still don't want to write a hack for each
  openstack middlewares.
  
  
  Then I have digged into other middlewares and applications to see how
  they handle their conf.
  
  oslo_middlewarre.sizelimit and oslo_middlewarre.ssl take options only
  via the global oslo_config.cfg.CONF. So they are unusable for application
  that doesn't use this global object.
  
  oslo_middleware.healthcheck take options as dict like any other python
  middleware. This is suitable for 'paste-deploy'. But doesn't allow
  configuration via oslo.config, doesn't have a strong config options
  type checking and co.
  
  Zaqar seems got same kind of issue about keystonemiddleware, and just
  write a hack to workaround the issue (monkeypatch the cfg.CONF of
  keystonemiddleware with their local version of the object [3] and then
  transform the loaded options into a dict to pass them via the legacy
  middleware dict options [4]) .
  
  Most applications, just still use the global object for the
  configuration and don't, yet, see those issues.
  
  
  All of that is really not consistent.
  
  This is confusing for developer to have some middlewares that need
  pre-setup,
  enforce them to rely on global python object, and some others not.
  This is confusing for deployer their can't do the configuration of
  middlewares in the same way for each middlewares and each projects.
  
  But keystonemiddleware, oslo.middleware.cors,... are supposed to be wsgi
  middlewares, something that is independant of the app.
  And this is not really the case.
  
  From my point of view and what wsgi looks like generally in python, the
  middleware object should be just MyMiddleware(app, options_as_dict),
  if the middleware want to rely to another configuration system it should
  do the setup/initialisation itself.
  
  
  
  So, how to solve that ?
  
  
  Do you agree:
  
  * all openstack middlewares should load their options with oslo.config ?
this permits type checking and all other features it provides, it's cool
:)
configuration in paste-deploy conf is thing of past
  
  * we must support local AND global oslo.config object ?
This is an application choice not something enforced by middleware.
The deployer experience should be the same in both case.
  
  * the middleware must be responsible of the section name in the oslo.config
  ?
Gnocchi/Zaqar hack have to hardcode the section name in their code,
this doesn't looks good.
  
  * we must support legacy python signature for WSGI object,
MyMiddleware(app, options_as_dict) ? To be able to use paste for
application/deployer that want it and not break already deployed things.
  
  
  I really think all our middlewares should be consistent:
  
  * to be usable by all applications without enforcing them to write crap
  around them.
  * and to made the deployer life easier.
  
  
  Possible solution:
  --
  
  I have already started to work on something that do all of that for all
  middlewares [5], [6]
  
  The idea is, the middleware should create a oslo_config.cfg.ConfigOpts()
  (instead of rely on the global one) and load the configuration file of the
  application in. oslo.config will discover the file location just with the
  name of application as usual.
  
  So the middleware can now be loaded like this:
  
  code example:
  
 app = MyMiddleware(app

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-09 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Sunday, August 9, 2015 12:29:49 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Hi Jamie
 
 nice presentation, thanks for sharing it. I have forwarded it to my
 students working on federation aspects of Horizon.
 
 About public federated cloud access, the way you envisage it, i.e. that
 every user will have his own tailored (subdomain) URL to the SP is not
 how it works in the real world today. SPs typically provide one URL,
 which everyone from every IdP uses, so that no matter which browser you
 are using, from wherever you are in the world, you can access the SP
 (via your IdP). The only thing the user needs to know, is the name of
 his IdP, in order to correctly choose it.
 
 So discovery of all available IdPs is needed. You are correct in saying
 that Shib supports a separate discovery service (WAYF), but Horizon can
 also play this role, by listing the IdPs for the user. This is the mod
 that my student is making to Horizon, by adding type ahead searching.

So my point at the moment is that, unless there's something I'm missing in 
the way shib/mellon discovery works, Horizon can't play this role. Because we 
forward to a common websso entry point there is no way (that I know of) for 
the user's selection in horizon to be forwarded to keystone. You would still 
need a custom "select your idp" discovery page in front of keystone. I'm not 
sure if this addition is part of your student's work, it just hasn't been 
mentioned yet.

 About your proposed discovery mod, surely this seems to be going in the
 wrong direction. A common entry point to Keystone for all IdPs, as we
 have now with WebSSO, seems to be preferable to separate entry points
 per IdP. Which high street shop has separate doors for each user? Or
 have I misunderstood the purpose of your mod?

The purpose of the mod is purely to bypass the need to have a shib/mellon 
discovery page on /v3/OS-FEDERATION/websso/saml2. This page is currently 
required to allow a user to select their idp (presumably from the ones 
supported by keystone) and redirect to that IdP's specific login page. When 
the response comes back from that login it returns to that websso page and we 
look at remote_ids to determine which keystone idp is handling the response 
from that site.

If we were to move that to 
/v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/saml2/websso then we 
can more easily support selection from horizon, or otherwise do discovery 
without relying on shib/mellon's discovery mechanism. A selection from 
horizon would forward us to the idp-specific websso on keystone, which would 
forward to the idp's login page (without needing discovery because we already 
know the idp), and the response from login would go to the idp-specific page, 
negating the need for dealing with remote_ids.

So I'm not looking for a separate door so much as a way to indicate that the 
user picked an IdP in horizon and I don't want to do discovery again.
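
The redirect horizon could issue once such a per-IdP route exists can be sketched as follows. The path is the one proposed in this thread; the "origin" parameter mirrors how the common websso route passes the dashboard callback, and is an assumption here rather than merged behaviour:

```python
# Sketch: build the per-IdP websso URL horizon would redirect to after
# the user picks an IdP, skipping shib/mellon discovery. Path and
# "origin" handling are assumptions based on this thread's proposal.
from urllib.parse import quote

def idp_websso_url(keystone_base, idp_id, protocol, origin):
    return ("%s/OS-FEDERATION/identity_providers/%s/protocols/%s/websso"
            "?origin=%s" % (keystone_base.rstrip("/"), idp_id, protocol,
                            quote(origin, safe="")))
```

Because the IdP is encoded in the path, keystone/apache already knows which provider the response must come from, which is the remote_ids simplification described above.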
 
 regards
 
 David
 
 On 07/08/2015 01:29, Jamie Lennox wrote:
  
  
  
  
  *From: *Dolph Mathews dolph.math...@gmail.com
  *To: *OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  *Sent: *Friday, August 7, 2015 9:09:25 AM
  *Subject: *Re: [openstack-dev] [Keystone] [Horizon] Federated Login
  
  
  On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad lbrags...@gmail.com
  mailto:lbrags...@gmail.com wrote:
  
  
  
  On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews
  dolph.math...@gmail.com mailto:dolph.math...@gmail.com wrote:
  
  
  On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox
  jamielen...@redhat.com mailto:jamielen...@redhat.com wrote:
  
  
  
  - Original Message -
   From: David Lyle dkly...@gmail.com
   mailto:dkly...@gmail.com
   To: OpenStack Development Mailing List (not for usage
   questions) openstack-dev@lists.openstack.org
  mailto:openstack-dev@lists.openstack.org
   Sent: Thursday, August 6, 2015 5:52:40 AM
   Subject: Re: [openstack-dev] [Keystone] [Horizon]
   Federated Login
  
   Forcing Horizon to duplicate Keystone settings just makes
   everything much
   harder to configure and much more fragile. Exposing
   whitelisted, or all,
   IdPs makes much more sense.
  
   On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews 
   dolph.math...@gmail.com mailto:dolph.math...@gmail.com
   
   wrote:
  
  
  
   On Wed, Aug 5

Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients

2015-08-06 Thread Jamie Lennox


- Original Message -
 From: michael mccune m...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, August 7, 2015 1:21:53 AM
 Subject: Re: [openstack-dev] [keystone] policy issues when generating trusts 
 with different clients
 
 On 08/05/2015 10:27 PM, Jamie Lennox wrote:
  Hey Mike,
 
  I think it could be one of the hacks that are in place to try and keep
  compatibility with the old and new way of using the client is returning
  the wrong thing. Compare the output of trustor.user_id and
  trustor_auth.get_user_id(sess). For me trustor.user_id is None which will
  make sense why you'd get permission errors.
 
  Whether this is a bug in keystoneclient is debatable because we had to keep
  compatibility with the old options just not update them for the new paths,
  the ambiguity is certainly bad.
 
  The command that works for me is:
 
  trustor.trusts.create(
   trustor_user=trustor_auth.get_user_id(sess),
   trustee_user=trustee_auth.get_user_id(sess),
   project=trustor_auth.get_project_id(sess),
   role_names=['Member'],
   impersonation=True,
   expires_at=None)
 
  We're working on a keystoneclient 2.0 that will remove all that old code.
 
 
  Let me know if that fixes it for you.
 
 hi Jamie,
 
 this does work for me. but now i have a few questions as i start to
 refactor our code.

Great

 previously we have been handing around keystone Client objects to
 perform all of our operations. this leads to some trouble as we expected
 the user_id, and project_id, to be present in the client. so, 3 questions.
 
 1. is it safe to set the user_id and project_id on a Client object?
 (i notice that i am able to perform this operation and it would make
 things slightly easier to refactor)

It's safe in that if you force set it on the client then there isn't anything 
in the client that will override it, I don't know if i'd recommend it though.
 
 2. are there plans for the new keystoneclient to automatically fill in
 user_id and project_id for Session/Auth based clients?

No
 
 3. would it be better to transform our code to pass around Auth plugin
 objects instead of Client objects?

Absolutely.

So conceptually this is what we're trying to get to with keystoneclient. 
Keystone has 2 related but distinct jobs: one is to provide authentication for 
all the services and one is to manage its own CRUD operations. Using sessions 
and auth plugins are authentication operations and the existing keystoneclient 
should be used only for keystone's CRUD operations. This is also the intent 
behind the upcoming keystoneauth/keystoneclient split, where authentication 
options that are common to all clients are going to get moved to keystoneauth.

If you find yourself using a keystoneclient object for auth I would consider 
that a bug.

There is a larger question here about what sahara is doing with the user_id and 
project_id, typically this would be received from auth_token middleware or 
otherwise be implied by the token that is passed around. If you are passing 
these parameters to other clients we should fix those clients. For most 
projects this is handled by passing around the Context object which contains a 
session, a plugin and all the information from auth_token middleware.
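The context pattern Jamie describes can be sketched with small stand-ins; the class and field names here are hypothetical illustrations, not any project's actual Context:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Context:
    """Hypothetical per-request context in the shape described above:
    identity fields come from auth_token middleware, while the session
    and auth plugin are what actually get handed to service clients."""
    user_id: str
    project_id: str
    auth: Any = None      # a keystone auth plugin (e.g. a token plugin)
    session: Any = None   # a shared keystoneauth-style Session

def make_client(client_cls, ctx):
    # Clients are always built from session + auth, never from bare ids.
    return client_cls(session=ctx.session, auth=ctx.auth)
```

The point of the sketch is that other clients never receive `user_id` or `project_id` directly; those stay on the context for the service's own use.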

 thanks again for the help,
 mike
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, August 6, 2015 6:25:29 PM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 On 06/08/2015 00:54, Jamie Lennox wrote:
  
  
  - Original Message -
  From: David Lyle dkly...@gmail.com To: OpenStack Development
  Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org Sent: Thursday, August 6, 2015
  5:52:40 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
  Federated Login
  
  Forcing Horizon to duplicate Keystone settings just makes
  everything much harder to configure and much more fragile. Exposing
  whitelisted, or all, IdPs makes much more sense.
  
  On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews 
  dolph.math...@gmail.com  wrote:
  
  
  
  On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli 
  steve...@ca.ibm.com  wrote:
  
  
  
  
  
  Some folks said that they'd prefer not to list all associated idps,
  which i can understand. Why?
  
   So the case i heard and i think is fairly reasonable is providing
   corporate logins to a public cloud. Taking the canonical coke/pepsi
   example if i'm coke, i get asked to login to this public cloud i then
   have to scroll through all the providers to find the COKE.COM domain
   and i can see for example that PEPSI.COM is also providing logins to
   this cloud. Ignoring the corporate privacy implications this list has
   the potential to get long.
 
 This is the whole purpose of the mod we are currently making to Horizon.
 If you look at our screenshots on InVision, you will see the user has
 the choice to either list all (potentially hundreds of) IdPs, or start
 to type in the name of his organisation. With type ahead, we then filter
 the IdPs to match the characters the user enters. This is also shown in
 our screenshots. So using your coke/pepsi example above, the Coke user
 would type C and the list of IdPs would be immediately culled to only
 contain those with C in their name (and pepsi would be removed from the
 list). When he enters O then the list is further culled to only IdPs
 containing co in their name. Consequently with only minor effort from
 the user (both mental load and physical load) the user's IdP is very
 quickly revealed to him, allowing him to login. See
 
 https://openstack.invisionapp.com/d/#/console/4277424/92772663/preview

So my point here was that in many situations you strictly don't want to allow 
people to see this entire list. 
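The type-ahead culling David describes can be sketched as a simple filter; this assumes a prefix match on the displayed name (a production type-ahead might instead match on any word boundary), and the function name is illustrative:

```python
def filter_idps(query, idps):
    """Cull a list of IdP display names to those whose name starts with
    the characters typed so far (case-insensitive prefix match)."""
    q = query.lower()
    return [name for name in idps if name.lower().startswith(q)]
```

With this, typing "C" culls the list to COKE.COM-style entries and typing "CO" culls it further, as in the InVision screenshots.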

 Think about for example how you can do a
  corporate login to gmail, you certainly don't pick from a list of
  auth providers for gmail - there would be thousands.
 
 Actually gmail (at least for me) works in a different way. It takes your
 email address and ASSUMES that your idp is the same as your domain name.
 So no list of IdPs is presented. Instead the IdP name is computed
 automatically from your email address. This approach won't work for everyone.
 
 
  
  My understanding of the usage then would be that coke would have been
  provided a (possibly branded) dedicated horizon that backed onto a
  public cloud and that i could then from horizon say that it's only
  allowed access to the COKE.COM domain (because the UX for inputting a
  domain at login is not great so per customer dashboards i think make
  sense) and that for this instance of horizon i want to show the 3 or
  4 login providers that COKE.COM is going to allow.
  
  Anyway you want to list or whitelist that in keystone is going to
  involve some form of IdP tagging system where we have to say which
  set of idps we want in this case and i don't think we should.
  
  @David - when you add a new IdP to the university network are you
 
 the list of IdPs is centrally (i.e. nationally) managed, and every UK
 university/federation member is sent a new list periodically. So we do
 not add new IdPs, we simply use the list that is provided to us.
 
 
  having to provide a new mapping each time?
 
 Since all federation members use the EduPerson schema, one set of
 mapping rules is applicable to all IdPs. They don't need to be updated.
 
 So to conclude:
 a) we don't need to do anything when the federation membership changes
 (except use the new list)
 b) we don't need to change mapping rules
 c) we don't need to tailor user interfaces
 
 We would like to move OpenStack in this direction, where there is
 minimal effort to managing federation membership. We believe our
 proposed change to Horizon is one step in the right direction.
 
 
  I know the CERN answer to
  this with websso was to essentially group many IdPs behind the same
  keystone idp because they will all produce the same assertion values
  and consume the same mapping.
 
 Not a good solution from a trust perspective since you don't know who the
 actual IdP is. You are told it is always the proxy IdP.
 
  
  Maybe the answer here is to provide the option in
  django_openstack_auth, a plugin (again

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Jamie Lennox
- Original Message -

 From: Dolph Mathews dolph.math...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Friday, August 7, 2015 9:09:25 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

 On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad  lbrags...@gmail.com 
 wrote:

  On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews  dolph.math...@gmail.com 
  wrote:
 

   On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox  jamielen...@redhat.com 
   wrote:
  
 

     - Original Message -
      From: David Lyle dkly...@gmail.com
      To: OpenStack Development Mailing List (not for usage questions)
      openstack-dev@lists.openstack.org
      Sent: Thursday, August 6, 2015 5:52:40 AM
      Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

      Forcing Horizon to duplicate Keystone settings just makes everything
      much harder to configure and much more fragile. Exposing whitelisted,
      or all, IdPs makes much more sense.

      On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews dolph.math...@gmail.com wrote:

       On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli steve...@ca.ibm.com wrote:

        Some folks said that they'd prefer not to list all associated idps,
        which i can understand.
        Why?

     So the case i heard and i think is fairly reasonable is providing
     corporate logins to a public cloud. Taking the canonical coke/pepsi
     example if i'm coke, i get asked to login to this public cloud i then
     have to scroll through all the providers to find the COKE.COM domain
     and i can see for example that PEPSI.COM is also providing logins to
     this cloud. Ignoring the corporate privacy implications this list has
     the potential to get long. Think about for example how you can do a
     corporate login to gmail, you certainly don't pick from a list of auth
     providers for gmail - there would be thousands.

     My understanding of the usage then would be that coke would have been
     provided a (possibly branded) dedicated horizon that backed onto a
     public cloud and that i could then from horizon say that it's only
     allowed access to the COKE.COM domain (because the UX for inputting a
     domain at login is not great so per customer dashboards i think make
     sense) and that for this instance of horizon i want to show the 3 or 4
     login providers that COKE.COM is going to allow.

     Anyway you want to list or whitelist that in keystone is going to
     involve some form of IdP tagging system where we have to say which set
     of idps we want in this case and i don't think we should.

    That all makes sense, and I was admittedly only thinking of the private
    cloud use case. So, I'd like to discuss the public and private use cases
    separately:

    In a public cloud, is there a real use case for revealing *any* IdPs
    publicly? If not, the entire list should be made private using
    policy.json, which we already support today.

   The user would be required to know the id of the IdP in which they want to
   federate with, right?

  As a federated end user in a public cloud, I'd be happy to have a custom URL
  / bookmark for my IdP / domain (like http://customer-x.cloud.example.com/ or
  http://cloud.example.com/customer-x ) that I need to know to kickoff the
  correct federated handshake with my IdP using a single button press
  (Login).

I always envisioned the subdomain method. I would say no to listing IdPs, but 
it's not as simple as making the list private, because you will still need to 
provide at least one IdP option manually in that horizon's local_settings, at 
which point you should just turn off listing because you know it's always going 
to get a 403. I'm not sure how this would be managed today, because we have a 
single WebSSO entry point so you can't really specify the IdP you want from the 
login page - it's expected to have your own discovery page, hence the spec 
https://review.openstack.org/#/c/199339/ 

    In a private cloud, is there a real use case for fine-grained
    public/private attributes per IdP? (The stated use case was for a public
    cloud.) It seems the default behavior should be that horizon fetches the
    entire list from keystone.

I don't think so. I think privately you would list everything. 

I feel at this point I'm missing something obvious, so let me explain my 
understanding of the current flow. 

* From horizon you select federated provider, the key of this is a protocol 
name, eg saml. 
* On login you get redirected

Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients

2015-08-05 Thread Jamie Lennox
Hey Mike, 

I think one of the hacks that are in place to try and keep compatibility with 
the old and new ways of using the client is returning the wrong thing. Compare 
the output of trustor.user_id and trustor_auth.get_user_id(sess). For me 
trustor.user_id is None, which explains why you'd get permission errors. 

Whether this is a bug in keystoneclient is debatable, because we had to keep 
compatibility with the old options while not updating them for the new paths; 
the ambiguity is certainly bad. 

The command that works for me is: 

trustor.trusts.create(
trustor_user=trustor_auth.get_user_id(sess),
trustee_user=trustee_auth.get_user_id(sess),
project=trustor_auth.get_project_id(sess),
role_names=['Member'],
impersonation=True,
expires_at=None)

We're working on a keystoneclient 2.0 that will remove all that old code.


Let me know if that fixes it for you.

jamie
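The trap Jamie describes (the old-style client attribute silently staying unset under session/plugin auth) can be illustrated with small stand-ins; both classes below are hypothetical stubs, not the real keystoneclient objects:

```python
class StubAuthPlugin:
    """Stand-in for a session-based auth plugin: identity is resolved
    through the plugin and a session, not stored on the client."""
    def __init__(self, user_id):
        self._user_id = user_id

    def get_user_id(self, session):
        return self._user_id

class StubClient:
    """Stand-in for the old-style client: user_id is only populated when
    credentials are passed directly to the constructor."""
    def __init__(self, session=None, auth=None, user_id=None):
        self.user_id = user_id   # stays None under session/plugin auth
        self.auth = auth

auth = StubAuthPlugin("abc123")
trustor = StubClient(session=object(), auth=auth)

print(trustor.user_id)         # the trap: looks usable but is None
print(auth.get_user_id(None))  # the reliable path: "abc123"
```

This is why the working trust creation above asks the auth plugin (`trustor_auth.get_user_id(sess)`) rather than reading `trustor.user_id` off the client.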



- Original Message -
 From: michael mccune m...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, August 5, 2015 11:37:10 PM
 Subject: Re: [openstack-dev] [keystone] policy issues when generating trusts 
 with different clients
 
 On 08/05/2015 02:34 AM, Steve Martinelli wrote:
  I think this is happening because the last session created was based
  off of trustee_auth. Try creating 2 sessions, one for each user (trustor
  and trustee). Maybe Jamie will chime in.
 
 just as a followup, i tried creating new Session objects for each client
 and i still get permission errors. i'm going to dig into the trust
 permission validation stuff a little.
 
 mike
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-05 Thread Jamie Lennox


- Original Message -
 From: David Lyle dkly...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, August 6, 2015 5:52:40 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 Forcing Horizon to duplicate Keystone settings just makes everything much
 harder to configure and much more fragile. Exposing whitelisted, or all,
 IdPs makes much more sense.
 
 On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews  dolph.math...@gmail.com 
 wrote:
 
 
 
 On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli  steve...@ca.ibm.com 
 wrote:
 
 
 
 
 
 Some folks said that they'd prefer not to list all associated idps, which i
 can understand.
 Why?

So the case i heard and i think is fairly reasonable is providing corporate 
logins to a public cloud. Taking the canonical coke/pepsi example if i'm coke, 
i get asked to login to this public cloud i then have to scroll through all the 
providers to find the COKE.COM domain and i can see for example that PEPSI.COM 
is also providing logins to this cloud. Ignoring the corporate privacy 
implications this list has the potential to get long. Think about for example 
how you can do a corporate login to gmail, you certainly don't pick from a list 
of auth providers for gmail - there would be thousands. 

My understanding of the usage then would be that coke would have been provided 
a (possibly branded) dedicated horizon that backed onto a public cloud and that 
i could then from horizon say that it's only allowed access to the COKE.COM 
domain (because the UX for inputting a domain at login is not great so per 
customer dashboards i think make sense) and that for this instance of horizon i 
want to show the 3 or 4 login providers that COKE.COM is going to allow. 

Anyway you want to list or whitelist that in keystone is going to involve some 
form of IdP tagging system where we have to say which set of idps we want in 
this case and i don't think we should.

@David - when you add a new IdP to the university network are you having to 
provide a new mapping each time? I know the CERN answer to this with websso was 
to essentially group many IdPs behind the same keystone idp because they will 
all produce the same assertion values and consume the same mapping. 

Maybe the answer here is to provide the option in django_openstack_auth, a 
plugin (again) of fetch from keystone, fixed list in settings or let it point 
at a custom text file/url that is maintained by the deployer. Honestly if 
you're adding and removing idps this frequently i don't mind making the 
deployer maintain some of this information out of scope of keystone.


Jamie

 
 
 
 
 
 Actually, I like jamie's suggestion of just making horizon a bit smarter, and
 expecting the values in the horizon settings (idp+protocol).
 But, it's already in keystone.
 
 
 
 
 
 
 
 Thanks,
 
 Steve Martinelli
 OpenStack Keystone Core
 
 Dolph Mathews ---2015/08/05 01:38:09 PM---On Wed, Aug 5, 2015 at 5:39 AM,
 David Chadwick  d.w.chadw...@kent.ac.uk  wrote:
 
 From: Dolph Mathews  dolph.math...@gmail.com 
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Date: 2015/08/05 01:38 PM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 
 
 On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick  d.w.chadw...@kent.ac.uk 
 wrote:
 
  On 04/08/2015 18:59, Steve Martinelli wrote:
   Right, but that API is/should be protected. If we want to list IdPs
   *before* authenticating a user, we either need: 1) a new API for listing
   public IdPs or 2) a new policy that doesn't protect that API.

  Hi Steve

  yes this was my understanding of the discussion that took place many months
  ago. I had assumed (wrongly) that something had been done about it, but I
  guess from your message that we are no further forward on this

  Actually 2) above might be better reworded as - a new policy/engine that
  allows public access to be a bona fide policy rule

  The existing policy simply seems wrong. Why protect the list of IdPs?

  regards

  David

  Thanks,

  Steve Martinelli
  OpenStack Keystone Core

  From: Lance Bragstad lbrags...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: 2015/08/04 01:49 PM
  Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

  On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish drf...@us.ibm.com wrote:

   Hi David,

   This is a cool looking UI. I've made a minor comment on it in InVision.
   I'm curious if this is an implementable idea - 

Re: [openstack-dev] [puppet][keystone] To always use or not use domain name?

2015-08-05 Thread Jamie Lennox


- Original Message -
 From: Adam Young ayo...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, August 6, 2015 1:03:55 AM
 Subject: Re: [openstack-dev] [puppet][keystone] To always use or not use 
 domain name?
 
 On 08/05/2015 08:16 AM, Gilles Dubreuil wrote:
  While working on trust provider for the Keystone (V3) puppet module, a
  question about using domain names came up.
 
  Shall we allow or not to use names without specifying the domain name in
  the resource call?
 
  I have this trust case involving a trustor user, a trustee user and a
  project.
 
  For each user/project the domain can be explicit (mandatory):
 
  trustor_name::domain_name
 
  or implicit (optional):
 
  trustor_name[::domain_name]
 
  If a domain isn't specified the domain name can be assumed (intuited)
  from either the default domain or the domain of the corresponding
  object, if unique among all domains.
 If you are specifying project by name, you must specify the domain either
 via name or id.  If you specify project by ID, you run the risk of
 conflicting if you also provide a domain specifier (ID or name).
 
 
  Although allowing to not use the domain might seems easier at first, I
  believe it could lead to confusion and errors. The latter being harder
  for the user to detect.
 
  Therefore it might be better to always pass the domain information.
 
 Probably a good idea, as it will catch if you are making some
 assumption.  I.e., I say DomainX ProjectQ but I mean DomainQ ProjectQ.

Agreed. Like it or not, domains are a major part of using the v3 API, and if you 
want to use project names and user names we should enforce that domains are 
provided. 
Particularly at the puppet level (dealing with users who should understand this 
stuff), anything that tries to guess what the user means is a bad idea and is 
going to lead to confusion when it breaks.
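The explicit-domain rule argued for here can be sketched as a parser for the puppet-style `name::domain` notation used earlier in the thread; the function name and fallback value are illustrative only:

```python
def parse_qualified_name(name, require_domain=True):
    """Split the puppet-style 'user::domain' notation. With
    require_domain set (the behaviour argued for in this thread), a bare
    name is rejected instead of being silently resolved against a
    default domain."""
    if "::" in name:
        user, _, domain = name.partition("::")
        return user, domain
    if require_domain:
        raise ValueError("domain must be explicit: %r" % name)
    return name, "Default"  # illustrative fallback only
```

Rejecting bare names makes the wrong-domain mistake fail loudly at parse time rather than resolving to an unintended project or user.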

 
  I believe using the full domain name approach is better.
  But it's difficult to tell because in puppet-keystone and
  puppet-openstacklib now rely on python-openstackclient (OSC) to
  interface with Keystone. Because we can use OSC defaults
  (OS_DEFAULT_DOMAIN or equivalent to set the default domain) doesn't
  necessarily makes it the best approach. For example hard coded value [1]
  makes it flaky.
 
  [1]
  https://github.com/openstack/python-openstackclient/blob/master/openstackclient/shell.py#L40
 
  To help determine the approach to use, any feedback will be appreciated.
 
  Thanks,
  Gilles
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-04 Thread Jamie Lennox


- Original Message -
 From: Steve Martinelli steve...@ca.ibm.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, August 5, 2015 3:59:34 AM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 Right, but that API is/should be protected. If we want to list IdPs *before*
 authenticating a user, we either need: 1) a new API for listing public IdPs
 or 2) a new policy that doesn't protect that API.
 
 Thanks,

Is there a real requirement here for this to be a dynamic listing as opposed to 
something that can be edited from the horizon local_settings? There are obvious 
use cases for both: situations where you want this to be dynamic, and situations 
where you very carefully want to control which IdPs are available to log in 
with; from that perspective it would be a very unusual API for keystone to have. 

My understanding of the current websso design, where we always logged in via 
/v3/OS-FEDERATION/auth/websso/{protocol}, was that you would run a discovery 
page on that address that allowed you to customize which IdPs you exposed 
outside of keystone. Personally i don't like this, which is why i wrote this 
spec[1]. However my intention there would have been to manually specify 
in the local_settings what IdPs were available and reuse the current horizon 
WebSSO drop down box.

Jamie 


[1] https://review.openstack.org/#/c/199339/  
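For reference, the existing Horizon WebSSO drop-down Jamie mentions is driven from local_settings; a hedged sketch follows, using the option names from Horizon's websso support of that era, with the choice entries themselves being illustrative:

```python
# local_settings.py -- sketch only; the second element of each tuple is
# the display label shown in the login drop-down.
WEBSSO_ENABLED = True
WEBSSO_CHOICES = (
    ("credentials", "Keystone Credentials"),
    ("saml2", "Corporate SAML IdP"),
)
WEBSSO_INITIAL_CHOICE = "credentials"
```

Under the proposal above, the deployer-curated entries in this tuple would stand in for a dynamic listing from keystone.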


 Steve Martinelli
 OpenStack Keystone Core
 
 Lance Bragstad ---2015/08/04 01:49:29 PM---On Tue, Aug 4, 2015 at 10:52 AM,
 Douglas Fish drf...@us.ibm.com wrote:  Hi David,
 
 From: Lance Bragstad lbrags...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 2015/08/04 01:49 PM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 
 
 
 
 
 
 On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish  drf...@us.ibm.com  wrote:
 
 Hi David, This is a cool looking UI. I've made a minor comment on it in
 InVision. I'm curious if this is an implementable idea - does keystone
 support large numbers of 3rd party idps? is there an API to retrieve the
 list of idps or does this require carefully coordinated configuration
 between Horizon and Keystone so they both recognize the same list of idps?
 There is an API call for getting a list of Identity Providers from Keystone
 
 http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers
 
 
 
  Doug Fish

  David Chadwick d.w.chadw...@kent.ac.uk wrote on 08/01/2015 06:01:48 AM:

   From: David Chadwick d.w.chadw...@kent.ac.uk
   To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
   Date: 08/01/2015 06:05 AM
   Subject: [openstack-dev] [Keystone] [Horizon] Federated Login

   Hi Everyone

   I have a student building a GUI for federated login with Horizon. The
   interface supports both a drop down list of configured IDPs, and also
   Type Ahead for massive federations with hundreds of IdPs. Screenshots
   are visible in InVision here

   https://invis.io/HQ3QN2123

   All comments on the design are appreciated. You can make them directly
   to the screens via InVision

   Regards

   David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] keystone session upgrade

2015-07-17 Thread Jamie Lennox

On 18 Jul 2015 6:29 am, michael mccune m...@redhat.com wrote:

 On 07/16/2015 04:31 PM, michael mccune wrote: 
  i will also likely be rewriting the spec to encompass these changes if i 
  can get them working locally. 

 just wanted to follow up before i rewrite the spec. 

 i think the most sensible approach at this point is to store an auth 
 plugin object in our context. this will alleviate some of our reliance 
 on the username/tenant_id/etc. authentication values that we currently 
 use. this object will also be used in conjunction with the session cache 
 to produce clients. 

 the session cache will be removed from the context and instead become a 
 singleton type object in the sahara.service.sessions module. the cache 
 will contain several specific functions for creating session objects for 
 our different needs. for example, nova session will need to pass the 
 specific nova certificate to the session. for non-specific needs the 
 session object can be shared between several clients. 

 the clients themselves will use a combination of auth plugin object and 
 session object for creation. in this manner we can associate different 
 authentications with the sessions as needed and still share the 
 connection pooling and caching that occurs in the session object. this 
 will also allow us to continue to create admin clients as needed, we can 
 either pass an admin authentication object to the client or set the 
 context to have an admin authenticated plugin object. 

 there are still a few questions to be answered about trust scoping, but 
 i think they will fit in this model. i would still be curious to hear 
 any thoughts about this approach, but i will continue to test it and 
 work towards rewriting the spec. 

 regards, 
 mike 

So I'm not familiar with how Sahara handles contexts, however from a theoretical 
standpoint anything that is defined in config should be able to be cached 
globally - so service specific sessions, and admin auth. The context typically 
would contain things relevant to this request, however for convenience it might 
be worth having a pointer from context to the global.

You then create clients with the combination of session+auth. You don't 
need/want to attach the auth plugin to the session here as that makes it 
difficult to reuse.

When using sessions like this client creation is cheap (no requests are made) 
so there's no reason to hang on to the client objects themselves.

I'm not sure if that helped at all but catch me on IRC and I'll help out with 
what I can.

Jamie
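The shared-session scheme mike and Jamie sketch out (per-service sessions cached globally, auth supplied per client) can be illustrated with a small stand-in; the class and module shape are hypothetical, not sahara's actual code:

```python
class SessionCache:
    """Module-level session cache in the shape described above. Sessions
    are shared per service so connection pooling is reused; auth is
    supplied to each client rather than attached to the session."""
    def __init__(self, session_factory):
        self._factory = session_factory
        self._sessions = {}

    def session_for(self, service="default"):
        # e.g. a 'nova' session could be built with nova's specific
        # certificate, while generic services share one session.
        if service not in self._sessions:
            self._sessions[service] = self._factory(service)
        return self._sessions[service]
```

Because client construction against a session makes no requests, clients can be created on demand from `session_for(...)` plus whichever auth plugin (user or admin) the request calls for, with no need to cache the client objects themselves.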

 __ 
 OpenStack Development Mailing List (not for usage questions) 
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] V3 authentication to swift

2015-07-16 Thread Jamie Lennox
Glancers, 

A while ago I wrote an email outlining a couple of ways we could support 
keystone v3 authentication when using swift as a backend for glance_store. In 
the long term it would be best to transition swiftclient to use sessions as 
this would allow us to do more extensive (kerberos, ssl etc) authentication in 
the future, however this is a longer term plan that requires a lot of effort 
and coordination. For right now glance-swift is the last piece of devstack 
that only works with v2 authentication[1] and with a relatively small amount of 
code we can support v3 username/password which is by far the most common 
scenario.

I've been prompting for a while on IRC; however, can I please request some 
reviews of https://review.openstack.org/#/c/193422/ as it has now become a 
blocker for the v3 authentication effort.


Thanks for your help, 

Jamie 


[1] Note that by this I mean that with v2 keystone disabled, glance-swift 
communication is the last thing in the standard devstack gate path that can't 
be configured to v3. It does not mean that tempest will succeed or that all of 
OpenStack is v3 compatible just that we can start running meaningful gate jobs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] V3 Authentication for swift store

2015-06-18 Thread Jamie Lennox


- Original Message -
 From: stuart mclaren stuart.mcla...@hp.com
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 18 June, 2015 7:06:12 PM
 Subject: Re: [openstack-dev] [glance] V3 Authentication for swift store
 
 Hi Jamie,
 
 Glance has another way of specifying the swift credentials for the single
 tenant store which may (?) be useful here.
 
 In glance-swift.conf you can specify something like:
 
 [ref1]
 user = tenant:user1
 key = key1
 auth_address = auth...@example.com
 
 which means that in the database 'ref1' is stored instead of the
 credentials: the location ends up looking something like:
 
 swift+config://ref1@swift/container/object
 
 The swift+config schema is used to indicate the real creds should be
 fetched from the config file. (This avoids having to put them in the
 database which isn't desirable and complicates password changing.)
 
 We'd have to think about corner cases (I think --copy-from should be ok).
 
 -Stuart
 

snip 

Interesting, this actually maps really closely with one of the concepts that 
loading auth plugins from conf already provides. You can say

[keystone_authtoken]
auth_section = ref1
...

[ref1]
auth_plugin = password
auth_url = ... 
username = ...
...

So that you can share or separate auth. We can probably leverage something 
similar here.
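As a rough sketch of how that auth_section indirection can be resolved, here is a toy loader built on configparser; the real oslo.config-based plugin loading differs in detail, and the section names below just mirror the example above:

```python
import configparser

CONF = """
[keystone_authtoken]
auth_section = ref1

[ref1]
auth_plugin = password
auth_url = https://keystone.example.com:5000/v3
username = glance
"""


def load_auth_options(cp, group):
    # Follow the auth_section indirection if present, so several groups
    # can share one auth block, or keep their own separate ones.
    if cp.has_option(group, 'auth_section'):
        group = cp.get(group, 'auth_section')
    return dict(cp.items(group))


cp = configparser.ConfigParser()
cp.read_string(CONF)
opts = load_auth_options(cp, 'keystone_authtoken')
```

The indirection is what lets two services point at the same [ref1] block without duplicating credentials.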

Is this the correct way to configure glance? Would I be working on a 
deprecated section of code to make this support plugins? I can't really tell 
what is recommended from the code.

So the question still comes down to fix it properly or get it working. If we 
configure glance this way I can add user_domain_id, user_domain_name, 
project_domain_id, project_domain_name to the variables that are read from 
[ref1] and get v3 support for user/pass that way fairly easily. To move to 
fully loading auth_plugins we would need to either get swiftclient support for 
sessions or move to using the HTTP API directly. 

I'm inclined to just patch it in for now, work on getting swiftclient session 
support and then use that when ready. Would this be ok? 

Jamie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] V3 Authentication for swift store

2015-06-18 Thread Jamie Lennox


- Original Message -
 From: Alistair Coles alistair.co...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, 18 June, 2015 4:39:52 PM
 Subject: Re: [openstack-dev] V3 Authentication for swift store
 
 
 
  -Original Message-
  From: Jamie Lennox [mailto:jamielen...@redhat.com]
  Sent: 18 June 2015 07:02
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [glance] V3 Authentication for swift store
  
  Hey everyone,
  
  TL;DR: glance_store requires a way to do v3 authentication to the swift
  backend.
  
  snip
  
  However in future we are trying to open up authentication so it's not
  limited to
  only user/password authentication. Immediate goals for service to service
  communications are to enable SSL client certificates and kerberos
  authentication. This would be handled by keystoneclient sessions but they
  are
  not supported by swift and it would require a significant rewrite of
  swiftclient to
  do, and the swift team has indicated they do not wish to invest more time
  into
  their client.
 
 If we consider specifically the swiftclient Connection class, I wonder how
 significant a rewrite would be to support session objects? I'm not too
 familiar with sessions - is a session an object with an interface to fetch a
 token and service endpoint url? If so maybe Connection could accept a
 session in lieu of auth options and call the session rather than its
 get_auth methods.
 
 If we can move towards sessions in swiftclient then that would be good IMHO,
 since we have also have requirement to support fetching a service token [1],
 which I guess would (now or in future) also be handled by the session?
 
 [1] https://review.openstack.org/182640
 
 Alistair
 

So the sessions work is built on, and is modelled after requests.Session. It 
consists of two parts, the session which is your transport object involving 
things like CA certs, verify flags etc and an auth plugin which is how we can 
handle new auth mechanisms. Once coupled the interface looks very similar to a 
requests.Session with get(), post(), request() etc methods, with the addition 
that requests are automatically authenticated and things like the service 
catalog are handled for you. I wrote a blog post a while back which explains 
many of the concepts[2].
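A minimal stand-in sketch of that split between transport and auth plugin (illustrative classes only, not the real keystoneclient implementation):

```python
class AuthPlugin(object):
    # Stand-in auth plugin: owns the auth mechanism, the token and the
    # service-catalog lookups, so new mechanisms slot in here.
    def get_token(self):
        return 'gAAAA-token'

    def get_endpoint(self, service_type):
        catalog = {'image': 'http://glance.example.com:9292'}
        return catalog[service_type]


class Session(object):
    # Stand-in session: the transport object (CA certs, verify flag,
    # timeouts) with a requests.Session-like interface that injects
    # auth into every call.
    def __init__(self, auth, verify=True):
        self.auth = auth
        self.verify = verify

    def request(self, path, method, service_type):
        url = self.auth.get_endpoint(service_type) + path
        headers = {'X-Auth-Token': self.auth.get_token()}
        # A real session would now send the HTTP request.
        return (method, url, headers)

    def get(self, path, **kwargs):
        return self.request(path, 'GET', **kwargs)


sess = Session(auth=AuthPlugin())
method, url, headers = sess.get('/v2/images', service_type='image')
```

Callers never touch tokens or the catalog directly; swapping the plugin (password, client certs, kerberos) leaves the transport untouched.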

The fastest way I could see including Sessions into swiftclient would be to 
create new Connection and HttpConnection objects. Would this be something swift 
is interested in? I didn't mean to offend when saying that you didn't want to 
put any more time into the client, there was a whole session in Paris about how 
the client had problems but it was just going to limp along until SDK was 
ready. Side note, I don't know how this decision will be affected based on 
Vancouver conversations about how SDK may not be suitable for service-service 
communications. 

Regarding service tokens, we have an auth plugin that is passed down from 
auth_token middleware that will include X-Service-Token in requests which I 
think swiftclient would benefit from.


Jamie

[2] 
http://www.jamielennox.net/blog/2014/09/15/how-to-use-keystoneclient-sessions/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] V3 Authentication for swift store

2015-06-18 Thread Jamie Lennox
Hey everyone, 

TL;DR: glance_store requires a way to do v3 authentication to the swift backend.

The keystone team is making a push to properly deprecate the v2 authentication 
APIs this cycle. As part of that we have a series of devstack reviews that 
moves devstack over to only using v3 APIs[1] and an experimental gate job that 
runs devstack with the keystone v2 api disabled.

The current blocker for this gate job is that in glance's single-tenant swift 
backend mode the config options only allow v2 authentication.

Looking at glance store the username and password are stored as part of the 
location parameter in the form: 

swift://username:project_name:password@keystone/container

even though devstack is still using the (deprecated?) 

swift_store_user = username:project_name
swift_store_key = password
swift_store_container = container 

I don't know how this relates to swift_store_config_files.
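For illustration, a toy parser for the location URI format quoted above; this is a hypothetical helper, the real glance_store parsing differs, but it shows why the credentials end up embedded in the database location:

```python
import re

LOCATION = 'swift://username:project_name:password@keystone/container'


def parse_location(loc):
    # Split the single-tenant store location into its credential parts:
    # user, project, key, an auth reference and the container name.
    m = re.match(r'swift://([^:]+):([^:]+):([^@]+)@([^/]+)/(.+)$', loc)
    if m is None:
        raise ValueError('not a swift location: %s' % loc)
    user, project, key, auth_ref, container = m.groups()
    return {'user': user, 'project_name': project, 'key': key,
            'auth_ref': auth_ref, 'container': container}


creds = parse_location(LOCATION)
```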

There is support for v3 in swiftclient (though it's kind of ugly), to do v3 
authentication I have to do: 

c = swiftclient.Connection(authurl='http://keystoneurl:5000/v3',
   user=username,
   key=password,
   auth_version='3',
   os_options={'project_name': project_name,
   'project_domain_id': 'default',
   'user_domain_id': 'default'})

However in future we are trying to open up authentication so it's not limited 
to only user/password authentication. Immediate goals for service to service 
communications are to enable SSL client certificates and kerberos 
authentication. This would be handled by keystoneclient sessions but they are 
not supported by swift and it would require a significant rewrite of 
swiftclient to do, and the swift team has indicated they do not wish to invest 
more time into their client.

This leads me to my question: How do we support additional authentication 
parameters and in future different parameters? 

We could undo the deprecation of the config file specified credentials and add 
the additional options there. This would get us the short term win of moving to 
v3 auth but would need to be addressed in future for newer authentication 
mechanisms.

I wrote a blog a while ago regarding how sessions supports loading different 
types of authentication from conf files[2], however as swiftclient doesn't 
support this the best it could do is fetch a url/token combo with which glance 
could make requests and it would have to handle reauthentication and retries 
somewhat manually. I actually think rewriting the required parts of the client 
wouldn't be too difficult, the problem then is whether this should live in 
glance or in swiftclient. This would also involve credentials in the config 
file rather than the location option. 
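The url/token-combo workaround might look roughly like this. StubSession stands in for a real keystone session (which does expose get_token() and get_endpoint()), and swiftclient's Connection accepts pre-authenticated values via preauthurl/preauthtoken; the point is that token expiry and retries are then the caller's problem, which is exactly the manual burden described above:

```python
class StubSession(object):
    # Stand-in for a keystone session; the real one exposes get_token()
    # and get_endpoint(), which is all this bridge relies on.
    def get_token(self):
        return 'AUTH_tk123'

    def get_endpoint(self, service_type=None):
        return 'http://swift.example.com:8080/v1/AUTH_proj'


def swift_connection_kwargs(sess):
    # Bridge a session to swiftclient's pre-authenticated mode, i.e.
    # Connection(preauthurl=..., preauthtoken=...). The token is used
    # as-is, so re-authentication on expiry must be handled manually.
    return {
        'preauthurl': sess.get_endpoint(service_type='object-store'),
        'preauthtoken': sess.get_token(),
    }


kwargs = swift_connection_kwargs(StubSession())
```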

I'm not overly familiar with glance_store so there may be other options or what 
I've suggested may not be possible but I'd love to hear some opinions from the 
glance team as this is quickly becoming a blocker for keystone.


Thanks,

Jamie 



[1] https://review.openstack.org/#/c/186684/
[2] http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-08 Thread Jamie Lennox


- Original Message -
 From: David Chadwick d.w.chadw...@kent.ac.uk
 To: openstack-dev@lists.openstack.org
 Sent: Saturday, 6 June, 2015 6:01:10 PM
 Subject: Re: [openstack-dev] [keystone][reseller] New way to get a project 
 scoped token by name
 
 
 
 On 06/06/2015 00:24, Adam Young wrote:
  On 06/05/2015 01:15 PM, Henry Nash wrote:
  I am sure I have missed something along the way, but can someone
  explain to me why we need this at all.  Project names are unique
  within a domain, with the exception of the project that is acting as
  its domain (i.e. there can only ever be two names clashing in a
  hierarchy at the domain level and below).  So why isn’t specifying
  “is_domain=True/False” sufficient in an auth scope along with the
  project name?
  
  The limitation of  Project names are unique within a domain is
  artificial and something we should not be enforcing.  Names should only
  be unique within parent project.
 
 +++1

I said the exact same thing as Henry in the other thread that seems to be on 
the same topic. You're correct that the limitation of project names being 
unique within a domain is completely artificial, but it's a constraint that 
allows us to maintain the auth systems we currently have and will not harm the 
reseller model (because they would be creating new domains).

It's also a constraint that we can relax later when multitenancy is a bit more 
established and someone has a real issue with the limitation - it's not 
something we can ever claw back again if we allow looking up projects by 
name with delimiters. 

I think for the time being it's an artificial constraint we should maintain.



  
  This whole thing started by trying to distinguish a domain from a
  project within that domain that both have the same name. We can special
  case that, but it is not a great solution.
  
  
  
 
  Henry
 
  On 5 Jun 2015, at 18:02, Adam Young ayo...@redhat.com
  mailto:ayo...@redhat.com wrote:
 
  On 06/03/2015 05:05 PM, Morgan Fainberg wrote:
  Hi David,
 
  There needs to be some form of global hierarchy delimiter - well
  more to the point there should be a common one across OpenStack
  installations to ensure we are providing a good and consistent (and
  more to the point inter-operable) experience to our users. I'm
  worried a custom defined delimiter (even at the domain level) is
  going to make it difficult to consume this data outside of the
  context of OpenStack (there are applications that are written to use
  the APIs directly).
  We have one already.  We are working JSON, and so instead of project
  name being a string, it can be an array.
 
   Nothing else is backwards compatible.  Nothing else will ensure we
   don't break existing deployments.
 
  Moving forward, we should support DNS notation, but it has to be an
  opt in
 
 
  The alternative is to explicitly list the delimiter in the project (
  e.g. {hierarchy: {delim: ., domain.project.project2}} ). The
  additional need to look up the delimiter / set the delimiter when
  creating a domain is likely to make for a worse user experience than
  selecting one that is not different across installations.
 
  --Morgan
 
  On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick
  d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:
 
 
 
  On 03/06/2015 14:54, Henrique Truta wrote:
   Hi David,
  
   You mean creating some kind of delimiter attribute in the domain
   entity? That seems like a good idea, although it does not
  solve the
   problem Morgan's mentioned that is the global hierarchy delimiter.
 
  There would be no global hierarchy delimiter. Each domain would
  define
  its own and this would be carried in the JSON as a separate
  parameter so
  that the recipient can tell how to parse hierarchical names
 
  David
 
  
   Henrique
  
   Em qua, 3 de jun de 2015 às 04:21, David Chadwick
   d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk
  mailto:d.w.chadw...@kent.ac.uk escreveu:
  
  
  
   On 02/06/2015 23:34, Morgan Fainberg wrote:
Hi Henrique,
   
I don't think we need to specifically call out that we
  want a
   domain, we
should always reference the namespace as we do today.
  Basically, if we
ask for a project name we need to also provide it's
  namespace (your
option #1). This clearly lines up with how we handle
  projects in
   domains
today.
   
I would, however, focus on how to represent the
  namespace in a single
(usable) string. We've been delaying the work on this
  for a while
   since
we have historically not provided a clear way to delimit the
   hierarchy.
If we solve the issue with what is the delimiter
  between domain,
project, and 

Re: [openstack-dev] [Keystone] Domain and Project naming

2015-06-04 Thread Jamie Lennox


- Original Message -
 From: Adam Young ayo...@redhat.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, 4 June, 2015 2:25:52 PM
 Subject: [openstack-dev] [Keystone] Domain and Project naming
 
 With Hierarchical Multitenancy, we have the issue that a project is
 currently restricted in its naming further than it should be.  The domain
 entity enforces that all project names under the domain be
 unique, but really what we should say is that all projects under a
 single parent project be unique.  However, we have, at present, an API
 which allows a user to specify the domain either by name or id and project
 again, either by name or ID, but here we care only about the name.  This
 can be used either in specifying the token, or in operations on the
 project API.
 
 We should change project naming to be nestable, and since we don't have a
 delimiter set, we should expect the names to be an array, where today we
 might have:
 
  "project": {
      "domain": {
          "id": "1789d1",
          "name": "example.com"
      },
      "id": "263fd9",
      "name": "project-x"
  }
 
 we should allow and expect:
 
  "project": {
      "domain": {
          "id": "1789d1",
          "name": "example.com"
      },
      "id": "263fd9",
      "name": ["grandpa", "dad", "daughter"]
  }
 
 This will, of course, break Horizon and lots of other things, which
 means we need a reasonable way to display these paths.  The typical UI
 approach is a breadcrumb trail, and I think something where we put the
 segments of the path in the UI, each clickable, should be
 understandable: I'll defer to the UX experts if this is reasonable or not.
 
 The alternative is that we attempt to parse the project names. Since we
 have not reserved a delimiter, we will break someone somewhere if we
 force one on people.
 
 
 As an alternative, we should start looking into following DNS standards
 for naming projects and hosts.  While a domain should not be required to
 be a DNS registered domain name, we should allow for the case where a
 user wants that to be the case, and to synchronize naming across
 multiple clouds.  In order to enforce this, we would have to have an
 indicator on a domain name that it has been checked with DNS; ideally,
 the user would add a special SRV or TXT record or something that
 Keystone could use to confirm that the user has okayed this domain name
 being used by this cloud...or something perhaps with DNSSEC, checking
 that a user has permission to assign a specific domain name to a set of
 resources in the cloud.  If we do that, the projects under that domain
 should also be valid DNS subzones, and the hosts either FQDNs or some
 alternate record...this would tie in well with Designate.
 
 Note that I am not saying "force this" but rather "allow this" as it
 will simplify the naming when bursting from cloud to cloud:  the Domain
 and project names would then be synchronized via DNS regardless of
 hosting provider.
 
 As an added benefit, we could provide a SRV or TEXT record (or some new
 URL type..I heard one is coming) that describes where to find the home
 Keystone server for a specified domain...it would work nicely with the
 K2K strategy.
 
 If we go with DNS project naming, we can leave all project names in a
 flat string.
 
 
 Note that the DNS approach can work even if the user does not wish to
 register their own DNS.  A hosting provider (I'll pick dreamhost, cuz  I
 know they are listening) could say that each of their tenants picks a
 user name...say that mine is admiyo, they would then create a subdomain
 of admiyo.dreamcompute.dreamhost.com.  All of my subprojects would then
 get additional zones under that.  If I were then to burst from there to
 Bluebox, the Keystone domain name would be the one that I was assigned
 back at Dreamhost.

Back up. Are our current restrictions a problem?

Even with hierarchical projects is it a problem to say that a project name 
still must be unique per domain? I get that in theory you might want to be able 
to identify a nested project by name under other projects but that's not 
something we have to allow immediately.

I haven't followed the reseller case closely but in any situation where you 
hand off control like that you are re-establishing a domain, and so in a 
multitenancy situation each domain can still use their own project names. 

I feel like discussions around nested naming schemes and tying domains to DNS 
is really premature until we have people that are actually using hierarchical 
projects. 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][clients] - Should we implement project to endpoint group?

2015-05-10 Thread Jamie Lennox


- Original Message -
 From: Enrique Garcia garcianava...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, May 11, 2015 2:19:43 AM
 Subject: Re: [openstack-dev] [keystone][clients] - Should we implement 
 project to endpoint group?
 
 Hi Marcos,
 
 I'm not part of the OpenStack development team but coincidently I implemented
 some of these actions on a fork the past week because I needed them in a
 project. If there is interest in these actions I could contribute them back
 or send a link to the repo if it would be helpful for you or someone else.
 Let me know if I can help.
 
 Cheers,
 Enrique.

Absolutely there is always interest for things like this to be contributed 
upstream! We'd love to have it. 

There are fairly extensive docs on how to get started contributing 
https://wiki.openstack.org/wiki/How_To_Contribute and if you're not sure or 
need a hand there is typically people in either #openstack-dev or 
#openstack-keystone that can help. 


Jamie 

 On Fri, 8 May 2015 at 16:03 Marcos Fermin Lobo  marcos.fermin.l...@cern.ch 
 wrote:
 
 
 
 Hi all,
 
 I would like to know if any of you would be interested to implement project
 to endpoint group actions (
 http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-ep-filter-ext.html#project-to-endpoint-group-relationship
 ) for keystone client. Are any of you already behind this?.
 
 Cheers,
 Marcos.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-09 Thread Jamie Lennox
- Original Message -

 From: Jay Reslock jresl...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Saturday, May 9, 2015 6:42:48 AM
 Subject: Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient
 works with keystone sessions?

 Interesting... it is definitely a service endpoint mismatch.

 UI:

 http://10.25.17.63:8004/v1/dac1095f448d476e9990046331415cf6

 keystoneclient.services.list():

 http://10.25.17.63:35357/v3/services/e0a18f2f4b574c75ba56823964a7d7eb

 What can I do to make these match up correctly?

They're network URLs - I can't see anything there. Try using: `openstack 
catalog list`. 

Otherwise I'd turn on Python's debug logging, something like: 

import logging 

logging.basicConfig(level=logging.DEBUG) 

which will give you a bunch of output - though the service catalog will be 
hidden because it's part of the token exchange. 

 On Fri, May 8, 2015 at 4:22 PM Jay Reslock  jresl...@gmail.com  wrote:

  Hi Jamie,
 

  How do I see the service catalog that I am getting back?
 

  On Fri, May 8, 2015 at 3:25 AM Jamie Lennox  jamielen...@redhat.com 
  wrote:
 

   - Original Message -
  
 
From: Jay Reslock  jresl...@gmail.com 
  
 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
  
 
Sent: Friday, May 8, 2015 7:42:50 AM
  
 
Subject: Re: [openstack-dev] [heat][python-heatclient] Does
python-heatclient works with keystone sessions?
  
 
   
  
 
Thanks very much to both of you for your help!
  
 
   
  
 
I was able to get to another error now about EndpointNotFound. I will
  
 
troubleshoot more and review the bugs mentioned by Sergey.
  
 
   
  
 
-Jason
  
 

   It's nice to see people using sessions for this sort of script. Just as a
   pointer EndpointNotFound generally means that it couldn't find a url for
   the
   service you wanted in the service catalog. Have a look at the catalog
   you're
   getting and make sure the heat entry matches what it should, you may have
   to
   change the service_type or interface to match.
  
 

On Thu, May 7, 2015 at 5:34 PM Sergey Kraynev  skray...@mirantis.com 
  
 
wrote:
  
 
   
  
 
   
  
 
   
  
 
Hi Jay.
  
 
   
  
 
 AFAIK, it works, but we can have some minor issues. There are several
 patches
on
  
 
review to improve it:
  
 
   
  
 
https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:improve-sessionclient,n,z
  
 
   
  
 
Also as I remember we really had bug mentioned by you, but fix was
merged.
  
 
Please look:
  
 
https://review.openstack.org/#/c/160431/1
  
 
https://bugs.launchpad.net/python-heatclient/+bug/1427310
  
 
   
  
 
Which version of client do you use? Try to use code from master, it
should
  
 
 work.
  
 
Also one note: the best place for such questions is
  
 
openst...@lists.openstack.org or http://ask.openstack.org/ . And of
course
  
 
channel #heat in IRC.
  
 
   
  
 
Regards,
  
 
Sergey.
  
 
   
  
 
On 7 May 2015 at 23:43, Jay Reslock  jresl...@gmail.com  wrote:
  
 
   
  
 
   
  
 
   
  
 
Hi,
  
 
This is my first mail to the group. I hope I set the subject correctly
and
  
 
that this hasn't been asked already. I searched archives and did not
see
  
 
this question asked or answered previously.
  
 
   
  
 
I am working on a client thing that uses the python-keystoneclient and
  
 
python-heatclient api bindings to set up an authenticated session and
then
  
 
use that session to talk to the heat service. This doesn't work for
heat
but
  
 
does work for other services such as nova and sahara. Is this because
  
 
sessions aren't supported in the heatclient api yet?
  
 
   
  
 
sample code:
  
 
   
  
 
https://gist.github.com/jreslock/a525abdcce53ca0492a7
  
 
   
  
 
I'm using fabric to define tasks so I can call them via another tool.
When
I
  
 
run the task I get:
  
 
   
  
 
TypeError: Client() takes at least 1 argument (0 given)
  
 
   
  
 
The documentation does not say anything about being able to pass
session
to
  
 
the heatclient but the others seem to work. I just want to know if this
is
  
 
intended/expected behavior or not.
  
 
   
  
 
-Jason
  
 
   
  
 
   
  
 
   
  
 
__
  
 
OpenStack Development Mailing List (not for usage questions)
  
 
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
   
  
 
__
  
 
 OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient works with keystone sessions?

2015-05-08 Thread Jamie Lennox


- Original Message -
 From: Jay Reslock jresl...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, May 8, 2015 7:42:50 AM
 Subject: Re: [openstack-dev] [heat][python-heatclient] Does python-heatclient 
 works with keystone sessions?
 
 Thanks very much to both of you for your help!
 
 I was able to get to another error now about EndpointNotFound. I will
 troubleshoot more and review the bugs mentioned by Sergey.
 
 -Jason

It's nice to see people using sessions for this sort of script. Just as a 
pointer EndpointNotFound generally means that it couldn't find a url for the 
service you wanted in the service catalog. Have a look at the catalog you're 
getting and make sure the heat entry matches what it should, you may have to 
change the service_type or interface to match. 
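The failing lookup can be sketched like this (a toy catalog resolver with illustrative entry names; real catalogs also carry interface and region, which can equally cause a miss):

```python
class EndpointNotFound(Exception):
    pass


# A toy service catalog resembling what the token exchange returns.
CATALOG = [
    {'type': 'orchestration', 'name': 'heat',
     'url': 'http://10.0.0.1:8004/v1/%(tenant_id)s'},
    {'type': 'compute', 'name': 'nova',
     'url': 'http://10.0.0.1:8774/v2'},
]


def url_for(catalog, service_type):
    # EndpointNotFound means no catalog entry matched the requested
    # service_type, which is why adjusting service_type (or interface)
    # on the client is the usual fix.
    for entry in catalog:
        if entry['type'] == service_type:
            return entry['url']
    raise EndpointNotFound(service_type)
```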

 On Thu, May 7, 2015 at 5:34 PM Sergey Kraynev  skray...@mirantis.com 
 wrote:
 
 
 
 Hi Jay.
 
 AFAIK, it works, but we can have some minor issues. There are several patches on
 review to improve it:
 
 https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:improve-sessionclient,n,z
 
 Also as I remember we really had bug mentioned by you, but fix was merged.
 Please look:
 https://review.openstack.org/#/c/160431/1
 https://bugs.launchpad.net/python-heatclient/+bug/1427310
 
 Which version of client do you use? Try to use code from master, it should
 work.
 Also one note: the best place for such questions is
 openst...@lists.openstack.org or http://ask.openstack.org/ . And of course
 channel #heat in IRC.
 
 Regards,
 Sergey.
 
 On 7 May 2015 at 23:43, Jay Reslock  jresl...@gmail.com  wrote:
 
 
 
 Hi,
 This is my first mail to the group. I hope I set the subject correctly and
 that this hasn't been asked already. I searched archives and did not see
 this question asked or answered previously.
 
 I am working on a client thing that uses the python-keystoneclient and
 python-heatclient api bindings to set up an authenticated session and then
 use that session to talk to the heat service. This doesn't work for heat but
 does work for other services such as nova and sahara. Is this because
 sessions aren't supported in the heatclient api yet?
 
 sample code:
 
 https://gist.github.com/jreslock/a525abdcce53ca0492a7
 
 I'm using fabric to define tasks so I can call them via another tool. When I
 run the task I get:
 
 TypeError: Client() takes at least 1 argument (0 given)
 
 The documentation does not say anything about being able to pass session to
 the heatclient but the others seem to work. I just want to know if this is
 intended/expected behavior or not.
 
 -Jason
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC][Keystone] Rehashing the Pecan/Falcon/other WSGI debate

2015-05-01 Thread Jamie Lennox
Hi all, 

At around the time Barbican was applying for incubation there was a
discussion about supported WSGI frameworks. From memory the decision
at the time was that Pecan was to be the only supported framework and
that for incubation Barbican had to convert to Pecan (from Falcon).

Keystone is looking to ditch our crusty old, home-grown wsgi layer for
an external framework and both Pecan and Falcon are in global
requirements. 

In the experimenting I've done Pecan provides a lot of stuff we don't
need and some that just gets in the way. To call out a few:
 * the rendering engine really doesn't make sense for us, for APIs, and
where we are often returning different data (not just different views of
data) based on Content-Type. 
 * The security enforcement within Pecan does not really mesh with how
we enforce policy, nor does the way we build controller objects per
resource. It seems we will have to build this for ourselves on top of
Pecan.

and there are just various other niggles. 

THIS IS NOT SUPPOSED TO START A DEBATE ON THE VIRTUES OF EACH FRAMEWORK.

Everything I've found can be dealt with and pecan will be a vast
improvement over what we use now. I have also not written a POC with
Falcon to know that it will suit any better.

My question is: Does the ruling that Pecan is the only WSGI framework
for OpenStack stand? I don't want to have 100s of frameworks in the
global requirements, but given Falcon is already there, if a POC
determines that Falcon is a better fit for keystone can we use it? 


Thanks, 

Jamie 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ERROR: openstackclient.shell Exception raised: python-keystoneclient 1.4.0

2015-04-26 Thread Jamie Lennox
Rick, 

This is a problem of dependency resolution rather than an issue of 
keystoneclient specifically. You can see that glanceclient has a cap on 
keystoneclient that the installed version doesn't meet. 

Dependency resolution has always been a problem but is raising its head again 
recently. If you are seeing it and infra isn't it's probably because you are 
installing specific versions of libraries that don't play well together. I'd go 
and talk to folks in #openstack-infra.


Jamie
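
As a small illustration of the cap semantics behind the error above, here is a sketch of how a version cap excludes an installed release. The helper names are invented, and the requirement string is reconstructed from Jamie's remark that "glanceclient has a cap on keystoneclient":

```python
# Toy version-specifier check; mirrors a cap like
# 'python-keystoneclient<1.4.0,>=1.1.0' in glanceclient's requirements.
def parse(version):
    # "1.3.0" -> (1, 3, 0) so tuples compare numerically
    return tuple(int(part) for part in version.split("."))

def satisfies(installed, minimum, cap):
    # installed must be at least `minimum` and strictly below the `cap`
    return parse(minimum) <= parse(installed) < parse(cap)

print(satisfies("1.3.0", "1.1.0", "1.4.0"))  # True
print(satisfies("1.4.0", "1.1.0", "1.4.0"))  # False - the conflict devstack hit
```

The installed 1.4.0 fails the strict `< 1.4.0` cap, which is exactly the VersionConflict pkg_resources raised in the log below.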

- Original Message -
 From: Rick Chen rick.c...@prophetstor.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, 24 April, 2015 4:27:27 PM
 Subject: [openstack-dev] ERROR: openstackclient.shell Exception raised:   
 python-keystoneclient 1.4.0
 
 
 
 HI All:
 
 Recently, our local machine has often failed to launch devstack.
 
 Has anyone hit the issue below on an OpenStack third-party CI devstack
 machine? Any workaround or solution to fix it?
 
 2015-04-24 06:00:18.352 | ERROR: openstackclient.shell Exception raised:
 (python-keystoneclient 1.4.0 (/opt/stack/new/python-keystoneclient),
 Requirement.parse('python-keystoneclient<1.4.0,>=1.1.0'),
 set(['python-glanceclient']))
 
 
 
 pip list | grep keystone
 
 keystone (2015.2.dev88, /opt/stack/new/keystone)
 
 keystonemiddleware (1.5.0)
 
 python-keystoneclient (1.4.0, /opt/stack/new/python-keystoneclient)
 
 
 
 



Re: [openstack-dev] [Neutron][Keystone] [Nova] How to validate tenant-id for admin operation

2015-04-26 Thread Jamie Lennox


- Original Message -
 From: German Eichberger german.eichber...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 25 April, 2015 8:55:23 AM
 Subject: Re: [openstack-dev] [Neutron][Keystone] [Nova] How to validate 
  tenant-id for admin operation
 
 
 
 Hi Brant,
 
 
 
 Sorry for being confusing earlier. We have operations an
 administrator/operator is performing on behalf of a user, e.g. “Create
 Loadbalancer X for user tenant-id 123”. Now we are not checking the
 tenant-id and are wondering how to make the operation more robust with
 keystone’s help.
 
 
 
 Thanks,
 
 German

Not to speak for Brant, but I think the confusion here is why you are doing 
this. From my perspective you should never be in a position where the admin has 
to enter a raw project id like that.

I think the problem here is the assumption of an all-powerful admin user, and 
I'd encourage you to change your policy files to scrap that idea. A role is 
granted on a project and this project is mentioned in the token. If there is 
some role that is provided that lets you perform operations outside of the 
project id specified in that token please file a bug and I'd consider marking 
it a security issue. 

The X-Service-Token concept will allow for the combination of a user token and 
a service token to authenticate an action, so the user can ask for an action to 
be performed on its behalf via a service - in which case the user's 
project id is communicated via the token. 

In lieu of all this the quick answer is no. If you are taking a project id from 
the command line and you want to validate its existence then you have to ask 
keystone, but you should always be getting this information from a token. 

Jamie
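
The fallback Jamie describes - if you must accept a raw project id, the only way to check it exists is to ask keystone - could be sketched as below. `client` stands in for a `keystoneclient.v3.client.Client` built from an admin-scoped session; the function name and error handling are illustrative, not neutron's actual code:

```python
# Hedged sketch: validate a caller-supplied project id against keystone.
def project_exists(client, project_id):
    try:
        # keystoneclient's v3 projects manager raises NotFound (an
        # HTTP 404 subclass) when the id was mistyped
        client.projects.get(project_id)
        return True
    except Exception:
        return False
```

Note this costs the extra round trip the original question wanted to avoid, which is why the preferred answer above is to take the project id from a validated token instead.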
 
 
 From: Brant Knudson [mailto:b...@acm.org]
 Sent: Friday, April 24, 2015 11:43 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][Keystone] [Nova] How to validate
 teanant-id for admin operation
 
 
 
 
 
 
 
 
 
 
 
 On Fri, Apr 24, 2015 at 11:53 AM, Eichberger, German 
 german.eichber...@hp.com  wrote:
 
 All,
 
 Following up from the last Neutron meeting:
 
 If Neutron is performing an operation as an admin on behalf of a user, that
 user's tenant-id (or project-id) isn't validated - in particular an admin
 can mistype and create objects on behalf of non-existent users. I am
 wondering how other projects (e.g. Nova) deal with that and if there is some
 API support in keystone to save us a round trip (e.g. authenticate admin +
 validate an additional user-id).
 
 
 
 
 
 Not too long ago we got support in the auth_token middleware for a service
 token in addition to the user's token. The user token is sent in the
 x-auth-token header and the service token is sent in the x-service-token header,
 and then fields from both tokens are available to the application (e.g., the
 user project is in HTTP_X_PROJECT_ID and the service token roles are in
 HTTP_X_SERVICE_ROLES). So you could potentially have a policy rule on the
 server for the operation that required the service token to have the
 'service' role, and what neutron could do is send the original user token in
 x-auth-token and send its own token as the service token. This seems to be
 what you're asking for here.
 
 
 - Brant
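
The pattern Brant describes could be sketched as follows. The `HTTP_X_PROJECT_ID` and `HTTP_X_SERVICE_ROLES` environ keys are the ones auth_token middleware exposes; the policy-check function itself is invented for illustration:

```python
# Minimal sketch of a service-token policy check. The middleware has
# already validated both tokens; here we only inspect what it exposed
# in the WSGI environ.
def service_token_allows(environ, required_role="service"):
    # roles from the *service* token (e.g. neutron's own token)
    roles = environ.get("HTTP_X_SERVICE_ROLES", "").split(",")
    return required_role in roles

env = {
    "HTTP_X_PROJECT_ID": "123",        # project from the user's token
    "HTTP_X_SERVICE_ROLES": "service", # roles from the service's token
}
print(service_token_allows(env))  # True
```

A real deployment would express this as a policy rule rather than ad-hoc code, but the header plumbing is the part the thread is about.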
 
 
 
 
 
 
 
 Thanks,
 German
 
 
 
 
 
 



Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new mechanisms (websso, kerberos, k2k etc)

2015-03-17 Thread Jamie Lennox


- Original Message -
 From: Douglas Fish drf...@us.ibm.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 18, 2015 2:07:56 AM
 Subject: Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new 
 mechanisms (websso, kerberos, k2k etc)
 
 
 Steve Martinelli steve...@ca.ibm.com wrote on 03/17/2015 12:52:33 AM:
 
  From: Steve Martinelli steve...@ca.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 03/17/2015 12:55 AM
  Subject: Re: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth
  for new mechanisms (websso, kerberos, k2k etc)
 
  I like proposal 1 better, but only because I am already familiar
  with how plugins interact with keystoneclient. The websso work is (i
  think) pretty close to getting merged, and could easily be tweaked
  to use a token plugin (when it's ready). I think the same can be
  said for our k2k issue, but I'm not sure.
 
  Thanks,
 
  Steve Martinelli
  OpenStack Keystone Core
 
  Jamie Lennox jamielen...@redhat.com wrote on 03/15/2015 10:52:31 PM:
 
   From: Jamie Lennox jamielen...@redhat.com
   To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
   Date: 03/15/2015 10:59 PM
   Subject: [openstack-dev] [Horizon][DOA] Extending OpenStack Auth for
   new mechanisms (websso, kerberos, k2k etc)
  
   Hi All,
  
   Please note when reading this that I have no real knowledge of django
 so
   it is very possible I'm overlooking something obvious.
  
   ### Issue
  
   Django OpenStack Auth (DOA) has always been tightly coupled to the
   notion of a username and password.
   As keystone progresses and new authentication mechanisms become
   available to the project we need a way to extend DOA to keep up with
 it.
   However the basic processes of DOA are going to be much the same, it
   still needs to fetch an unscoped token, list available projects and
   handle rescoping and this is too much for every extension mechanism to
   reimplement.
   There is also a fairly tight coupling between how DOA populates the
   request and sets up a User object that we don't really want to reuse.
  
   There are a couple of authentication mechanisms that are currently
 being
   proposed that are requiring this ability immediately.
  
   * websso: https://review.openstack.org/136178
   * kerberos: https://review.openstack.org/#/c/153910/ (patchset 2).
  
   and to a certain extent:
  
   * k2k: https://review.openstack.org/159910
  
   Enabling and using these different authentication mechanisms is going
 to
   need to be configured by an admin at deployment time.
  
   Given that we want to share the basic scoping/rescoping logic between
   these projects I can essentially see two ways to enable this.
  
   ### Proposal 1 - Add plugins to DOA
  
   The easiest way I can see of doing this is to add a plugin model to the
   existing DOA structure.
   The initial differentiating component for all these mechanisms is the
   retrieval of an unscoped token.
  
   We can take the existing DOA structure and simply make that part
   pluggable and add interfaces to that plugin as required in the future.
  
   Review: https://review.openstack.org/#/c/153910/
  
   Pros:
  
   * Fairly simple and extensible as required.
   * Small plugin interface.
  
   Cons:
  
   * Ignores that django already has an authentication plugin system.
   * Doesn't work well for adding views that run these backends.
  
   ### Proposal 2 - Make the existing DOA subclassable.
  
   The next method is to essentially re-use the existing Django
   authentication module architecture.
   We can extract into a base class all the current logic around token
   handling and develop new modules around that.
  
   Review: https://review.openstack.org/#/c/164071/
   An example of using it:
   https://github.com/jamielennox/django-openstack-auth-kerberos
  
   Pros:
  
   * Reusing Django concepts.
   * Seems easier to handle adding of views.
  
   Cons:
  
   * DOA has to start worrying about public interfaces.
  
   ### Required reviews:
  
   Either way I think these two reviews are going to be required to make
   this work:
  
   * Redirect to login page: https://review.openstack.org/#/c/153174/ - If
   we want apache modules to start handling parts of auth we need to mount
   those at dedicated paths, we can't put kerberos login at /
   * Additional auth urls: https://review.openstack.org/#/c/164068/ - We
   need to register additional views so that we can handle the output of
   these apache modules and call the correct authenticate() parameters.
  
   ### Conclusion
  
   Essentially either of these could work and both will require some
   tweaking and extending to be useful in all situations.
  
   However I am kind of passing through on DOA and Django and would like
   someone with more experience in the field to comment on what feels more

Re: [openstack-dev] [keystone][congress][group-policy] Fetching policy from a remote source

2015-03-16 Thread Jamie Lennox


- Original Message -
 From: Adam Young ayo...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, March 17, 2015 8:59:17 AM
 Subject: Re: [openstack-dev] [keystone][congress][group-policy] Fetching 
 policy from a remote source
 
 On 03/16/2015 03:24 PM, Doug Hellmann wrote:
  Excerpts from Adam Young's message of 2015-03-16 14:17:16 -0400:
  On 03/16/2015 01:45 PM, Doug Hellmann wrote:
  All of these are reasons we have so far resisted building a service to
  deploy updates to oslo.config's input files, and rely on provisioning
  tools to update them.
 
  Have we consider using normal provisioning tools for pushing out
  changes to policy files, and having the policy library look at the
  timestamp of the file(s) to decide if it needs to re-read them
  before evaluating a rule? Maybe we wouldn't always scan the file
  system, but wait for some sort of signal that the scan needs to be
  done.
  I like this last idea.  The trigger needs to be app-specific, I think.
  I was thinking of a callback to be triggered by 'kill -HUP $pid'. We can
  make a little framework for registering callbacks on signals (if there
  isn't something like that already) to allow multiple refresh actions on
  the signal.
 
  Doug
  I think policy files are not config files.  We've treated them as such
  in the past as they are not dynamic, but I don't think I want to *have*
  to do this:

(sorry about the bad threading, zimbra...) 

To pick up on that policy files are not config files: my last impression of 
this from Paris was that keystone was working towards having more fine-grained 
roles, eventually pushing these roles to essentially being capabilities. If we 
get to this point - aren't the policy files just config files? I like the 
automatic update approach but at what frequency do we anticipate them being 
updated, and what is a reasonable amount of time for the change to propagate? 



 
  1.  Change policy in keystone (somehow)
  2.  Tell Puppet that there is a new file
  3.  Have puppet pick up the new file and sync it to the servers.
  Right, I wouldn't do that. I would modify the file in my puppet
  repository and then push that out all at once. Keystone would receive
  the policy files the same way as the other services.
 
 
  Although I would say that we should make it easy to support this workflow.
 
  For one thing, it assumes that all of the consumers are talking to the
  same config management system, which is only true for a subset of the
  services.
  I'm not sure what you mean here. Do you mean that in a given deployment
  you would expect some services to be configured by puppet and others to
  be configured a different way?
 
 I mean that there could be a puppet server for the core infrastructure,
 and another one (or Ansible or Chef) for Hadoop on top of that.  There
 is no one puppet master that we can assume to be controlling all of the
 servers.  They might be run by different organizations.
 
 
  I see a case for doing this same kind of management for Many of the
  files Keystone produces.  Service catalog is the most obvious candidate.
  Yes, that's another good example, although in that case we do already
  have an API that lets a cloud consumer access the service catalog data
  so it might be viewed as different from the policy rules or oslo.config
  files (the latter at least typically have private data we wouldn't want
  to share through an API).
 
 
 We don't have a single, monolithic Service catalog (anymore) and, with
 endpoint filtering, we expect multiple service catalogs to be the norm.
 
 I want to pursue the idea of git style file identification here, (hash
 of the file as identifier) as that works to split the service catalog
 from the token, and still have multiple service catalogs, but ensure
 that they are correctly linked in remote systems.   It doesn't have to
 be hash, but it makes the process much more verifiable.  This is also
 true for policy files;  there can be more than one active at any given
 point in time, fetchable by remote identifier.  Even as we push towards
 common rules for defining the RBAC section, we have to be aware that
 different endpoints might need different policy files.
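 
 The git-style identification suggested above could be sketched as: identify
 a policy file (or service catalog) by a hash of its content, so a remote
 endpoint can verify it fetched the right version. The function name is
 invented for illustration:

```python
# Content-addressed identifier for a policy file or service catalog,
# in the spirit of git object ids.
import hashlib

def policy_id(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

catalog = b'{"endpoints": []}'
# same content always yields the same id; any change yields a new one
assert policy_id(catalog) == policy_id(catalog)
assert policy_id(catalog) != policy_id(b'{"endpoints": ["glance"]}')
```

 This makes "are we both using the same policy file?" a cheap comparison of
 two short strings, which is what makes the verification property attractive.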
 
 
 
  If we could have a workflow for managing : PKI certs, Federatiomn
  mappings and  (Group only?) Role Assignments we could decentralize token
  validation.
 
  When doing the PKI tokens, we discussed this, and ended up with a
  "fetch first" policy toward the certs.
 
  Puppet does not know how to get a token, so it can't call the keystone
  token-protected APIs to fetch new data.  What forms of authentication do
  the config managment systems support?  Is this an argument for tokenless
  operations against Keystone?
  In my scenario puppet (or chef or whatever) is the source of truth for
  the configuration file, not one of our services. So there's no need for
  the configuration management tool to talk to any of our services beyond
  sending the HUP signal 
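
The two reload triggers discussed in this thread - re-reading the policy file when its mtime changes, and forcing a re-read on `kill -HUP $pid` - could be sketched together like this. All names are invented; this is not oslo code:

```python
# Rough sketch: cache parsed policy rules, re-read when the file's
# mtime changes, and let SIGHUP invalidate the cache.
import json
import os
import signal

class PolicyRules:
    def __init__(self, path):
        self.path = path
        self._mtime = None   # mtime of the copy we last parsed
        self._rules = {}

    def rules(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            # file changed (e.g. puppet pushed a new copy): re-read
            # before evaluating any rule
            with open(self.path) as f:
                self._rules = json.load(f)
            self._mtime = mtime
        return self._rules

    def install_sighup(self):
        # 'kill -HUP $pid' clears the cached mtime so the next
        # rules() call re-reads the file unconditionally
        signal.signal(signal.SIGHUP, lambda *_: setattr(self, "_mtime", None))
```

This keeps the provisioning tool as the source of truth for the file, exactly as Doug describes, while the service only ever reacts to local changes.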

[openstack-dev] [Horizon][DOA] Extending OpenStack Auth for new mechanisms (websso, kerberos, k2k etc)

2015-03-15 Thread Jamie Lennox
Hi All, 

Please note when reading this that I have no real knowledge of django so
it is very possible I'm overlooking something obvious.

### Issue

Django OpenStack Auth (DOA) has always been tightly coupled to the
notion of a username and password.
As keystone progresses and new authentication mechanisms become
available to the project we need a way to extend DOA to keep up with it.
However the basic processes of DOA are going to be much the same, it
still needs to fetch an unscoped token, list available projects and
handle rescoping and this is too much for every extension mechanism to
reimplement.
There is also a fairly tight coupling between how DOA populates the
request and sets up a User object that we don't really want to reuse.

There are a couple of authentication mechanisms that are currently being
proposed that are requiring this ability immediately.

* websso: https://review.openstack.org/136178
* kerberos: https://review.openstack.org/#/c/153910/ (patchset 2).

and to a certain extent:

* k2k: https://review.openstack.org/159910

Enabling and using these different authentication mechanisms is going to
need to be configured by an admin at deployment time.

Given that we want to share the basic scoping/rescoping logic between
these projects I can essentially see two ways to enable this.

### Proposal 1 - Add plugins to DOA

The easiest way I can see of doing this is to add a plugin model to the
existing DOA structure.
The initial differentiating component for all these mechanisms is the
retrieval of an unscoped token.

We can take the existing DOA structure and simply make that part
pluggable and add interfaces to that plugin as required in the future.

Review: https://review.openstack.org/#/c/153910/

Pros:

* Fairly simple and extensible as required.
* Small plugin interface.

Cons:

* Ignores that django already has an authentication plugin system.
* Doesn't work well for adding views that run these backends.
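
Proposal 1 could be sketched roughly as follows. All class and method names here are invented for illustration and are not the code in the linked review; the point is only that the unscoped-token step varies per mechanism while DOA keeps the shared scoping/rescoping logic:

```python
# Sketch of a pluggable "get an unscoped token" step for DOA.
class UnscopedAuth(object):
    """Base plugin: each auth mechanism provides an unscoped token."""
    def get_unscoped_token(self, request):
        raise NotImplementedError

class PasswordAuth(UnscopedAuth):
    def get_unscoped_token(self, request):
        # would POST username/password from the login form to keystone
        return "unscoped-token-via-password"

class KerberosAuth(UnscopedAuth):
    def get_unscoped_token(self, request):
        # would rely on REMOTE_USER set by the web server's kerberos module
        return "unscoped-token-via-kerberos"

def login(plugin, request):
    token = plugin.get_unscoped_token(request)
    # ...shared DOA logic stays in one place: list projects, rescope
    # the token, populate the request's User object...
    return token
```

The small surface area of the plugin interface is exactly the "pro" listed above; the "con" is that this sits beside, rather than inside, Django's own authentication backend system.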

### Proposal 2 - Make the existing DOA subclassable.

The next method is to essentially re-use the existing Django
authentication module architecture.
We can extract into a base class all the current logic around token
handling and develop new modules around that.

Review: https://review.openstack.org/#/c/164071/
An example of using it:
https://github.com/jamielennox/django-openstack-auth-kerberos

Pros:

* Reusing Django concepts.
* Seems easier to handle adding of views.

Cons:

* DOA has to start worrying about public interfaces.

### Required reviews:

Either way I think these two reviews are going to be required to make
this work:

* Redirect to login page: https://review.openstack.org/#/c/153174/ - If
we want apache modules to start handling parts of auth we need to mount
those at dedicated paths, we can't put kerberos login at /
* Additional auth urls: https://review.openstack.org/#/c/164068/ - We
need to register additional views so that we can handle the output of
these apache modules and call the correct authenticate() parameters.

### Conclusion

Essentially either of these could work and both will require some
tweaking and extending to be useful in all situations.

However I am kind of passing through on DOA and Django and would like
someone with more experience in the field to comment on what feels more
correct or any issues they see arising with the different approaches.
Either way I think a clear approach on extensibility would be good
before committing to any of the kerberos, websso and k2k definitions.


Please let me know an opinion as there are multiple patches that will
depend upon it.


Thanks,

Jamie





Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-26 Thread Jamie Lennox


- Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, February 19, 2015 3:24:03 AM
 Subject: Re: [openstack-dev] [Keystone] [devstack] About _member_ role
 
 Analyzing Horizon code I can confirm that the existence of _member_ role
 is required, so the commit https://review.openstack.org/#/c/150667/
 introduced the bug in devstack. More details and a fix proposal in my
 change submission: https://review.openstack.org/#/c/156527/
 
 On 02/18/15 10:04, Pasquale Porreca wrote:
   I saw 2 different bug reports that the Devstack dashboard gives an error when
   trying to manage projects:
  https://bugs.launchpad.net/devstack/+bug/1421616 and
  https://bugs.launchpad.net/horizon/+bug/1421999
  In my devstack environment projects were working just fine, so I tried a
  fresh installation to see if I could reproduce the bug and I could
  confirm that actually the bug is present in current devstack deployment.
   Both reports point to the lack of the _member_ role as the cause of this
   error, so I just tried to manually (i.e. via CLI) add a _member_ role and I
   verified that just having it - even if not assigned to any user - fixes the
   project management in Horizon.
 
  I didn't deeply analyze yet the root cause of this, but this behaviour
  seemed quite weird, this is the reason I sent this mail to dev list.
   Your explanation somewhat confirmed my doubts: I presume that adding a
   _member_ role is merely a workaround and the real bug is somewhere else
   - most likely in Horizon code.

Ok, so I dug into this today. The problem is that the 'member_role_name' that 
is set in keystone CONF is only being read that first time when it creates the 
default member role if not already present. At all other times keystone works 
with the role id set by 'member_role_id' which has a default value. So even 
though horizon is looking and finding a member_role_name it doesn't match up 
with what keystone is doing when it uses member_role_id. 
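
The mismatch described above can be shown with a toy reproduction (this is not keystone's real code; the config values and the generated id are illustrative): the role is *created* by configured name on first run, but later *looked up* by a separately configured id, so the two can silently disagree.

```python
# Toy model of bug 1426184: member_role_name is honoured only at
# creation time, while member_role_id is used everywhere else.
CONF = {
    "member_role_name": "_member_",
    "member_role_id": "9fe2ff9ee4384b1894a90878d3e92bab",  # a default id
}

roles = {}  # role id -> role name

def ensure_default_role():
    # first-run behaviour: create the role by *name*, under a fresh id
    if CONF["member_role_name"] not in roles.values():
        roles["generated-id"] = CONF["member_role_name"]

def grant_membership():
    # later code paths use member_role_id, which was never created
    return CONF["member_role_id"] in roles

ensure_default_role()
print(grant_membership())  # False - the name exists but the id never matches
```

Setting both member_role_name and member_role_id consistently in the config file, as suggested below, is what keeps the two lookups in agreement.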

IMO this is wrong and i filed a bug against keystone: 
https://bugs.launchpad.net/keystone/+bug/1426184

In the mean time it works if you add both the member_role_name and 
member_role_id to the config file. Unfortunately adding an ID means you need to 
get the value from keystone and then set it into keystone's own config file, so 
restarting keystone. This is similar to a review I had for policy so I modified 
that and put up my own review https://review.openstack.org/#/c/159690

Given the keystone restart I don't know if it's cleaner, however that's the way 
i know to solve this 'properly'. 


Jamie

  On 02/17/15 21:01, Jamie Lennox wrote:
  - Original Message -
  From: Pasquale Porreca pasquale.porr...@dektech.com.au
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Tuesday, 17 February, 2015 9:07:14 PM
  Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role
 
   I proposed a fix for a bug in devstack
   https://review.openstack.org/#/c/156527/ caused by the fact that the role
   _member_ was no longer created due to a recent change.
 
   But why is the existence of the _member_ role necessary, even if it is not
   necessary to be used? Is this a known/wanted feature or a bug by itself?
  So the way to be a 'member' of a project so that you can get a token
  scoped to that project is to have a role defined on that project.
  The way we would handle that from keystone for default_projects is to
  create a default role _member_ which had no permissions attached to it,
  but by assigning it to the user on the project we granted membership of
  that project.
  If the user has any other roles on the project then the _member_ role is
  essentially ignored.
 
  In that devstack patch I removed the default project because we want our
  users to explicitly ask for the project they want to be scoped to.
  This patch shouldn't have caused any issues though because in each of
  those cases the user is immediately granted a different role on the
  project - therefore having 'membership'.
 
  Creating the _member_ role manually won't cause any problems, but what
  issue are you seeing where you need it?
 
 
  Jamie
 
 
  --
  Pasquale Porreca
 
  DEK Technologies
  Via dei Castelli Romani, 22
  00040 Pomezia (Roma)
 
  Mobile +39 3394823805
  Skype paskporr
 
 

Re: [openstack-dev] Kerberos in OpenStack

2015-02-24 Thread Jamie Lennox
I replied to almost exactly this email off-list and so thought i would copy my 
reply to -dev.


- Original Message -
 From: Jamie Lennox jamielen...@redhat.com
 To: Sanket Lawangare sanket.lawang...@gmail.com
 Sent: Wednesday, February 25, 2015 6:39:14 AM
 Subject: Re: Kerberos in OpenStack
 
 
 
 - Original Message -
  From: Sanket Lawangare sanket.lawang...@gmail.com
  To: jamielen...@redhat.com
  Sent: Wednesday, February 25, 2015 5:43:38 AM
  Subject: Kerberos in OpenStack
  
  Hello Mr. Jamie Lennox,
  
  My name is Sanket Lawangare. I am a graduate Student studying at The
  University of Texas, at San Antonio. For my Master’s Thesis I am
 working on
  the Identity component of OpenStack. My research is to investigate
 external
  authentication with Identity(keystone) using Kerberos. I am working
 with
  ICS- Institute for Cyber Security at UTSA under Mr. Farhan Patwa.
 
 Hi Sanket, we are working quite hard on kerberos at the moment so it's
 nice to have you on board.
 Make sure you hang around in #openstack-keystone on Freenode; if I'm not
 around (I'm based in Sydney so timezones clash) Adam Young (ayoung) is
 up to date on all this.
 
  Based on reading your blogs and my understanding of Kerberos I have come up
  with a figure explaining the possible interaction of the KDC with the
  OpenStack client, Keystone and the OpenStack services (Nova, Cinder,
  Swift...). I am trying to understand the working of Kerberos in OpenStack.
  
  Please click this link for viewing the figure:
 
 https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing
  
  P.S. - [The steps in this figure are self explanatory the basic
  understanding of Kerberos is expected]
  
  Based on the figure I had a couple of questions:
  
  
  1. Is Nova or other services registered with the KDC?
 
 No, not at this time. OpenStack does all its AuthN/AuthZ for
 non-keystone services via a token. At this time we purely capture the
 POST /v3/auth/tokens route, which issues a token with kerberos, and use
 REMOTE_USER as proof of AuthN rather than needing a user/pass. After
 this point OpenStack operates using a token as per normal.
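 
 The flow described in that answer could be sketched as below. This is
 illustrative only, not keystone's actual code: the web server's kerberos
 module (e.g. mod_auth_kerb) completes the Negotiate exchange and sets
 REMOTE_USER, and keystone's external auth path trusts it as proof of AuthN
 before issuing a normal token.

```python
# Sketch: keystone-side handling of an externally authenticated request.
def authenticate_external(environ):
    username = environ.get("REMOTE_USER")
    if not username:
        # the client never completed the 401 Negotiate exchange
        raise PermissionError("kerberos authentication required")
    # from here keystone issues an ordinary token; no kerberos data
    # is carried beyond this point
    return {"methods": ["external"], "user": username}

token = authenticate_external({"REMOTE_USER": "sanket@EXAMPLE.COM"})
print(token["user"])  # sanket@EXAMPLE.COM
```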
 
  
  2. What does keystone do with the Kerberos ticket/credentials? Does
     Keystone authenticate the users and give them direct access to other
     services such as Nova, Swift etc.?
 
 Related to the first question, the OpenStack user information and their
 roles are encoded into the token data which is then validated by
 auth_token middleware on each of the services. After the initial AuthN
 kerberos request we do not currently do any additional kerberos auth.
 
  
  3. After receiving the ticket from the KDC, does keystone embed some
     kerberos credential information in the token?
 
 Keystone will set 'method' in the token field to reflect the method that
 was used to authenticate the token - however I can't remember if it sets
 the method to 'kerberos' or 'external' for kerberos auth.
 
  
  4. What information does the service (e.g. Nova) see in the ticket and
     the token (does the token have some kerberos info or some customized
     info inside it)?
 
 No, this information is completely hidden from the other services.
  
  
  If you could share your insights and guide me on the interaction between
  these components I would really appreciate it. Thank you for your time.
 
 So those answers are pretty short in that kerberos is really only used
 for that initial AuthN contact, after which keystone issues a token and
 everything else works without kerberos. there is an advantage to this in
 that you don't have to do the 401 Negotiate dance on every request but
 it does lack the security of kerberos (tokens are bearer tokens and can
 be intercepted and used).
 
 I think what you are looking for is the 'token binding' feature. This is
 actually one of the first things I worked on in keystone, and we were
 simply not ready for it at the time.
 
 There is the functionality in the tokens that make it so that a token
 can only be used in conjunction with another form of authentication and
 the expectation we had was that kerberos would be the main candidate,
 followed by SSL client certs. It means that if you used a valid token
 without also presenting the appropriate kerberos ticket authentication
 would fail.
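 
 The token-binding check described above could be sketched as follows. The
 'bind' entry mirrors the shape keystone used for bind data, but the
 validation function itself is illustrative:

```python
# Sketch: a bound token validates only when the matching kerberos
# principal is presented alongside it.
def validate_binding(token, remote_user=None):
    bind = token.get("bind", {})
    if "kerberos" not in bind:
        return True  # unbound token: ordinary bearer-token semantics
    # must present the same kerberos principal the token was bound to
    return bind["kerberos"] == remote_user

bound = {"bind": {"kerberos": "alice@EXAMPLE.COM"}}
print(validate_binding(bound, "alice@EXAMPLE.COM"))  # True
print(validate_binding(bound))  # False - an intercepted token alone is useless
```

 The second case is exactly the security win - and also exactly what breaks
 service-to-service token reuse, as the next paragraph explains.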
 
 The reason that this never took off is due to the way service to service
 communication happens. When you ask nova to boot you a virtual machine
 it makes a number of calls to the other services, glance to get the
 image, cinder for a block store, neutron for a network connection etc.
 To do this it reuses the user token, so nova will forward the token you
 give it to each of those services - and so if you bind it to the
 kerberos ticket then the same feature that gives security essentially
 stops that sharing and OpenStack stops working.
 
 Now, there is an effort underway that when

Re: [openstack-dev] [Keystone] [devstack] About _member_ role

2015-02-17 Thread Jamie Lennox


- Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 9:07:14 PM
 Subject: [openstack-dev]  [Keystone] [devstack] About _member_ role
 
 I proposed a fix for a bug in devstack
 https://review.openstack.org/#/c/156527/ caused by the fact that the role
 _member_ was no longer created due to a recent change.
 
 But why is the existence of the _member_ role necessary, even if it is not
 necessary to be used? Is this a known/wanted feature or a bug by itself?

So the way to be a 'member' of a project so that you can get a token scoped to 
that project is to have a role defined on that project. 
The way we would handle that from keystone for default_projects is to create a 
default role _member_ which had no permissions attached to it, but by assigning 
it to the user on the project we granted membership of that project.
If the user has any other roles on the project then the _member_ role is 
essentially ignored. 

In that devstack patch I removed the default project because we want our users 
to explicitly ask for the project they want to be scoped to.
This patch shouldn't have caused any issues though because in each of those 
cases the user is immediately granted a different role on the project - 
therefore having 'membership'. 

Creating the _member_ role manually won't cause any problems, but what issue 
are you seeing where you need it?


Jamie


 --
 Pasquale Porreca
 
 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)
 
 Mobile +39 3394823805
 Skype paskporr
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work by design?

2015-02-16 Thread Jamie Lennox


- Original Message -
 From: Alexander Makarov amaka...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, 17 February, 2015 4:00:05 AM
 Subject: Re: [openstack-dev] [keystone] [trusts] [all] How trusts should work 
 by design?
 
 https://blueprints.launchpad.net/keystone/+spec/trust-scoped-re-authentication
 
 On Mon, Feb 16, 2015 at 7:57 PM, Alexander Makarov  amaka...@mirantis.com 
 wrote:
 
 
 
 We could soften this limitation a little by returning token client tries to
 authenticate with.
 I think we need to discuss it in community.
 
 On Mon, Feb 16, 2015 at 6:47 PM, Steven Hardy  sha...@redhat.com  wrote:
 
 
 On Mon, Feb 16, 2015 at 09:02:01PM +0600, Renat Akhmerov wrote:
  Yeah, clarification from keystone folks would be really helpful.
  If Nikolay's info is correct (I believe it is) then I actually don't
  understand why trusts are needed at all, they seem to be useless. My
  assumption is that they can be used only if we send requests directly to
  OpenStack services (w/o using clients) with a trust-scoped token included in
  the headers; that might work although I haven't checked that yet myself.
  So please help us understand which one of my following assumptions is
  correct?
  1. We don't understand what trusts are.
  2. We use them in a wrong way. (If yes, then what's the correct usage?)
 
 One or both of these seems likely, possibly combined with bugs in the
 clients where they try to get a new token instead of using the one you
 provide (this is a common pattern in the shell case, as the token is
 re-requested to get a service catalog).
 
 This provides some (heat specific) information which may help somewhat:
 
 http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-part-1-trusts.html
 
  3. The trust mechanism itself is in development and can't be used at this
  point.
 
 IME trusts work fine, Heat has been using them since Havana with few
 problems.
 
  4. OpenStack clients need to be changed in some way to somehow bypass
  this keystone limitation?
 
 AFAICS it's not a keystone limitation, the behavior you're seeing is
 expected, and the 403 mentioned by Nikolay is just trusts working as
 designed.
 
 The key thing from a client perspective is:
 
 1. If you pass a trust-scoped token into the client, you must not request
 another token, normally this means you must provide an endpoint as you
 can't run the normal auth code which retrieves the service catalog.
 
 2. If you could pass a trust ID in, with a non-trust-scoped token, or
 username/password, the above limitation is removed, but AFAIK none of the
 CLI interfaces support a trust ID yet.
 
 3. If you're using a trust scoped token, you cannot create another trust
 (unless you've enabled chained delegation, which only landed recently in
 keystone). This means, for example, that you can't create a heat stack
 with a trust scoped token (when heat is configured to use trusts), unless
 you use chained delegation, because we create a trust internally.
 
 When you understand these constraints, it's definitely possible to create a
 trust and use it for requests to other services, for example, here's how
 you could use a trust-scoped token to call heat:
 
  heat --os-auth-token <trust-scoped-token> --os-no-client-auth
  --heat-url http://192.168.0.4:8004/v1/<project-id> stack-list
 
 The pattern heat uses internally to work with trusts is:
 
 1. Use a trust_id and service user credentials to get a trust scoped token
 2. Pass the trust-scoped token into python clients for other projects,
 using the endpoint obtained during (1)
 
 This works fine, what you can't do is pass the trust scoped token in
 without explicitly defining the endpoint, because this triggers
 reauthentication, which as you've discovered, won't work.
 
 Hope that helps!
 
 Steve
 

So I think what you are seeing, and what heat has come up against in the past 
is a limitation of the various python-*clients and not a problem of the actual 
delegation mechanism from the keystone point of view. This is a result of the 
basic authentication code being copied around between clients and then not 
being kept updated since... probably Havana.

The good news is that if you go with the session based approach then you can 
share these tokens amongst clients without the hacks. 

The identity authentication plugins that keystoneclient offers (v2 and v3 api 
for Token and Password) both accept a trust_id to be scoped to and then the 
plugin can be shared amongst all the clients that support it (AFAIK that's 
almost everyone - the big exceptions being glance and swift). 

Here's an example (untested - off the top of my head):

from keystoneclient import session
from keystoneclient.auth.identity import v3
from cinderclient.v2 import client as c_client
from keystoneclient.v3 import client as k_client
from novaclient.v1_1 import client as n_client

# continuation reconstructed to match the pattern described above;
# values are placeholders
a = v3.Password(auth_url='http://keystone:5000/v3',
                username='user',
                password='password',
                user_domain_name='default',
                trust_id='<trust-id>')  # scope the plugin to the trust

s = session.Session(auth=a)

# the one plugin/session pair is then shared by all the clients:
cinder = c_client.Client(session=s)
keystone = k_client.Client(session=s)
nova = n_client.Client(session=s)

Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-10 Thread Jamie Lennox
+1

- Original Message -
 From: Guang Yee guang@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, 11 February, 2015 10:45:07 AM
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for the 
 Keystone Core Team
 
 
 
 +1!
 
 Guang
 
 From: Priti Desai [mailto:priti_de...@symantec.com]
 Sent: Tuesday, February 10, 2015 11:47 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for the
 Keystone Core Team
 
 +1
 
 Cheers
 Priti
 
 From: Brad Topol  bto...@us.ibm.com 
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Date: Tuesday, February 10, 2015 at 11:04 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for the
 Keystone Core Team
 
 
 
 
 
 +1! Marek has been an outstanding Keystone contributor and reviewer!
 
 --Brad
 
 
 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet: bto...@us.ibm.com
 Assistant: Kendra Witherspoon (919) 254-0680
 
 
 
 From: David Stanek  dsta...@dstanek.com 
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Date: 02/10/2015 12:58 PM
 Subject: Re: [openstack-dev] [Keystone] Proposing Marek Denis for the
 Keystone Core Team
 
 
 
 
 
 
 +1
 
 On Tue, Feb 10, 2015 at 12:51 PM, Morgan Fainberg  morgan.fainb...@gmail.com
  wrote:
 Hi everyone!
 
 I wanted to propose Marek Denis (marekd on IRC) as a new member of the
 Keystone Core team. Marek has been instrumental in the implementation of
 Federated Identity. His work on Keystone and first hand knowledge of the
 issues with extremely large OpenStack deployments has been a significant
 asset to the development team. Not only is Marek a strong developer working
 on key features being introduced to Keystone but has continued to set a high
 bar for any code being introduced / proposed against Keystone. I know that
 the entire team really values Marek’s opinion on what is going in to
 Keystone.
 
 Please respond with a +1 or -1 for adding Marek to the Keystone core team.
 This poll will remain open until Feb 13.
 
 --
 Morgan Fainberg
 
 
 
 
 
 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com
 
 
 


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Jamie Lennox
+1

- Original Message -
 From: Morgan Fainberg morgan.fainb...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, 19 January, 2015 5:11:02 AM
 Subject: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec   
 core
 
 Hello all,
 
 I would like to nominate Brad Topol for Keystone Spec core (core reviewer for
 Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a
 consistent voice advocating for well defined specifications, use of existing
 standards/technology, and ensuring the UX of all projects under the Keystone
 umbrella continue to improve. Brad brings to the table a significant amount
 of insight to the needs of the many types and sizes of OpenStack
 deployments, especially what real-world customers are demanding when
 integrating with the services. Brad is a core contributor on pycadf (also
 under the Keystone umbrella) and has consistently contributed code and
 reviews to the Keystone projects since the Grizzly release.
 
 Please vote with +1/-1 on adding Brad to as core to the Keystone Spec repo.
 Voting will remain open until Friday Jan 23.
 
 Cheers,
 Morgan Fainberg
 
 


Re: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest keystoneclient release 1.0)

2014-12-21 Thread Jamie Lennox


- Original Message -
 From: Doug Hellmann d...@doughellmann.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 20 December, 2014 12:07:59 AM
 Subject: Re: [openstack-dev] [all] Lets not assume everyone is using the  
 global `CONF` object (zaqar broken by latest
 keystoneclient release 1.0)
 
 
 On Dec 19, 2014, at 7:17 AM, Flavio Percoco fla...@redhat.com wrote:
 
  Greetings,
  
  DISCLAIMER: The following comments are neither finger pointing the
  author of this work nor the keystone team.

That was me. 

  RANT: We should really stop assuming everyone is using a global `CONF`
  object. Moreover, we should really stop using it, especially in
  libraries.
  
  That said, here's a gentle note for all of us:
  
  If I understood the flow of changes correctly, keystoneclient recently
  introduced an auth_section[0] option, which needs to be registered in
  order for it to work properly. In keystoneclient, a function[1] has been
  correctly added to register this option in a conf object.
  
  keystonemiddleware was then updated to support the above and a call to
  the register function[1] was then added to the `auth_token` module[2].
  
  The above, unfortunately, broke Zaqar's auth because Zaqar is not
  using the global `CONF` object which means it has to register
  keystonemiddleware's options itself. Since the option was registered
  in the global conf instead of the conf object passed to
  `AuthProtocol`, the new `auth_section` option is not being registered
  as keystoneclient expects.
  
  So, as a gentle reminder to everyone, please, lets not assume all
  projects are using the global `CONF` object and make sure all libraries
  provide a good way to register the required options. I think either
  secretly registering options or exposing a function to let consumers
  do so is fine.
  
  I hate complaining without helping to solve the problem so, here's[3] a
  workaround to provide a, hopefully, better way to do this. Note that
  this shouldn't be the definitive fix and that we also implemented a
  workaround in zaqar as well.

That will fix the immediate problem - and I assume it also fixes the issue 
where the oslo.config sample config generator doesn't pick up those options 
because they aren't registered there. 
 
 That change will fix the issue, but a better solution is to have the code in
 keystoneclient that wants the options handle the registration at runtime. It
 looks like keystoneclient/auth/conf.py:load_from_conf_options() is at least
 one place that’s needed, there may be others.

So auth_token middleware was never designed to work this way - but we can fix 
it to do so. The reason AuthProtocol.__init__ takes a conf dict (it's not an 
oslo.config.Cfg object) is to work with options being included via paste file, 
these are expected to be overrides of the global CONF object. Putting these 
options in paste.ini is something the keystone team has been advising against 
for a while now and my understanding from that was that we were close to 
everyone using the global CONF object. 

Do you know if there are any other projects managing CONF this way? I too 
dislike the global CONF; however, this is the only time I've seen a project 
work to avoid using it.

The problem with the proposed solution is that we are moving towards pluggable 
authentication in keystonemiddleware (and the clients). The auth_section option 
is the first called but the important option there is the auth_plugin option 
which specifies what sort of authentication to perform. The options that will 
be read/registered on CONF are then dependent on the plugin specified by 
auth_plugin. Handling this manually from Zaqar and having the correct options 
registered is going to be a pain.

Given that there are users, I'll have a look into making auth_token middleware 
actually accept a CONF object rather than require people to hack things through 
in the override dictionary. 
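The failure mode Flavio hit can be reduced to a small sketch (plain Python standing in for oslo.config; class and function names here are illustrative): a library that registers its options on a module-level global conf never touches the conf object an application built itself.

```python
# Minimal stand-in for oslo.config's ConfigOpts; illustration only.
class Conf:
    def __init__(self):
        self.opts = {}

    def register_opt(self, name, default=None):
        self.opts.setdefault(name, default)

GLOBAL_CONF = Conf()

def register_auth_opts_badly():
    # what effectively happened: registration hard-wired to the global
    GLOBAL_CONF.register_opt('auth_section')

def register_auth_opts(conf):
    # the fix: let the consumer pass in whichever conf object it uses
    conf.register_opt('auth_section')

app_conf = Conf()              # an app (like zaqar) managing its own conf
register_auth_opts_badly()
assert 'auth_section' not in app_conf.opts   # option is silently missing

register_auth_opts(app_conf)                 # explicit registration works
assert 'auth_section' in app_conf.opts
```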

Jamie



 Doug
 
  
  Cheers,
  Flavio
  
  [0]
  https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L20
  [1]
  https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L49
  [2]
  https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L356
  [3] https://review.openstack.org/143063
  
  --
  @flaper87
  Flavio Percoco
 


Re: [openstack-dev] [python-cinderclient] Return request ID to caller

2014-12-17 Thread Jamie Lennox


- Original Message -
 From: Abhijeet Malawade abhijeet.malaw...@nttdata.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, 12 December, 2014 3:54:04 PM
 Subject: [openstack-dev] [python-cinderclient] Return request ID to caller
 
 
 
 HI,
 
 
 
 I want your thoughts on blueprint 'Log Request ID Mappings’ for cross
 projects.
 
 BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings
 
 It will enable operators to get request id's mappings easily and will be
 useful in analysing logs effectively.
 
 
 
 For logging 'Request ID Mappings', client needs to return
 'x-openstack-request-id' to the caller.
 
 Currently python-cinderclient do not return 'x-openstack-request-id' back to
 the caller.
 
 
 
 As of now, I could think of below two solutions to return 'request-id' back
 from cinder-client to the caller.
 
 
 
 1. Return tuple containing response header and response body from all
 cinder-client methods.
 
 (response header contains 'x-openstack-request-id').
 
 
 
 Advantages:
 
 A. In future, if the response headers are modified then it will be available
 to the caller without making any changes to the python-cinderclient code.
 
 
 
 Disadvantages:
 
 A. Affects all services using python-cinderclient library as the return type
 of each method is changed to tuple.
 
 B. Need to refactor all methods exposed by the python-cinderclient library.
 Also requires changes in the cross projects wherever python-cinderclient
 calls are being made.
 
 
 
 Ex. :-
 
 From Nova, you will need to call cinder-client 'get' method like below :-
 
 resp_header, volume = cinderclient(context).volumes.get(volume_id)
 
 
 
 x-openstack-request-id = resp_header.get('x-openstack-request-id', None)
 
 
 
 Here cinder-client will return both response header and volume. From response
 header, you can get 'x-openstack-request-id'.
 
 
 
 2. The optional parameter 'return_req_id' of type list will be passed to each
 of the cinder-client method. If this parameter is passed then cinder-client
 will append ‘'x-openstack-request-id' received from cinder api to this list.
 
 
 
 This is already implemented in glance-client (for V1 api only)
 
 Blueprint :
 https://blueprints.launchpad.net/python-glanceclient/+spec/return-req-id
 
 Review link : https://review.openstack.org/#/c/68524/7
 
 
 
 Advantages:
 
 A. Requires changes in the cross projects only at places wherever
 python-cinderclient calls are being made requiring 'x-openstack-request-id’.
 
 
 
 Dis-advantages:
 
 A. Need to refactor all methods exposed by the python-cinderclient library.
 
 
 
 Ex. :-
 
 From Nova, you will need to pass return_req_id parameter as a list.
 
 kwargs['return_req_id'] = []
 
 item = cinderclient(context).volumes.get(volume_id, **kwargs)
 
 
 
 if kwargs.get('return_req_id'):
 
 x-openstack-request-id = kwargs['return_req_id'].pop()
 
 
 
 python-cinderclient will add 'x-openstack-request-id' to the 'return_req_id'
 list if it is provided in kwargs.
 
 
 
 IMO, solution #2 is better than #1 for the reasons quoted above.
 
 Takashi NATSUME has already proposed a patch for solution #2. Please review
 patch https://review.openstack.org/#/c/104482/.
 
 Would appreciate if you can think of any other better solution than #2.
 
 
 
 Thank you.
 
 

Abhijeet

So option 1 is a massive compatibility break. There's no way you can pull off a 
change in the return value like that without a new major version and everyone 
getting annoyed. 

My question is why does it need to be returned to the caller? What is the 
caller going to do with it other than send it to the debug log? It's an admin 
who is trying to figure out those logs later that wants the request-id included 
in that information, not the application at run time. 

Why not just have cinderclient log it as part of the standard request logging: 
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/client.py#L170
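A sketch of that suggestion (stdlib only; the function name is hypothetical, the header name is the one discussed in this thread): the client logs the request id as part of its normal response logging instead of returning it to the caller.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format='%(message)s')
LOG = logging.getLogger('cinderclient.client')

def log_request_id(resp_headers, method, url):
    # what the client's response-logging hook could do: pull the request
    # id out of the response headers and put it in the debug log, where
    # an admin correlating logs later can find it
    req_id = resp_headers.get('x-openstack-request-id')
    if req_id:
        LOG.debug('%s %s returned request id %s', method, url, req_id)
    return req_id

rid = log_request_id({'x-openstack-request-id': 'req-abc123'},
                     'GET', '/v2/volumes/<uuid>')
assert rid == 'req-abc123'
assert log_request_id({}, 'GET', '/v2/volumes') is None
```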



Jamie

 


[openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services

2014-11-30 Thread Jamie Lennox
TL;DR: I think we can handle most of oslo.context with some additions to
auth_token middleware and simplify policy enforcement (from a service
perspective) at the same time.

There is currently a push to release oslo.context as a
library, for reference:
https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py

Whilst I love the intent to standardize this
functionality I think that many of the requirements in there
are incorrect and don't apply to all services. It is my
understanding for example that read_only, show_deleted are
essentially nova requirements, and the use of is_admin needs
to be killed off, not standardized.

Currently each service builds a context based on headers
made available from auth_token middleware and some
additional interpretations based on that user
authentication. Each service does this slightly differently
based on its needs/when it copied it from nova.

I propose that auth_token middleware essentially handle the
creation and management of an authentication object that
will be passed and used by all services. This will
standardize so much of the oslo.context library that I'm not
sure it will be still needed. I bring this up now as I am
wanting to push this way and don't want to change things
after everyone has adopted oslo.context.

The current release of auth_token middleware creates and
passes to services (via env['keystone.token_auth']) an auth
plugin that can be passed to clients to use the current user
authentication. My intention here is to expand that object
to expose all of the authentication information required for
the services to operate.
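A sketch of how a service would consume that object (the environ key is the real one named above; the attribute names are illustrative of the kind of properties being proposed, not an existing API):

```python
# auth_token middleware stores an auth object in the WSGI environ; the
# proposal is for services to read everything from it rather than from
# individual X-* headers.

class StubTokenAuth:
    """Stand-in for the expanded auth plugin object."""
    def __init__(self, user_id, project_id, roles):
        self.user_id = user_id
        self.project_id = project_id
        self.roles = roles

def service_view(environ):
    auth = environ['keystone.token_auth']   # set by the middleware
    # resource ownership and policy enforcement key off this one object:
    return {'owner': auth.user_id,
            'project': auth.project_id,
            'is_admin': 'admin' in auth.roles}

env = {'keystone.token_auth': StubTokenAuth('u1', 'p1', ['member'])}
assert service_view(env) == {'owner': 'u1', 'project': 'p1',
                             'is_admin': False}
```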

There are two components to context that I can see:

 - The current authentication information that is retrieved
   from auth_token middleware.
 - service specific context added based on that user
   information eg read_only, show_deleted, is_admin,
   resource_id

Regarding the first point of current authentication there
are three places I can see this used:

 - communicating with other services as that user
 - associating resources with a user/project
 - policy enforcement

Addressing each of the 'current authentication' needs:

 - As mentioned for service to service communication
   auth_token middleware already provides an auth_plugin
   that can be used with (at this point most) of the
   clients. This greatly simplifies reusing an existing
   token and correctly using the service catalog as each
   client would do this differently. In future this plugin
   will be extended to provide support for concepts such as
   filling in the X-Service-Token [1] on behalf of the
   service, managing the request id, and generally
   standardizing service-service communication without
   requiring explicit support from every project and client.

 - Given that this authentication plugin is built within
   auth_token middleware it is a fairly trivial step to
   provide public properties on this object to give access
   to the current user_id, project_id and other relevant
   authentication data that the services can access. This is
   fairly well handled today but it means it is done without
   the service having to fetch all these objects from
   headers.

 - With upcoming changes to policy to handle features such
   as the X-Service-Token the existing context will need to
   gain a bunch of new entries. With the keystone team
   looking to wrap policy enforcement into its own
   standalone library it makes more sense to provide this
   authentication object directly to policy enforcement.
   This will allow the keystone team to manipulate policy
   data from both auth_token and the enforcement side,
   letting us introduce new features to policy transparent
   to the services. It will also standardize the naming of
   variables within these policy files.

What is left for a context object after this is managing
serialization and deserialization of this auth object and
any additional fields (read_only etc) that are generally
calculated at context creation time. This would be a very
small library.

There are still a number of steps to getting there:

 - Adding enough data to the existing authentication plugin
   to allow policy enforcement and general usage.
 - Making the authentication object serializable for
   transmitting between services.
 - Extracting policy enforcement into a library.

However I think that this approach brings enough benefits to
hold off on releasing and standardizing the use of the
current context objects.

I'd love to hear everyone thoughts on this, and where it
would fall down. I see there could be some issues with how
the context would fit into nova's versioned objects for
example - but I think this would be the same issues that an
oslo.context library would face anyway.

Jamie


[1] This is where service-service communication includes
the service token as well as the user token to allow smarter
policy and resource access. For example, a user can't access
certain neutron functions directly.

[openstack-dev] [oslo][kite] oslo.messaging changes for message security

2014-11-13 Thread Jamie Lennox
Hi all,

To implement kite we need the ability to sign and encrypt the message and the
message data. This needs to happen at a very low level in the oslo.messaging
stack. The existing message security review
(https://review.openstack.org/#/c/109806/) isn't going to be sufficient. It
allows us to sign/encrypt only the message data ignoring the information in the
context and not allowing us to sign the message as a whole. It would also
intercept and sign notifications which is not something that kite can do.

Mostly this is an issue of how the oslo.messaging library is constructed. The
choice of how data is serialized for transmission (including things like how
you arrange context and message data in the payload) is handled individually by
the driver layer rather than in a common higher location. All the drivers use
the same helper functions for this and so it isn't a problem in practice.

Essentially I need a stateful serializing/deserializing object (I need to store
keys and hold things like a connection to the kite server) that either extends
or replaces oslo.messaging._drivers.common.serialize_msg and deserialize_msg
and their exception counterparts.

There are a couple of ways I can see to do what I need:

1. Kite becomes a more integral part of oslo.messaging and the marshalling and
verification code becomes part of the existing RPC path. This is how it was
initially proposed, it does not provide a good story for future or alternative
implementations. Oslo.messaging would either have a dependency on kiteclient,
implement its own ways of talking to the server, or have some hack that imports
kiteclient if available.

2. Essentially I add a global object loaded from conf to the existing common
RPC file. Pro: The existing drivers continue to work as today, Con: global
state held by a library. However given the way oslo messaging works i'm not
really sure how much of a problem this is. We typically load transport from a
predefined location in the conf file and we're not really in a situation where
you might want to construct different transports with different parameters in
the same project.

3. I create a protocol object out of the RPC code that kite can subclass and
the protocol can be chosen by CONF when the transport/driver is created. This
still touches a lot of places as the protocol object would need to be passed to
all messages, consumers etc. It involves changing the interface of the drivers
to accept this new object and changes in each of the drivers to work with the
new protocol object rather than the existing helpers.

4. As the last option requires changing the driver interface anyway we try and
correct the driver interfaces completely. The driver send and receive functions
that currently accept a context and args parameters should only accept a
generic object/string consisting of already marshalled data. The code that
handles serializing and deserializing gets moved to a higher level and kite
would be pluggable there with the current RPC being default.
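Roughly, option 4 looks like this (a sketch, not oslo.messaging code; class names and the hash-based "signature" are illustrative stand-ins): marshalling lives above the drivers behind a small protocol interface, and a kite-backed implementation can be swapped in while the driver only ever sees an opaque serialized string.

```python
import json

class RPCProtocol:
    """Today's behaviour: a plain context+args envelope."""
    def serialize_msg(self, context, args):
        return json.dumps({'context': context, 'args': args})

    def deserialize_msg(self, data):
        msg = json.loads(data)
        return msg['context'], msg['args']

class SignedProtocol(RPCProtocol):
    """Stand-in for a kite protocol: wraps the whole message, context
    included, with a signature (a real one would use keys negotiated
    with the kite server)."""
    def serialize_msg(self, context, args):
        payload = super().serialize_msg(context, args)
        return json.dumps({'payload': payload,
                           'sig': str(hash(payload))})

    def deserialize_msg(self, data):
        wrapper = json.loads(data)
        assert wrapper['sig'] == str(hash(wrapper['payload']))
        return super().deserialize_msg(wrapper['payload'])

proto = SignedProtocol()
wire = proto.serialize_msg({'user': 'u1'}, {'method': 'ping'})
assert proto.deserialize_msg(wire) == ({'user': 'u1'}, {'method': 'ping'})
```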

None of these options involve changing the public facing interfaces nor the
messages emitted on the wire (when kite is not used).

I've been playing a little with option 3 and I don't think it's worth it. There
is a lot of code change and additional object passing that I don't think
improves the library in general.

Before I go too far down the path with option 4 I'd like to hear the thoughts
of people more familiar with the library.

Is there a reason that the drivers currently handle marshalling rather than the
RPC layer?

I know there is ongoing talk about evolving the oslo.messaging library, I
unfortunately didn't make it to the sessions at summit. Has this problem been
raised? How would it affect those talks?

Is there explicit/implicit support for out of tree drivers that would disallow
changing these interfaces?

Does anyone have alternative ideas on how to organize the library for message
security?

Thanks for the help.


Jamie



Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery

2014-10-21 Thread Jamie Lennox


- Original Message -
 From: Dolph Mathews dolph.math...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, October 20, 2014 4:38:25 PM
 Subject: Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery
 
 
 On Mon, Oct 20, 2014 at 7:04 AM, Jamie Lennox  jamielen...@redhat.com 
 wrote:
 
 
 
 
 - Original Message -
  From: Dolph Mathews  dolph.math...@gmail.com 
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org 
  Sent: Tuesday, October 7, 2014 6:56:15 PM
  Subject: Re: [openstack-dev] Horizon and Keystone: API Versions and
  Discovery
  
  
  
  On Tuesday, October 7, 2014, Adam Young  ayo...@redhat.com  wrote:
  
  
  Horizon has a config options which says which version of the Keystone API
  it
  should work against: V2 or V3. I am not certain that there is still any
  reason for Horizon to go against V2. However, If we defer the decision to
  Keystone, we come up against the problem of discovery.
  
  On the surface it is easy, as the Keystone client supports version
  discovery.
  The problem is that discovery must be run for each new client creation, and
  Horizon uses a new client per request. That would mean that every request
  to
  Horizon that talks to Keystone would generate at least one additional
  request.
  
  
  
  The response is cacheable.
 
 Not only is it cacheable, it is cached by default within the Session object you
 use, so that the session will only make one discovery request per service per
 session. So horizon can manage how long to cache discovery for by how long
 they hold on to a session object. As the session object doesn't contain any
 personal or sensitive data (that is all restricted to the auth plugin) the
 session object can be persisted between requests.
 
 Is there any reason not to cache to disk across sessions? The GET response is
 entirely endpoint-specific, not exactly session-based.
 

The only reason is that I didn't want to introduce a global variable cache in a 
library. The session should be a fairly long-running object and I'm looking at 
ways we could serialize it to allow horizon/CLIs to manage it themselves.

A quicker way would be to make the discovery cache an actual object and allow 
horizon/CLIs to handle that seperately to the session/auth plugin. I don't know 
which they would prefer. 
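To make the trade-off concrete, here is a minimal sketch of what a discovery cache owned by horizon/a CLI, held separately from the session, could look like: an endpoint-keyed map with a TTL mirroring the HTTP cache semantics of the discovery response. This is illustrative only, not keystoneclient's actual implementation; the class and method names are invented.

```python
import time


class DiscoveryCache:
    """Sketch of an endpoint-keyed version-discovery cache.

    Illustration of the idea discussed above (a cache object owned by
    horizon/a CLI rather than by the Session); not keystoneclient code.
    """

    def __init__(self, ttl=300):
        self._ttl = ttl    # seconds to trust a cached discovery response
        self._data = {}    # endpoint URL -> (fetched_at, discovery document)

    def get(self, endpoint):
        entry = self._data.get(endpoint)
        if entry is None:
            return None
        fetched_at, document = entry
        if time.time() - fetched_at > self._ttl:
            del self._data[endpoint]   # stale: force a fresh discovery call
            return None
        return document

    def put(self, endpoint, document):
        self._data[endpoint] = (time.time(), document)


cache = DiscoveryCache(ttl=300)
cache.put('https://keystone.example.com/', {'versions': ['v2.0', 'v3']})
doc = cache.get('https://keystone.example.com/')
```

Because the cached document is endpoint-specific and contains nothing user-specific, such an object could safely be shared across sessions or even persisted to disk.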

 
 Whether or not horizon works that way today - and whether the other services
 work with discovery as well as keystone does - I'm not sure.
 
  
  Is this significant?
  
  It gets a little worse when you start thinking about all of the other
  services out there. If each new request that has to talk to multiple
  services needs to run discovery, you can imagine that soon the majority of
  network chatter would be discovery based.
  
  
  It seems to me that Horizon should somehow cache this data, and share it
  among clients. Note that I am not talking about user specific data like the
  endpoints from the service catalog for a specific project. But the overall
  service catalog, as well as the supported versions of the API, should be
  cacheable. We can use the standard HTTP cache management API on the
  Keystone
  side to specify how long Horizon can trust the data to be current.
  
  I think this actually goes for the rest of the endpoints as well: we want
  to get to a much smaller service catalog, and we can do that by making the
  catalog hold only IDs. The constraints spec for endpoint binding will be
  endpoint only anyway, and so having the rest of the endpoint data cached
  will be valuable there as well.
  
  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Question regarding Service Catalog and Identity entries...

2014-10-21 Thread Jamie Lennox


- Original Message -
 From: Ben Meyer ben.me...@rackspace.com
 To: openstack-dev@lists.openstack.org
 Sent: Monday, October 20, 2014 3:53:39 PM
 Subject: Re: [openstack-dev] [Keystone] Question regarding Service Catalog 
 andIdentity entries...
 
 On 10/20/2014 08:12 AM, Jamie Lennox wrote:
  - Original Message -
  From: Ben Meyer ben.me...@rackspace.com
  To: openstack-dev@lists.openstack.org
  Cc: Jamie Painter jamie.pain...@rackspace.com
  Sent: Tuesday, October 7, 2014 4:31:16 PM
  Subject: [openstack-dev] [Keystone] Question regarding Service Catalog and
 Identity entries...
 
  I am trying to use the Python Keystone client to integrate
  authentication functionality into a project I am contributing to
  (https://github.com/rackerlabs/deuce-client).
  However, I ran into a situation where if I do the following:
 
  c = keystoneclient.v2_0.client.Client(username='username',
  password='password',
  auth_url="https://keystone-compatible-service.example.com/v2.0/")
  Failed to retrieve management_url from token
 
  I traced it through the Python Keystoneclient code and it fails due to
  not finding the identity service entry in the Service Catalog. The
  authentication otherwise happens in that it has already received a
  successful response and a full Service Catalog, aside from the
  missing identity service. This happens with both version 0.10 and 0.11
  python keystone clients; I did not try older clients.
 
  Talking with the service provider, their version does not include itself
  in the Service Catalog, and they learned that Keystone itself inserts
  itself into the Service Catalog.
  I can certainly understand having the identity service entry be
  part of the Service Catalog, but for them it is (at least for now) not
  desirable to do so.
 
  Questions:
  - Is it now a standard that Keystone inserts itself into the Service
  Catalog?
  It's not a standard that keystone inserts itself into the catalog; the
  cloud operator should maintain the list of endpoints for their deployment,
  and the 'identity' service should be amongst those endpoints. I'm unclear
  as to why it would be undesirable to list the identity endpoint in the
  service catalog. How would this addition change their deployment?
 The argument is that the Service Catalog is too big so they are hesitant
 to add new entries to it; and 'identity' in the catalog is redundant
 since you have to know the 'identity' end-point to even get the service
 catalog in the first place.
 
 Not saying I agree, just that that's the argument being made. If it is
 required by Keystone to be self-referential then they are likely to
 add it.

It's required for the CRUD operations (managing users, projects, roles etc.) of
keystoneclient. Whether it's realistic that you would ever separate the auth
process onto a different host than the keystone CRUD I'm not sure - I've never
seen it - but the idea is that beyond that initial auth contact there really is
no difference between keystone and any other service, and keystoneclient will
look up the catalog to determine how to talk to keystone.

  The problem with the code that you provided is that the token being
  returned from your code is unscoped, which means that it is not associated
  with a project and therefore doesn't have a service catalog, because the
  catalog can be project specific. Thus when you go to perform an operation
  the client will look for the URL it is supposed to talk to in an empty
  list and fail to find the identity endpoint. This message really needs to
  be improved. If you add a project_id or project_name to the client
  information then you should get back a token with a catalog.
 
 In my normal case I'm using the project_id field, but I have found that it
 didn't really matter what was used for the credentials in this case
 since they simply don't have the 'identity' end-points in the Service
 Catalog.
 
  - Or is the Python Keystone Client broken because it is forcing it to be
  so?
  I wouldn't say that it is broken because having an identity endpoint in
  your catalog is a required part of a deployment, in the same way that
  having a 'compute' endpoint is required if you want to talk to nova. I
  would be surprised by any decision to purposefully omit the 'identity'
  endpoint from the service catalog.
 
 See above; but from what you are presenting here it sounds like the
 deployment is broken, so it is in fact required by Keystone, even if
 only as a required part of a deployment.

As keystoneclient is used by heat, horizon, etc., I would think it's safe to
say it's required.

 Thanks
 
 Ben
 

Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery

2014-10-20 Thread Jamie Lennox


- Original Message -
 From: Dolph Mathews dolph.math...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, October 7, 2014 6:56:15 PM
 Subject: Re: [openstack-dev] Horizon and Keystone: API Versions and Discovery
 
 
 
 On Tuesday, October 7, 2014, Adam Young  ayo...@redhat.com  wrote:
 
 
 Horizon has a config option which says which version of the Keystone API it
 should work against: V2 or V3. I am not certain that there is still any
 reason for Horizon to go against V2. However, if we defer the decision to
 Keystone, we come up against the problem of discovery.
 
 On the surface it is easy, as the Keystone client supports version discovery.
 The problem is that discovery must be run for each new client creation, and
 Horizon uses a new client per request. That would mean that every request to
 Horizon that talks to Keystone would generate at least one additional
 request.
 
 
 
 The response is cacheable.

Not only is it cacheable, it is cached by default within the Session object you
use, so the session will only make one discovery request per service per
session. So horizon can manage how long to cache discovery for by how long they
hold on to a session object. As the session object doesn't contain any personal
or sensitive data (that is all restricted to the auth plugin) the session
object can be persisted between requests.

Whether or not horizon works that way today - and whether the other services
work with discovery as well as keystone does - I'm not sure.

 
 Is this significant?
 
 It gets a little worse when you start thinking about all of the other
 services out there. If each new request that has to talk to multiple
 services needs to run discovery, you can imagine that soon the majority of
 network chatter would be discovery based.
 
 
 It seems to me that Horizon should somehow cache this data, and share it
 among clients. Note that I am not talking about user specific data like the
 endpoints from the service catalog for a specific project. But the overall
 service catalog, as well as the supported versions of the API, should be
 cacheable. We can use the standard HTTP cache management API on the Keystone
 side to specify how long Horizon can trust the data to be current.
 
 I think this actually goes for the rest of the endpoints as well: we want to
 get to a much smaller service catalog, and we can do that by making the
 catalog hold only IDs. The constraints spec for endpoint binding will be
 endpoint only anyway, and so having the rest of the endpoint data cached
 will be valuable there as well.
 
 


Re: [openstack-dev] [Keystone] Question regarding Service Catalog and Identity entries...

2014-10-20 Thread Jamie Lennox


- Original Message -
 From: Ben Meyer ben.me...@rackspace.com
 To: openstack-dev@lists.openstack.org
 Cc: Jamie Painter jamie.pain...@rackspace.com
 Sent: Tuesday, October 7, 2014 4:31:16 PM
 Subject: [openstack-dev] [Keystone] Question regarding Service Catalog and
 Identity entries...
 
 I am trying to use the Python Keystone client to integrate
 authentication functionality into a project I am contributing to
 (https://github.com/rackerlabs/deuce-client).
 However, I ran into a situation where if I do the following:
 
  c = keystoneclient.v2_0.client.Client(username='username',
 password='password',
 auth_url="https://keystone-compatible-service.example.com/v2.0/")
 Failed to retrieve management_url from token
 
 I traced it through the Python Keystoneclient code and it fails due to
 not finding the identity service entry in the Service Catalog. The
 authentication otherwise happens in that it has already received a
 successful response and a full Service Catalog, aside from the
 missing identity service. This happens with both version 0.10 and 0.11
 python keystone clients; I did not try older clients.
 
 Talking with the service provider, their version does not include itself
 in the Service Catalog, and they learned that Keystone itself inserts
 itself into the Service Catalog.
 I can certainly understand having the identity service entry be
 part of the Service Catalog, but for them it is (at least for now) not
 desirable to do so.
 
 Questions:
 - Is it now a standard that Keystone inserts itself into the Service
 Catalog?

It's not a standard that keystone inserts itself into the catalog; the cloud
operator should maintain the list of endpoints for their deployment, and the
'identity' service should be amongst those endpoints. I'm unclear as to why it
would be undesirable to list the identity endpoint in the service catalog. How
would this addition change their deployment?

The problem with the code that you provided is that the token being returned
from your code is unscoped, which means that it is not associated with a
project and therefore doesn't have a service catalog, because the catalog can
be project specific. Thus when you go to perform an operation the client will
look for the URL it is supposed to talk to in an empty list and fail to find
the identity endpoint. This message really needs to be improved. If you add a
project_id or project_name to the client information then you should get back
a token with a catalog.
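The failure mode is easy to see with a toy version of the catalog lookup. This is a sketch of the behaviour described above, not keystoneclient's real code; the function name and catalog structure are illustrative (loosely modelled on a V2 catalog).

```python
def url_for(catalog, service_type):
    """Return a public endpoint for service_type from a V2-style catalog.

    Sketch of the lookup the client performs when it needs to talk to a
    service; illustrative only.
    """
    for service in catalog:
        if service.get('type') == service_type:
            return service['endpoints'][0]['publicURL']
    raise LookupError('no %r endpoint in the service catalog' % service_type)


# A scoped token carries a catalog, so the lookup succeeds...
scoped_catalog = [
    {'type': 'identity',
     'endpoints': [{'publicURL': 'https://keystone.example.com/v2.0'}]},
]
identity_url = url_for(scoped_catalog, 'identity')

# ...while an unscoped token has no catalog at all, so every lookup fails,
# which is what surfaces as "Failed to retrieve management_url".
unscoped_catalog = []
```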

 - Or is the Python Keystone Client broken because it is forcing it to be so?

I wouldn't say that it is broken because having an identity endpoint in your 
catalog is a required part of a deployment, in the same way that having a 
'compute' endpoint is required if you want to talk to nova. I would be 
surprised by any decision to purposefully omit the 'identity' endpoint from the 
service catalog. 

 Thanks,
 
 Benjamen R. Meyer
 


Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilites

2014-10-20 Thread Jamie Lennox


- Original Message -
 From: Nathan Kinder nkin...@redhat.com
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, October 14, 2014 2:25:35 AM
 Subject: Re: [openstack-dev] [all][policy][keystone] Better Policy Model and 
 Representing Capabilites
 
 
 
 On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
  Description of the problem: Without attempting an action on an endpoint
  with a current scoped token, it is impossible to know what actions are
  available to a user.
  
  
  Horizon makes some attempts to solve this issue by sourcing all of the
  policy files from all of the services to determine what a user can
  accomplish with a given role. This is highly inefficient as it requires
  processing the various policy.json files for each request in multiple
  places and presents a mechanism that is not really scalable to understand
  what a user can do with the current authorization. Horizon may not be the
  only service that (in the long term) would want to know what actions a
  token can take.
 
 This is also extremely useful for being able to actually support more
 restricted tokens.  If I as an end user want to request a token
 that only has the roles required to perform a particular action, I'm
 going to need to have a way of knowing what those roles are.  I think
 that is one of the main things missing to allow the role-filtered
 tokens option that I wrote up after the last Summit to be a viable
 approach:
 
   https://blog-nkinder.rhcloud.com/?p=101
 
  
  I would like to start a discussion on how we should improve our policy
  implementation (OpenStack wide) to help make it easier to know what is
  possible with a current authorization context (Keystone token). The key
  feature should be that whatever the implementation is, it doesn’t require
  another round-trip to a third party service to “enforce” the policy which
  avoids another scaling point like UUID Keystone token validation.
  
  Here are a couple of ideas that we’ve discussed over the last few
  development cycles (and none of this changes the requirements to manage
  scope of authorization, e.g. project, domain, trust, ...):
  
  1. Keystone is the holder of all policy files. Each service gets it’s
  policy file from Keystone and it is possible to validate the policy (by
  any other service) against a token provided they get the relevant policy
  file from the authoritative source (Keystone).
  
  Pros: This is nearly completely compatible with the current policy system.
  The biggest change is that policy files are published to Keystone instead
  of to a local file on disk. This also could open the door to having
  keystone build “stacked” policies (user/project/domain/endpoint/service
  specific) where the deployer could layer policy definitions (layering
  would allow for stricter enforcement at more specific levels, e.g. users
  from project X can’t terminate any VMs).
 
 I think that there are a some additional advantages to centralizing
 policy storage (not enforcement).
 
 - The ability to centralize management of policy would be very nice.  If
 I want to update the policy for all of my compute nodes, I can do it in
 one location without the need for external configuration management
 solutions.
 
 - We could piggy-back on Keystone's signing capabilities to allow policy
 to be signed, providing protection against policy tampering on an
 individual endpoint.
 
  
  Cons: This doesn’t ease up the processing requirement or the need to hold
  (potentially) a significant number of policy files for each service that
  wants to evaluate what actions a token can do.
 
 Are you thinking of there being a call to keystone that answers "what can I
 do with token A against endpoint B?"  This seems similar in concept to the
 LDAP "get effective rights" control.  There would
 definitely be some processing overhead to this though you could set up
 multiple keystone instances and replicate the policy to spread out the
 load.  It also might be possible to index the enforcement points by role
 in an attempt to minimize the processing for this sort of call.
 
  
  
  2. Each enforcement point in a service is turned into an attribute/role,
  and the token contains all of the information on what a user can do
  (effectively shipping the entire policy information with the token).
  
  Pros: It is trivial to know what a token provides access to: the token
  would contain something like `{“nova”: [“terminate”, “boot”], “keystone”:
  [“create_user”, “update_user”], ...}`. It would be easily possible to
  allow glance “get image” nova “boot” capability instead of needing to know
  the roles for policy.json for both glance and nova work for booting a new
  VM.
  
  Cons: This would likely require a central registry of all the actions that
  could be taken (something akin to an IANA port list). Without a grouping
  to apply these authorizations to a user (e.g. keystone_admin would convey
  “create_project, delete_project, update_project, 
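A capability-bearing token like the one sketched in option 2 above could be checked locally, with no policy file and no extra round-trip. The sketch below is illustrative only: no such token format exists today, and the function name is invented.

```python
def token_allows(token_capabilities, service, action):
    """Check an action against option 2's capability-bearing token.

    `token_capabilities` is the mapping sketched above, e.g.
    {"nova": ["terminate", "boot"], "keystone": ["create_user"]}.
    Hypothetical format - not an existing Keystone token field.
    """
    return action in token_capabilities.get(service, [])


caps = {'nova': ['terminate', 'boot'],
        'keystone': ['create_user', 'update_user']}
can_boot = token_allows(caps, 'nova', 'boot')
can_delete_user = token_allows(caps, 'keystone', 'delete_user')
```

This also shows the central-registry con: the strings 'nova' and 'boot' only mean anything if every service agrees on them, akin to an IANA port list.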

Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Jamie Lennox


- Original Message -
 From: Sean Dague s...@dague.net
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, 11 September, 2014 9:44:43 PM
 Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
 tokens leads to overall OpenStack fragility
 
 On 09/10/2014 08:46 PM, Jamie Lennox wrote:
  
  - Original Message -
  From: Steven Hardy sha...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Thursday, September 11, 2014 1:55:49 AM
  Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying
  tokens leads to overall OpenStack fragility
 
  On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
  Going through the untriaged Nova bugs, and there are a few on a similar
  pattern:
 
  Nova operation in progress takes a while
  Crosses keystone token expiration time
  Timeout thrown
  Operation fails
  Terrible 500 error sent back to user
 
  We actually have this exact problem in Heat, which I'm currently trying to
  solve:
 
  https://bugs.launchpad.net/heat/+bug/1306294
 
  Can you clarify, is the issue either:
 
  1. Create novaclient object with username/password
  2. Do series of operations via the client object which eventually fail
  after $n operations due to token expiry
 
  or:
 
  1. Create novaclient object with username/password
  2. Some really long operation which means token expires in the course of
  the service handling the request, blowing up and 500-ing
 
  If the former, then it does sound like a client, or usage-of-client bug,
  although note if you pass a *token* vs username/password (as is currently
  done for glance and heat in tempest, because we lack the code to get the
  token outside of the shell.py code..), there's nothing the client can do,
  because you can't request a new token with longer expiry with a token...
 
  However if the latter, then it seems like not really a client problem to
  solve, as it's hard to know what action to take if a request failed
  part-way through and thus things are in an unknown state.
 
  This issue is a hard problem, which can possibly be solved by
  switching to a trust scoped token (service impersonates the user), but
  then
  you're effectively bypassing token expiry via delegation which sits
  uncomfortably with me (despite the fact that we may have to do this in
  heat to solve the aforementioned bug)
 
  It seems like we should have a standard pattern that on token expiration
  the underlying code at least gives one retry to try to establish a new
  token to complete the flow, however as far as I can tell *no* clients do
  this.
 
  As has been mentioned, using sessions may be one solution to this, and
  AFAIK session support (where it doesn't already exist) is getting into
  various clients via the work being carried out to add support for v3
  keystone by David Hu:
 
  https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
 
  I see patches for Heat (currently gating), Nova and Ironic.
 
  I know we had to add that into Tempest because tempest runs can exceed 1
  hr, and we want to avoid random fails just because we cross a token
  expiration boundary.
 
  I can't claim great experience with sessions yet, but AIUI you could do
  something like:
 
  from keystoneclient.auth.identity import v3
  from keystoneclient import session
  from keystoneclient.v3 import client
 
  auth = v3.Password(auth_url=OS_AUTH_URL,
 username=USERNAME,
 password=PASSWORD,
 project_id=PROJECT,
 user_domain_name='default')
  sess = session.Session(auth=auth)
  ks = client.Client(session=sess)
 
  And if you can pass the same session into the various clients tempest
  creates then the Password auth-plugin code takes care of reauthenticating
  if the token cached in the auth plugin object is expired, or nearly
  expired:
 
  https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
 
  So in the tempest case, it seems like it may be a case of migrating the
  code creating the clients to use sessions instead of passing a token or
  username/password into the client object?
 
  That's my understanding of it atm anyway, hopefully jamielennox will be
  along
  soon with more details :)
 
  Steve
  
  
  By clients here are you referring to the CLIs or the python libraries?
  Implementation is at different points with each.
  
  Sessions will handle automatically reauthenticating and retrying a request,
  however it relies on the service throwing a 401 Unauthenticated error. If
  a service is returning a 500 (or a timeout?) then there isn't much that a
  client can/should do for that because we can't assume that trying again
  with a new token will solve anything.
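The distinction drawn above - re-authenticate and retry on a 401, but hand back a 500 untouched - can be sketched as follows. The helper and the fake objects are illustrative stand-ins, not the Session's actual code.

```python
class FakeAuth:
    """Stand-in auth plugin: hands out tokens and can be invalidated."""
    def __init__(self):
        self.generation = 0

    def get_token(self):
        return 'token-%d' % self.generation

    def invalidate(self):
        self.generation += 1


class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code


def request_with_reauth(send, auth):
    """Sketch of the retry behaviour described above.

    Re-authenticate and retry exactly once, and only on a 401: a 500 or a
    timeout is returned as-is, since a fresh token would not help.
    """
    resp = send(auth.get_token())
    if resp.status_code == 401:
        auth.invalidate()                 # drop the expired token
        resp = send(auth.get_token())     # one retry with a new token
    return resp


# A service that rejects the first token and accepts the reissued one.
def send(token):
    return FakeResponse(200 if token == 'token-1' else 401)


resp = request_with_reauth(send, FakeAuth())
```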
  
  At the moment we have keystoneclient, novaclient, cinderclient
  neutronclient and then a number of the smaller projects with support

Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-11 Thread Jamie Lennox


- Original Message -
 From: Steven Hardy sha...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, 12 September, 2014 12:21:52 AM
 Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
 tokens leads to overall OpenStack fragility
 
 On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
  
  - Original Message -
   From: Steven Hardy sha...@redhat.com
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org
   Sent: Thursday, September 11, 2014 1:55:49 AM
   Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying
   tokens leads to overall OpenStack fragility
   
   On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
Going through the untriaged Nova bugs, and there are a few on a similar
pattern:

Nova operation in progress takes a while
Crosses keystone token expiration time
Timeout thrown
Operation fails
Terrible 500 error sent back to user
   
   We actually have this exact problem in Heat, which I'm currently trying
   to
   solve:
   
   https://bugs.launchpad.net/heat/+bug/1306294
   
   Can you clarify, is the issue either:
   
   1. Create novaclient object with username/password
   2. Do series of operations via the client object which eventually fail
   after $n operations due to token expiry
   
   or:
   
   1. Create novaclient object with username/password
   2. Some really long operation which means token expires in the course of
   the service handling the request, blowing up and 500-ing
   
   If the former, then it does sound like a client, or usage-of-client bug,
   although note if you pass a *token* vs username/password (as is currently
   done for glance and heat in tempest, because we lack the code to get the
   token outside of the shell.py code..), there's nothing the client can do,
   because you can't request a new token with longer expiry with a token...
   
   However if the latter, then it seems like not really a client problem to
   solve, as it's hard to know what action to take if a request failed
   part-way through and thus things are in an unknown state.
   
   This issue is a hard problem, which can possibly be solved by
   switching to a trust scoped token (service impersonates the user), but
   then
   you're effectively bypassing token expiry via delegation which sits
   uncomfortably with me (despite the fact that we may have to do this in
   heat to solve the aforementioned bug)
   
It seems like we should have a standard pattern that on token
expiration
the underlying code at least gives one retry to try to establish a new
token to complete the flow, however as far as I can tell *no* clients
do
this.
   
   As has been mentioned, using sessions may be one solution to this, and
   AFAIK session support (where it doesn't already exist) is getting into
   various clients via the work being carried out to add support for v3
   keystone by David Hu:
   
   https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
   
   I see patches for Heat (currently gating), Nova and Ironic.
   
I know we had to add that into Tempest because tempest runs can exceed
1
hr, and we want to avoid random fails just because we cross a token
expiration boundary.
   
   I can't claim great experience with sessions yet, but AIUI you could do
   something like:
   
   from keystoneclient.auth.identity import v3
   from keystoneclient import session
   from keystoneclient.v3 import client
   
   auth = v3.Password(auth_url=OS_AUTH_URL,
  username=USERNAME,
  password=PASSWORD,
  project_id=PROJECT,
  user_domain_name='default')
   sess = session.Session(auth=auth)
   ks = client.Client(session=sess)
   
   And if you can pass the same session into the various clients tempest
   creates then the Password auth-plugin code takes care of reauthenticating
   if the token cached in the auth plugin object is expired, or nearly
   expired:
   
   https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
   
   So in the tempest case, it seems like it may be a case of migrating the
   code creating the clients to use sessions instead of passing a token or
   username/password into the client object?
   
   That's my understanding of it atm anyway, hopefully jamielennox will be
   along
   soon with more details :)
   
   Steve
  
  
  By clients here are you referring to the CLIs or the python libraries?
  Implementation is at different points with each.
 
 I think for both heat and tempest we're talking about the python libraries
 (Client objects).
 
  Sessions will handle automatically reauthenticating and retrying a request,
  however it relies on the service throwing a 401 Unauthenticated error

Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-11 Thread Jamie Lennox


- Original Message -
 From: Travis S Tripp travis.tr...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, 12 September, 2014 10:30:53 AM
 Subject: [openstack-dev] masking X-Auth-Token in debug output - proposed 
 consistency
 
 
 
 Hi All,
 
 
 
 I’m just helping with bug triage in Glance and we’ve got a bug to update how
 tokens are redacted in the glanceclient [1]. It says to update to whatever
 cross-project approach is agreed upon and references this thread:
 
 
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html
 
 
 
 I just went through the thread and as best as I can tell there wasn’t a
 conclusion in the ML. However, if we are going to do anything, IMO the
 thread leans toward {SHA1}, with Morgan Fainberg dissenting.
 However, he references a patch that was ultimately abandoned.
 
 
 
 If there was a conclusion to this, please let me know so I can update and
 work on closing this bug.

We handle this in the keystoneclient Session object by just printing REDACTED 
or something similar. The problem with using a SHA1 is that for backwards 
compatibility we often use the SHA1 of a PKI token as if it were a UUID token, 
and so this is still sensitive data. There is work in keystone by 
morganfainberg (which I think was merged) to add a new audit_id which will be 
able to identify a token across calls without exposing any sensitive 
information. We will support this in the session when available.

The best I can say for standardization is that when glanceclient adopts the 
session it will be handled the same way as all the other clients, and 
improvements can happen there without you having to worry about it.
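For illustration, the redaction behaviour described above amounts to something like the following. This is a sketch, not keystoneclient's exact code; the function name and the '{REDACTED}' placeholder are assumptions.

```python
def safe_headers(headers):
    """Return a copy of request headers safe for debug logging.

    Sketch of the Session behaviour described above: the token is replaced
    wholesale with a placeholder rather than a {SHA1} digest, because the
    SHA1 of a PKI token can itself be used like a UUID token and is
    therefore still sensitive.
    """
    safe = dict(headers)
    if 'X-Auth-Token' in safe:
        safe['X-Auth-Token'] = '{REDACTED}'
    return safe


logged = safe_headers({'X-Auth-Token': 'gAAAA-very-secret-token',
                       'Accept': 'application/json'})
```

Only the logged copy is touched; the real headers still carry the token to the service.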


Jamie

 
 [1] https://bugs.launchpad.net/python-glanceclient/+bug/1329301
 
 
 
 Thanks,
 Travis
 


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-10 Thread Jamie Lennox

- Original Message -
 From: Steven Hardy sha...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, September 11, 2014 1:55:49 AM
 Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
 tokens leads to overall OpenStack fragility
 
 On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
  Going through the untriaged Nova bugs, and there are a few on a similar
  pattern:
  
  Nova operation in progress takes a while
  Crosses keystone token expiration time
  Timeout thrown
  Operation fails
  Terrible 500 error sent back to user
 
 We actually have this exact problem in Heat, which I'm currently trying to
 solve:
 
 https://bugs.launchpad.net/heat/+bug/1306294
 
 Can you clarify, is the issue either:
 
 1. Create novaclient object with username/password
 2. Do series of operations via the client object which eventually fail
 after $n operations due to token expiry
 
 or:
 
 1. Create novaclient object with username/password
 2. Some really long operation which means token expires in the course of
 the service handling the request, blowing up and 500-ing
 
 If the former, then it does sound like a client, or usage-of-client bug,
 although note if you pass a *token* vs username/password (as is currently
 done for glance and heat in tempest, because we lack the code to get the
 token outside of the shell.py code..), there's nothing the client can do,
 because you can't request a new token with longer expiry with a token...
 
 However if the latter, then it seems like not really a client problem to
 solve, as it's hard to know what action to take if a request failed
 part-way through and thus things are in an unknown state.
 
 This issue is a hard problem, which can possibly be solved by
 switching to a trust-scoped token (the service impersonates the user), but
 then you're effectively bypassing token expiry via delegation, which sits
 uncomfortably with me (despite the fact that we may have to do this in heat
 to solve the aforementioned bug)
 
  It seems like we should have a standard pattern that on token expiration
  the underlying code at least gives one retry to try to establish a new
  token to complete the flow, however as far as I can tell *no* clients do
  this.
 
 As has been mentioned, using sessions may be one solution to this, and
 AFAIK session support (where it doesn't already exist) is getting into
 various clients via the work being carried out to add support for v3
 keystone by David Hu:
 
 https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z
 
 I see patches for Heat (currently gating), Nova and Ironic.
 
  I know we had to add that into Tempest because tempest runs can exceed 1
  hr, and we want to avoid random fails just because we cross a token
  expiration boundary.
 
 I can't claim great experience with sessions yet, but AIUI you could do
 something like:
 
 from keystoneclient.auth.identity import v3
 from keystoneclient import session
 from keystoneclient.v3 import client
 
 auth = v3.Password(auth_url=OS_AUTH_URL,
username=USERNAME,
password=PASSWORD,
project_id=PROJECT,
user_domain_name='default')
 sess = session.Session(auth=auth)
 ks = client.Client(session=sess)
 
 And if you can pass the same session into the various clients tempest
 creates then the Password auth-plugin code takes care of reauthenticating
 if the token cached in the auth plugin object is expired, or nearly
 expired:
 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
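 [Editor's note: the "expired, or nearly expired" check linked above can be
 sketched roughly as follows. The 30-second stale window and the function
 name are assumptions for illustration; the real logic lives in
 python-keystoneclient's access/base code.]

```python
import datetime

# Assumed stale window: treat a token as expired slightly *before* its
# real deadline so an in-flight request doesn't race the expiry.
STALE_DURATION = datetime.timedelta(seconds=30)

def will_expire_soon(expires_at, now=None):
    # Returns True when the token is expired or within the stale window.
    now = now or datetime.datetime.utcnow()
    return expires_at - STALE_DURATION <= now
```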
 
 So in the tempest case, it seems like it may be a case of migrating the
 code creating the clients to use sessions instead of passing a token or
 username/password into the client object?
 
 That's my understanding of it atm anyway, hopefully jamielennox will be along
 soon with more details :)
 
 Steve

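[Editor's note: the pattern Steve describes, several clients sharing one Session so they all reuse the same cached (and re-fetched) token, can be sketched with simplified stand-ins. `Session`, `NovaClient`, and `HeatClient` here are illustrative classes, not the real keystoneclient/novaclient ones.]

```python
class Session:
    """Simplified stand-in for a keystoneclient-style session."""
    def __init__(self, auth):
        self._auth = auth      # callable returning a fresh token
        self._token = None

    def get_token(self):
        # Fetch lazily and cache; real code would also check expiry.
        if self._token is None:
            self._token = self._auth()
        return self._token

class NovaClient:
    def __init__(self, session):
        self.session = session

class HeatClient:
    def __init__(self, session):
        self.session = session

# Both clients share one session, so they see the same cached token.
sess = Session(auth=lambda: "token-1")
nova, heat = NovaClient(sess), HeatClient(sess)
assert nova.session.get_token() == heat.session.get_token()
```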

By clients here, are you referring to the CLIs or the python libraries? 
Implementation is at a different point with each. 

Sessions will handle automatically reauthenticating and retrying a request; 
however, this relies on the service throwing a 401 Unauthorized error. If a 
service is returning a 500 (or a timeout), there isn't much that a client 
can or should do, because we can't assume that trying again with a new 
token will solve anything. 
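[Editor's note: the reauthenticate-on-401 behaviour described here can be sketched generically. `send` and `authenticate` are hypothetical stand-ins for the transport call and the auth plugin; the real implementation is in keystoneclient's Session.request().]

```python
class ExpiredToken(Exception):
    """Raised by the transport when the service returns 401."""

def request_with_retry(send, authenticate, token):
    """Issue a request; on a 401, fetch a fresh token and retry once.

    Note a 500 or a timeout is deliberately NOT caught here: a new
    token cannot be assumed to fix those, as the text above explains.
    """
    try:
        return send(token)
    except ExpiredToken:
        # The cached token was rejected; reauthenticate and retry once.
        return send(authenticate())
```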

At the moment we have keystoneclient, novaclient, cinderclient, neutronclient, 
and a number of the smaller projects with support for sessions. That 
obviously doesn't mean that existing users of that code have transitioned to 
the newer way, though. David Hu has been working on using this code within the 
existing CLIs. I have prototypes for at least nova talking to neutron and 
cinder which I'm waiting on Kilo to push. From there it should be easier to do 
this for other services. 

For service to service communication there are 
