[openstack-dev] Unsubscribe

2018-06-05 Thread Henry Nash


> On 5 Jun 2018, at 14:56, Eric Fried  wrote:
> 
> To summarize: cyborg could model things nested-wise, but there would be
> no way to schedule them yet.
> 
> Couple of clarifications inline.
> 
> On 06/05/2018 08:29 AM, Jay Pipes wrote:
>> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>>> I thought nested resource providers were already supported by
>>> placement? To the best of my knowledge, what is /not/ supported is
>>> virt drivers using these to report NUMA topologies but I doubt that
>>> affects you. The placement guys will need to weigh in on this as I
>>> could be missing something but it sounds like you can start using this
>>> functionality right now.
>> 
>> To be clear, this is what placement and nova *currently* support with
>> regards to nested resource providers:
>> 
>> 1) When creating a resource provider in placement, you can specify a
>> parent_provider_uuid and thus create trees of providers. This was
>> placement API microversion 1.14. Also included in this microversion was
>> support for displaying the parent and root provider UUID for resource
>> providers.
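As a concrete sketch of #1 (names and UUIDs here are illustrative, not from any real deployment), the request body for creating a child provider under microversion 1.14 looks roughly like:

```python
import json
import uuid

def provider_request_body(name, parent_uuid=None):
    # Build a POST /resource_providers body; microversion 1.14 added the
    # optional parent_provider_uuid field used below. The request would be
    # sent with the header "OpenStack-API-Version: placement 1.14".
    body = {"name": name, "uuid": str(uuid.uuid4())}
    if parent_uuid is not None:
        body["parent_provider_uuid"] = parent_uuid
    return body

root = provider_request_body("compute-node-1")
child = provider_request_body("compute-node-1-numa0", parent_uuid=root["uuid"])
print(json.dumps(child, sort_keys=True))
```

Omitting parent_provider_uuid (as for `root` above) creates an ordinary root provider, which is how trees are started.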
>> 
>> 2) The nova "scheduler report client" (terrible name, it's mostly just
>> the placement client at this point) understands how to call placement
>> API 1.14 and create resource providers with a parent provider.
>> 
>> 3) The nova scheduler report client uses a ProviderTree object [1] to
>> cache information about the hierarchy of providers that it knows about.
>> For nova-compute workers managing hypervisors, that means the
>> ProviderTree object contained in the report client is rooted in a
>> resource provider that represents the compute node itself (the
>> hypervisor). For nova-compute workers managing baremetal, that means the
>> ProviderTree object contains many root providers, each representing an
>> Ironic baremetal node.
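A toy analogue of that cache (heavily simplified relative to the real ProviderTree code at [1], with made-up UUID strings) shows the two shapes described above, one root for a hypervisor versus many roots for ironic:

```python
class _Provider:
    # Minimal stand-in for a cached resource provider node.
    def __init__(self, uuid, parent=None):
        self.uuid = uuid
        self.parent = parent
        self.children = []

class ProviderTree:
    """Sketch only: a forest of providers, one root per hypervisor or
    per ironic baremetal node."""
    def __init__(self):
        self.roots = []

    def new_root(self, uuid):
        node = _Provider(uuid)
        self.roots.append(node)
        return node

    def new_child(self, uuid, parent):
        node = _Provider(uuid, parent=parent)
        parent.children.append(node)
        return node

# Hypervisor case: single root representing the compute node itself.
tree = ProviderTree()
cn = tree.new_root("compute-node-uuid")
numa0 = tree.new_child("numa0-uuid", cn)

# Ironic case: many roots, one per baremetal node.
ironic_tree = ProviderTree()
for n in range(3):
    ironic_tree.new_root("ironic-node-%d" % n)
```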
>> 
>> 4) The placement API's GET /allocation_candidates endpoint now
>> understands the concept of granular request groups [2]. Granular request
>> groups are only relevant when a user wants to specify that child
>> providers in a provider tree should be used to satisfy part of an
>> overall scheduling request. However, this support is as yet incomplete --
>> see #5 below.
> 
> Granular request groups are also usable/useful when sharing providers
> are in play. That functionality is complete on both the placement side
> and the report client side (see below).
> 
>> The following parts of the nested resource providers modeling are *NOT*
>> yet complete, however:
>> 
>> 5) GET /allocation_candidates does not currently return *results* when
>> granular request groups are specified. So, while the placement service
>> understands the *request* for granular groups, it doesn't yet have the
>> ability to constrain the returned candidates appropriately. Tetsuro is
>> actively working on this functionality in this patch series:
>> 
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
>> 
>> 
>> 6) The virt drivers need to implement the update_provider_tree()
>> interface [3] and construct the tree of resource providers along with
>> appropriate inventory records for each child provider in the tree. Both
>> libvirt and XenAPI virt drivers have patch series up that begin to take
>> advantage of the nested provider modeling. However, a number of concerns
>> [4] about in-place nova-compute upgrades when moving from a single
>> resource provider to a nested provider tree model were raised, and we
>> have begun brainstorming how to handle the migration of existing data in
>> the single-provider model to the nested provider model. [5] We are
>> blocking any reviews on patch series that modify the local provider
>> modeling until these migration concerns are fully resolved.
>> 
>> 7) The scheduler does not currently pass granular request groups to
>> placement.
> 
> The code is in place to do this [6] - so the scheduler *will* pass
> granular request groups to placement if your flavor specifies them.  As
> noted above, such flavors will be limited to exploiting sharing
> providers until Tetsuro's series merges.  But no further code work is
> required on the scheduler side.
> 
> [6] https://review.openstack.org/#/c/515811/
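For reference, the granular syntax from the spec [2] numbers the request groups in flavor extra specs ("resourcesN:RESOURCE_CLASS"); a rough sketch of parsing that shape (the keys below are made-up examples, not a real flavor) is:

```python
import re

_GROUP = re.compile(r"^resources(\d+):(.+)$")

def granular_groups(extra_specs):
    # Collect numbered "resourcesN:RESOURCE_CLASS" extra specs into
    # per-group resource dicts, per the Queens granular-resource-requests
    # spec. Unnumbered "resources:..." keys (the base group) are ignored
    # here for brevity.
    groups = {}
    for key, value in extra_specs.items():
        m = _GROUP.match(key)
        if m:
            groups.setdefault(int(m.group(1)), {})[m.group(2)] = int(value)
    return groups

specs = {"resources1:VGPU": "1", "resources2:SRIOV_NET_VF": "2"}
print(granular_groups(specs))
```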
> 
>> Once #5 and #6 are resolved, and once the migration/upgrade
>> path is resolved, clearly we will need to have the scheduler start
>> making requests to placement that represent the granular request groups
>> and have the scheduler pass the resulting allocation candidates to its
>> filters and weighers.
>> 
>> Hope this helps highlight where we currently are and the work still left
>> to do (in Rocky) on nested resource providers.
>> 
>> Best,
>> -jay
>> 
>> 
>> [1]
>> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
>> 
>> [2]
>> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
>> 
>> 
>> [3]
>> 

[openstack-dev] [keystone] Signing off

2018-05-30 Thread Henry Nash
Hi
 
It is with a somewhat heavy heart that I have decided that it is time to hang up my keystone core status. Having been involved since the closing stages of Folsom, I've had a good run! When I look at how far keystone has come since the v2 days, it is remarkable - and we should all feel a sense of pride in that.
 
Thanks to all the hard work, commitment, humour and support from all the keystone folks over the years - I am sure we will continue to interact and meet among the many other open source projects that many of us are becoming involved with. Ad astra!
 
Best regards,
 
Henry
Twitter: @henrynash
linkedIn: www.linkedin.com/in/henrypnash
 Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Colleen Murphy for core

2017-05-02 Thread Henry Nash
Congratulations, Colleen - well deserved.

Henry
> On 2 May 2017, at 19:15, Lance Bragstad  wrote:
> 
> Hey folks,
> 
> During today's keystone meeting we added another member to keystone's core 
> team. For several releases, Colleen's had a profound impact on keystone. Her 
> reviews are meticulous and of incredible quality. She has no hesitation to 
> jump into keystone's most confusing realms and as a result has become an 
> expert on several identity topics like federation and LDAP integration.
> 
> I'd like to thank Colleen for all her hard work and upholding the stability 
> and usability of the project.
> 
> 
> Congratulations, Colleen!




Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Henry Nash
Yes, that was indeed the model originally proposed (some referred to it as 
“nested domains”). Back then we didn’t have project hierarchy support in 
keystone (actually the two requirements emerged together and intertwined - and 
for a while there was a joint spec). Today, there are some interesting 
characteristics in keystone:

1) Project hierarchy support
2) Domains are actually projects under-the-hood, with a special attribute 
(is_domain == true).
3) Hence domains are already part of the hierarchy - they are just only allowed 
to be the root of a tree.
4) In fact, if we really want to get technical, in keystone the project 
representing a domain does actually have a parent (the hidden “root of all 
domains” which we don’t expose at the API level)
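Points 1-4 can be sketched as a toy model (simplified, not the real keystone schema; the rule function is hypothetical), which also shows how small the change for nested domains would be:

```python
class Project:
    def __init__(self, id, parent=None, is_domain=False):
        self.id, self.parent, self.is_domain = id, parent, is_domain

# The hidden "root of all domains", never exposed at the API level.
ROOT = Project("<<root-of-all-domains>>", is_domain=True)

def may_create_domain(parent, allow_nested=False):
    # Current rule: domains may only sit at the top of the tree, i.e.
    # parented by the hidden root. The "nested domains" / reseller idea
    # would also allow a domain under another domain.
    if parent is ROOT:
        return True
    return allow_nested and parent.is_domain

reseller = Project("reseller-x", parent=ROOT, is_domain=True)
```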

So from the above, one can see that allowing more than one layer of domains at 
the top of the tree would be (implementation-wise) relatively easy. Although this 
has traditionally been my preferred solution, just ‘cause it is alluring and 
seems easy, doesn’t mean it is necessarily the right solution.

The most common alternative proposed is to use some kind of federation. The 
most likely scenario would be that the relationship between the cloud owner and 
a reseller would be a federated one, while the relationship between a reseller 
and their customers would be a traditional one of each customer having a 
domain. Some of the challenges to this approach would be:

a) How do we continually sync the catalogs? Presumably you would want all the 
endpoints (except keystone) to be the same in each catalog? 
b) What are the problems with having the same endpoint registered in multiple 
catalogs? How would keystone middleware on a common, say, nova endpoint know 
which keystone to validate with?
c) How to stop “admin” from one keystone from being treated as general “admin” 
on, say, a nova endpoint?
d) On-boarding a reseller would be a more manual process (i.e. you need to set 
up federation mappings etc.)

In some respects, you could argue that if I were a reseller, I would like this 
federated model better. I know, for sure, that nobody outside of my VCO can get 
access (i.e. since I have my own keystone, a token obtained from a different 
reseller’s keystone has no chance of getting in). However, I don’t believe we 
have ever explored how to solve the various issues above.

Henry

> On 17 Mar 2017, at 10:38, Adrian Turjak <adri...@catalyst.net.nz> wrote:
> 
> This actually sounds a lot like a domain per reseller, with a sub-domain per 
> reseller customer, with the resellers themselves probably also running a 
> sub-domain or two for themselves - maybe even running their own external 
> federated user system for that specific reseller domain.
> 
> That or something like it could be doable. The reseller would be aware of the 
> resources (likely to bill) and projects (since you would still likely bill 
> project or at least tag invoice items per project), so the hidden project 
> concept doesn't really fit.
> 
> One thing that I do think is useful, and we've done for our cloud, is letting 
> users see who exactly has access to their projects. For our Horizon we have a 
> custom identity/access control panel that shows clearly who has access on 
> your project, and once I add proper inheritance support it will also list 
> users who have inherited access for the project you are currently scoped to. 
> This means a customer knows and can see when an admin has added themselves to 
> their project to help debug something. Plus it even helps them in general 
> manage their own user access better.
> 
> I know we've been looking at the reseller model ourselves but haven't really 
> gotten anywhere with it, partly because the margins didn't seem worth it, but 
> also because of the complexity of shoe-horning it around our existing OpenStack 
> deployment. Domain per reseller (reseller as domain admin) and sub-domain per 
> reseller customer (as sub-domain admin) I'm interested in!
> 
> 
> 
> On 17 Mar. 2017 10:27 pm, Henry Nash <henryna...@mac.com> wrote:
> OK, so I worked on the original spec for this in Keystone, based around real 
> world requirements we (IBM) saw.  For the record, here’s the particular goal 
> we wanted to achieve:
> 
> 1) A cloud owner (i.e. the enterprise owns and maintains the core of the 
> cloud), wants to attract more traffic by using third-party resellers or 
> partners.
> 2) Those resellers will sell “cloud” to their own customers and be 
> responsible for finding & managing such customers. These resellers want to be 
> able to onboard such customers and “hand them the admin keys” to their 
> respective (conceptual) pieces of the cloud. I.e. Reseller X signs up 
> Customer Y. Customer Y gets “keystone admin” for their bit of the (shared) 
> cloud, and then can create and manage their own users without

Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Henry Nash
OK, so I worked on the original spec for this in Keystone, based around real 
world requirements we (IBM) saw.  For the record, here’s the particular goal we 
wanted to achieve:

1) A cloud owner (i.e. the enterprise owns and maintains the core of the 
cloud), wants to attract more traffic by using third-party resellers or 
partners.
2) Those resellers will sell “cloud” to their own customers and be responsible 
for finding & managing such customers. These resellers want to be able to 
onboard such customers and “hand them the admin keys” to their respective 
(conceptual) pieces of the cloud. I.e. Reseller X signs up Customer Y. Customer 
Y gets “keystone admin” for their bit of the (shared) cloud, and then can 
create and manage their own users without either the Reseller or the Overall 
cloud owner being involved. In keystone terms, each actual customer gets the 
equivalent of a domain, so that their users are separate from any other 
customers’ users across all resellers.
3) Resellers will want to be able to have a view of all their customers (but 
ONLY their customers, not those of another reseller), e.g. assign quotas to 
each one…and make sure the overall quota for all their customers has not 
exceeded their own quota agreed with the overall cloud owner. Resellers may 
sell additional services to their customers, e.g. act as support for problems, 
do backups, whatever - things that might need them to have controlled access to 
particular customers' pieces of the cloud - i.e. they might need roles (or at 
least some kind of access rights) on their customers’ clouds.
4) In all of the above, by default, all hardware is shared and there are no 
dedicated endpoints (e.g. nova, neutron, keystone etc. are all shared), 
although such dedication should not be prevented should a customer want it.

The above is somewhat analogous to how mobile virtual network operators (MVNOs) 
work - e.g. in the UK if I sign up for Sky Mobile, it is actually using the O2 
network. As a Sky customer, I just know Sky - I’m not aware that Sky don’t own 
the network. Sky do own their own BSS/CRM systems - but these are not core 
network components. The idea was to provide something similar for an OpenStack 
cloud provider, where they could support VCOs (Virtual Cloud Operators) on 
their cloud.

I agree there are multiple ways to provide such a capability - and it is often 
difficult to decide what should be within the “Openstack” scope, and what 
should be provided outside of it.

Henry

> On 16 Mar 2017, at 21:10, Lance Bragstad  wrote:
> 
> Hey folks,
> 
> The reseller use case [0] has been popping up frequently in various 
> discussions [1], including unified limits.
> 
> For those who are unfamiliar with the reseller concept, it came out of early 
> discussions regarding hierarchical multi-tenancy (HMT). It essentially allows 
> a certain level of opaqueness within project trees. This opaqueness would 
> make it easier for providers to "resell" infrastructure, without having 
> customers/providers see all the way up and down the project tree, hence it 
> was termed reseller. Keystone originally had some ideas of how to implement 
> this after the HMT implementation laid the ground work, but it was never 
> finished.
> 
> With it popping back up in conversations, I'm looking for folks who are 
> willing to represent the idea. Participating in this thread doesn't mean 
> you're on the hook for implementing it or anything like that. 
> 
> Are you interested in reseller and willing to provide use-cases?
> 
> 
> 
> [0] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description



Re: [openstack-dev] [keystone] removing Guang Yee (gyee) from keystone-core

2017-02-02 Thread Henry Nash
Thanks, Guang, for your valuable contributions.

Henry
> On 2 Feb 2017, at 05:13, Steve Martinelli  wrote:
> 
> Due to inactivity and a change in his day job, Guang was informed that he 
> would be removed from keystone-core, a change he understands and supports.
> 
> I'd like to publicly thank Guang for his years of service as a core member. 
> He juggled upstream and downstream responsibilities at HP while bringing real 
> world use cases to the table.
> 
> Thanks for everything Guang, o/
> 
> Steve




Re: [openstack-dev] [keystone] Stepping down from keystone core

2016-11-23 Thread Henry Nash
Echoing the comments of others. - thanks for all your hard work and expertise.

Henry
> On 23 Nov 2016, at 15:05, Lance Bragstad  wrote:
> 
> Thanks for all your hard work, Marek. It's been a pleasure working with you. 
> Best of luck and hopefully our paths cross in the future!
> 
> On Tue, Nov 22, 2016 at 7:57 PM, Steve Martinelli wrote:
> Marek, thanks for everything you've done in Keystone. It was loads of fun to 
> develop some of the early federation work with you back in the Icehouse 
> release! Good luck in whatever the future holds for you! 
> 
> On Tue, Nov 22, 2016 at 5:18 PM, Marek Denis wrote:
> Hi,
> 
> Due to my current responsibilities I cannot serve as keystone core anymore. I 
> also feel I should make some space for others who will surely bring new 
> quality to the OpenStack Identity Program.
> 
> It's been a great journey, I surely learned a lot and improved both my 
> technical and soft skills. I can only hope my contributions and reviews have 
> been useful and made OpenStack a little bit better.
> 
> I wish you all the best in the future.
> 
> With kind regards,
> Marek Denis
> 
> 



Re: [openstack-dev] [keystone] Pike PTL

2016-11-23 Thread Henry Nash
Steve,

It’s been a pleasure working with you as PTL - an excellent tenure. Enjoy 
taking some time back!

Henry
> On 21 Nov 2016, at 19:38, Steve Martinelli  wrote:
> 
> one of these days i'll learn how to spell :)
> 
> On Mon, Nov 21, 2016 at 12:52 PM, Steve Martinelli wrote:
> Keystoners, 
> 
> I do not intend to run for the PTL position of the Pike development cycle. 
> I'm sending this out early so I can work with folks interested in the role. 
> If you intend to run for PTL in Pike and are interested in learning the ropes 
> (or just want to hear more about what the role means) then shoot me an email.
> 
> It's been an unforgettable ride. Being PTL is a very rewarding experience, and I 
> encourage anyone interested to put their name forward. I'm not going away from 
> OpenStack, I just think three terms as PTL has been enough. It'll be nice to 
> have my evenings back :) 
> 
> To *all* the keystone contributors (cores and non-cores), thank you for all 
> your time and commitment. More importantly thank you for putting up with my 
> many questions, pings, pokes and -1s. Each of you are amazing and together 
> you make an awesome team. It has been an absolute pleasure to serve as PTL, 
> thank you for letting me do so.
> 
> stevemar
> 
> 
> 
> 
> Thanks for the idea Lana [1]
> [1] 
> http://lists.openstack.org/pipermail/openstack-docs/2016-November/009357.html 
> 



Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Henry Nash
Jay,

I agree with your distinction - and when I am referring to rolling upgrades for 
keystone I am referring to when you are running a cluster of keystones (for 
performance and/or redundancy), and you want to roll the upgrade across the 
cluster without creating downtime of the overall keystone service. Such a 
keystone cluster deployment will be common in large clouds - and prior to 
Newton, keystone did not support such a rolling upgrade (you had to take all 
the nodes down, upgrade the DB and then boot them all back up). In order to 
support such a rolling upgrade you either need to have code that can work on 
different DB versions (either explicitly or via versioned objects), or you hide 
the schema changes by “data synchronisation via Triggers”, which is where this 
whole thread came from.

Henry
> On 14 Sep 2016, at 23:08, Jay Pipes <jaypi...@gmail.com> wrote:
> 
> On 09/01/2016 05:29 AM, Henry Nash wrote:
>> So as the person who drove the rolling upgrade requirements into
>> keystone in this cycle (because we have real customers that need it),
>> and having first written the keystone upgrade process to be
>> “versioned object ready” (because I assumed we would do this the same
>> as everyone else), and subsequently re-written it to be “DB Trigger
>> ready”…and written migration scripts for both these cases for the (in
>> fact very minor) DB changes that keystone has in Newton…I guess I
>> should also weigh in here :-)
> 
> Sorry for delayed response. PTO and all... I'd just like to make a 
> clarification here. Henry, you are not referring to *rolling upgrades* but 
> rather *online database migrations*. There's an important distinction between 
> the two concepts.
> 
> Online schema migrations, as discussed in this thread, are all about 
> minimizing the time that a database server is locked or otherwise busy 
> performing the tasks of changing SQL schemas and moving the underlying stored 
> data from their old location/name to their new location/name. As noted in 
> this thread, there's numerous ways of reducing the downtime experienced 
> during these data and schema migrations.
> 
> Rolling upgrades are not the same thing, however. What rolling upgrades refer 
> to is the ability of a *distributed system* to have its distributed component 
> services running different versions of the software and still be able to 
> communicate with the other components of the system. This time period during 
> which the components of the distributed system may run different versions of 
> the software may be quite lengthy (days or weeks long). The "rolling" part of 
> "rolling upgrade" refers to the fact that in a distributed system of 
> thousands of components or nodes, the upgraded software must be "rolled out" 
> to those thousands of nodes over a period of time.
> 
> Glance and Keystone do not participate in a rolling upgrade, because Keystone 
> and Glance do not have a distributed component architecture. Online data 
> migrations will reduce total downtime experienced during an *overall upgrade 
> procedure* for an OpenStack cloud, but Nova, Neutron and Cinder are the only 
> parts of OpenStack that are going to participate in a rolling upgrade because 
> they are the services that are distributed across all the many compute nodes.
> 
> Best,
> -jay
> 




Re: [openstack-dev] [keystone] new core reviewer (rderose)

2016-09-01 Thread Henry Nash
Welcome, Ron!

Henry

> On 1 Sep 2016, at 15:44, Steve Martinelli  wrote:
> 
> I want to welcome Ron De Rose (rderose) to the Keystone core team. In a short 
> time Ron has shown a very positive impact. Ron has contributed feature work 
> for shadowing LDAP and federated users, as well as enhancing password support 
> for SQL users. Implementing these features and picking up various bugs along 
> the way has helped Ron to understand the keystone code base. As a result he 
> is able to contribute to the team with quality code reviews. 
> 
> Thanks for all your hard work Ron, we sincerely appreciate it.
> 
> Steve



Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Henry Nash
So as the person who drove the rolling upgrade requirements into keystone in 
this cycle (because we have real customers that need it), and having first 
written the keystone upgrade process to be “versioned object ready” (because I 
assumed we would do this the same as everyone else), and subsequently 
re-written it to be “DB Trigger ready”…and written migration scripts for both 
these cases for the (in fact very minor) DB changes that keystone has in 
Newton…I guess I should also weigh in here :-)

For me, the argument comes down to:

a) Is the pain that needs to be cured by the rolling upgrade requirement broadly 
in the same place in the various projects (i.e. nova, glance, keystone etc.)? 
If it is, then working towards a common solution is always preferable (whatever 
that solution is)
b) I would characterise the difference between the trigger approach, the 
versioned objects approach and the “in-app” approach as: do we want a small 
amount of very nasty complexity, or do we spread that complexity out to be not as 
bad, but over a broader area? Probably fewer people can (successfully) write 
the nasty trigger complexity than can, say, write the “do it all in the 
app” code. LOC (which, of course, isn’t always a good measure) is also 
reflected in this characterisation, with the trigger code having probably the 
fewest LOC, and the app code having the greatest. 
c) I don’t really follow the argument that somehow the trigger code in 
migrations is less desirable because we use higher level sqla abstractions in 
our main-line code - I’ve always seen migration as different and expected that 
we might have to do strange things there. Further, we should be aware of the 
time-periods…the migration cycle is a small % of the elapsed time the cloud is 
running (well, hopefully) - so again, do we solve the “issues of migration” as 
part of the migration cycle (which is what the trigger approach does) or make 
our code be (effectively) continually migration aware (using versioned objects 
or in-app code)
d) The actual process (for an operator) is simpler for a rolling upgrade 
process with Triggers than the alternative (since you don’t require several of 
the checkpoints, e.g. when you know you can move out of compatibility mode 
etc.). Operator error is also a cause of problems in upgrades (especially as 
the complexity of a cloud increases).

From a purely keystone perspective, my gut feeling is that actually the trigger 
approach is likely to lead to a more robust, not less, solution - due to the 
fact that we solve the very specific problems of a given migration (i.e. the need 
to keep column A in sync with column B) for a short period of time, right at the 
point of pain, with well established techniques - albeit complex ones 
that need experienced coders in those techniques. I actually prefer the small 
locality of complexity (marked with “there be dragons here, be careful”), as 
opposed to spreading medium pain over a large area, which by definition is 
updated by many…and may do the wrong thing inadvertently. It is simpler for 
operators.
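To make the “keep column A in sync with column B” case concrete, here is a minimal illustration of the technique using in-memory SQLite (a real keystone migration would target the MySQL/PostgreSQL trigger dialects; table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, name_new TEXT);
    -- During the rolling-upgrade window, old-release code still writes
    -- `name`; the trigger mirrors the value into the new column so
    -- new-release code reading `name_new` sees consistent data.
    CREATE TRIGGER users_sync_name AFTER INSERT ON users
    BEGIN
        UPDATE users SET name_new = NEW.name WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
row = conn.execute("SELECT name, name_new FROM users WHERE id = 1").fetchone()
```

Once every node runs the new release, a follow-up migration drops the trigger and the old column, and no application code ever had to be migration-aware.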

I do recognise, however, the “let’s not do different stuff for a core project 
like keystone” argument as a powerful one. I just don’t know how to square this 
with the fact that although I started in the “versioned objects camp”, having 
worked through many of the issues I have come to believe that the Trigger 
approach will be more reliable overall for this specific use case. From the 
other reactions to this thread, I don’t detect a lot of support for the Trigger 
approach becoming our overall, cross-project solution.

The actual migrations in Keystone needed for Newton are minor, so one 
possibility is that we use keystone as a guinea pig for this approach in Newton…if 
we had to undo this in a subsequent release, we would not be talking about rafts of 
migration code to redo.

Henry



> On 1 Sep 2016, at 09:45, Robert Collins  wrote:
> 
> On 31 August 2016 at 01:57, Clint Byrum  wrote:
>> 
>> 
>> It's simple, these are the holy SQL schema commandments:
>> 
>> Don't delete columns, ignore them.
>> Don't change columns, create new ones.
>> When you create a column, give it a default that makes sense.
> 
> I'm sure you're aware of this but I think its worth clarifying for non
> DBAish folk: non-NULL values can change a DDL statements execution
> time from O(1) to O(N) depending on the DB in use. E.g. for Postgres
> DDL requires an exclusive table lock, and adding a column with any
> non-NULL value (including constants) requires calculating a new value
> for every row, vs just updating the metadata - see
> https://www.postgresql.org/docs/9.5/static/sql-altertable.html
> """
> When a column is added with ADD COLUMN, all existing rows in the table
> are initialized with the column's default value (NULL if no DEFAULT
> clause is specified). If there is no DEFAULT clause, this is merely a
> metadata change and does not require any immediate 

Re: [openstack-dev] [cinder] [keystone] cinder quota behavior differences after Keystone mitaka upgrade

2016-06-28 Thread Henry Nash
Hi Matt,

So the keystone changes were intentional. From Mitaka onwards, a domain is 
represented as a project which is “acting as a domain” (it has an attribute 
“is_domain” set to true). The situation you describe - where projects that were 
previously top-level now have the project acting as the default domain as their parent - is 
what I would expect to happen after the update.

During Mitaka development, we  worked with the cinder folks - who were to 
re-designing their quota code anyway - and this was modified to take account of 
the project structure. I’m not sure if the quota semantics you see are what was 
intended (I’ll let the cinder team comment). Code can, if desired, distinguish 
projects that are at the top level from projects somewhere further down the 
hierarchy, by looking at the parent and seeing if it is a project acting as a 
domain.
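To make that check concrete, here is a rough sketch (hypothetical helper, with plain dicts standing in for responses from keystone's v3 projects API - real code would go through the keystone client):

```python
# A project is "top level" iff its parent is a project acting as a domain
# (is_domain == True), per the Mitaka model described above.

def is_top_level(project, projects_by_id):
    parent = projects_by_id.get(project["parent_id"])
    return parent is not None and parent["is_domain"]

# Illustrative data: the default domain's "acting as a domain" project,
# a top-level project, and one further down the hierarchy.
projects_by_id = {
    "default": {"id": "default", "parent_id": None, "is_domain": True},
    "admin":   {"id": "admin", "parent_id": "default", "is_domain": False},
    "child":   {"id": "child", "parent_id": "admin", "is_domain": False},
}

print(is_top_level(projects_by_id["admin"], projects_by_id))  # True
print(is_top_level(projects_by_id["child"], projects_by_id))  # False
```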

Henry
keystone core
> On 27 Jun 2016, at 17:13, Matt Fischer  wrote:
> 
> We upgraded our dev environment last week to Keystone stable/mitaka. Since 
> then we're unable to show or set quotas on projects of which the admin is not 
> a member. Looking at the cinder code, it seems that cinder is pulling a 
> project list and attempting to determine a hierarchy.  On Liberty Keystone, 
> projects seem to lack parents:
> 
>  id=9e839870dd0d4a2f96f9d71b7e7c5a4e, is_domain=False, links={u'self': 
> u'https://liberty-endpoint:5000/v3/projects/9e839870dd0d4a2f96f9d71b7e7c5a4e' 
> },
>  name=admin, parent_id=None, subtree=None>
> 
> In Mitaka, it seems that projects are children of the default domain:
> 
>  id=4764ba822ecb43e582794b875751924c, is_domain=False, links={u'self': 
> u'http://mitaka-endpoint:5000/v3/projects/4764ba822ecb43e582794b875751924c' 
> }, 
> name=admin, parent_id=default, subtree=None>
> 
> In Liberty since all projects were parentless, the authorize_* code blocks 
> were skipped since both conditionals were false:
> 
> https://github.com/openstack/cinder/blob/stable/liberty/cinder/api/contrib/quotas.py#L174-L191
>  
> 
> 
> But now in Mitaka, the code is run, and it fails out since the projects are 
> "brothers", both with the parent of the default domain, but not 
> hierarchically related.
> 
> Previously it was a useful ability for us to be able to (as admins) set and 
> view  quotas for cinder projects. Now we need to scope into the user's 
> project to even be able to view their quotas, much less change them. This 
> seems broken, but I'm not sure where the issue is or why the keystone 
> behavior changed. Is this the expected behavior?
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-20 Thread Henry Nash
Hi

So following discussion on this, I have modified the spec to try and take 
account of the various concerns etc. See: 
https://review.openstack.org/#/c/318605/

For reference, here is an extract from this spec that summarizes the level of 
compatibility being proposed (A “3.6 client” refers to a Mitaka or earlier 
client speaking either directly to the keystone server, or via our client 
libraries):

Summary of Compatibility Impacts
--------------------------------

Here is a summary of the impact on clients when the server they are talking to
is upgraded to Newton:

* For a 3.6 client the names of project entities created before the server
  upgrade will not change (i.e. they will not contain the path).
* For either a 3.6 or a 3.7 client, any project entities created after the
  server is upgraded will be returned with names including the path.
* A 3.6 client will continue to be able to scope an auth to a project that
  existed before the upgrade without specifying a path, wherever that project
  exists in the hierarchy. To scope to a project created after the upgrade, the
  name (by definition) includes the path.
* Code written that extracts the project name from any of the project APIs that
  return project entities, and then plugs that name into an auth scope, will
  continue to work unmodified, irrespective of the version of the client or
  server.
* A 3.7 client will always see project names that include the path,
  irrespective of whether those projects were created before or after the
  server was upgraded. The implication of this is that once a client has
  upgraded to 3.7, then if the name of the project is being entered by a user
  (say for auth scope), then this name must always contain the path. While,
  technically, we could try to maintain the ability to access projects created
  before the server upgrade via their non-path name, it would get very
  confusing for users having to remember which projects you could or couldn't
  access this way.
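The fourth bullet above can be sketched as a round-trip (the JSON shape follows the Identity v3 auth API; the pathed example name and its '/' separator are illustrative assumptions):

```python
def auth_scope(domain_name, project_name):
    # Build the scope section of a v3 auth request from a project name
    # taken verbatim from a project entity returned by the server.
    return {
        "scope": {
            "project": {
                "name": project_name,
                "domain": {"name": domain_name},
            }
        }
    }

# Works identically for a pre-upgrade name ("test") and a post-upgrade
# pathed name - the client never parses the name, it just echoes it back.
for name in ("test", "development/myproject/test"):
    scope = auth_scope("Default", name)
    print(scope["scope"]["project"]["name"])
```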

Feel free to raise any concerns you have about the above.

Henry

> On 14 Jun 2016, at 16:22, Morgan Fainberg <morgan.fainb...@gmail.com> wrote:
> 
> On Jun 14, 2016 00:46, "Henry Nash" <henryna...@mac.com 
> <mailto:henryna...@mac.com>> wrote:
> 
> 
>> On 14 Jun 2016, at 07:34, Morgan Fainberg <morgan.fainb...@gmail.com 
>> <mailto:morgan.fainb...@gmail.com>> wrote:
>> 
>> 
>> 
>> On Mon, Jun 13, 2016 at 3:30 PM, Henry Nash <henryna...@mac.com 
>> <mailto:henryna...@mac.com>> wrote:
>> So, I think it depends what level of compatibility we are aiming at. Let me 
>> articulate them, and we can agree which we want:
>> 
>> C1) In all versions of our APIs today (v2 and v3.0 to v3.6), you have 
>> been able to issue an auth request which used project/tenant name as the 
>> scoping directive (with v3 you need a domain component as well, but that’s 
>> not relevant for this discussion). In these APIs, we absolutely expect that 
>> if you could issue an auth request to, say, project “test”, in, say, v3.X, 
>> then you could absolutely issue the exact same command at v3.(X+1). This has 
>> remained true, even when we introduced project hierarchies, i.e.: if I 
>> create:
>> 
>> /development/myproject/test
>> 
>> ...then I can still scope directly to the test project by simply specifying 
>> “test” as the project name (since, of course, all project names must still 
>> be unique in the domain). We never want to break this for so long as we 
>> formally support any APIs that once allowed this.
>> 
>> C2) To aid you issuing an auth request scoped by project (either name or 
>> id), we support a special API as part of the auth url (GET/auth/projects) 
>> that lists the projects the caller *could* scope to (i.e. those they have 
>> any kind of role on). You can take the “name” or “id” returned by this API 
>> and plug it directly into the auth request. Again for any API we currently 
>> support, we can’t break this.
>> 
>> C3) The name attribute of a project is its node-name in the hierarchy. If we 
>> decide to change this in a future API, we would not want a client using the 
>> existing API to get surprised and suddenly receive a path instead of the 
>> just the node-name (e.g. what if this was a UI of some type). 
>> 
>> Given all the above, there is no solution that can keep the above all true 
>> and allow more than one project of the same name in, say, v3.7 of the API. 
>> Even if we relaxed C2 and C3 - C1 can never be guaranteed to be still 
>> supported. Neither of the original proposed solutions can address this 
>> (since it is a data modelling problem, not an API problem).
>> 
>> However, given that we will h

Re: [openstack-dev] Version header for OpenStack microversion support

2016-06-18 Thread Henry Nash
…I think it is so you can have a header in a request that, once issued, can be 
passed from service to service, e.g.:

OpenStack-API-Version: identity 3.7, compute 2.11
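Each service down the chain then picks out its own clause. A minimal parser sketch (illustrative, not keystone code):

```python
# Split a combined OpenStack-API-Version header value into
# {service_type: version}, so each service in the call chain can
# find the clause addressed to it.

def parse_api_versions(value):
    versions = {}
    for clause in value.split(","):
        service, _, version = clause.strip().rpartition(" ")
        if service:
            versions[service] = version
    return versions

print(parse_api_versions("identity 3.7, compute 2.11"))
# {'identity': '3.7', 'compute': '2.11'}
```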

Henry

> On 18 Jun 2016, at 11:32, Jamie Lennox <jamielen...@gmail.com> wrote:
> 
> Quick question: why do we need the service type or name in there? You really 
> should know what API you're talking to already and it's just something that 
> makes it more difficult to handle all the different APIs in a common way.
> 
> On Jun 18, 2016 8:25 PM, "Steve Martinelli" <s.martine...@gmail.com 
> <mailto:s.martine...@gmail.com>> wrote:
> Looks like Manila is using the service name instead of type 
> (X-OpenStack-Manila-API-Version) according to this link anyway: 
> http://docs.openstack.org/developer/manila/devref/api_microversion_dev.html 
> Keystone can follow the cross project spec and use the service type (Identity 
> instead of Keystone).
> 
> On Jun 17, 2016 12:44 PM, "Ed Leafe" <e...@leafe.com <mailto:e...@leafe.com>> 
> wrote:
> On Jun 17, 2016, at 11:29 AM, Henry Nash <henryna...@mac.com 
> <mailto:henryna...@mac.com>> wrote:
> 
> > We are currently in the process of implementing microversion support in 
> > keystone - and are obviously trying to follow the cross-project spec for 
> > this 
> > (http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html).
> >
> > One thing I noticed was that the header specified in this spec is of the 
> > form:
> >
> > OpenStack-API-Version: [SERVICE_TYPE] [X,Y]
> >
> > for example:
> >
> > OpenStack-API-Version: identity 3.7
> >
> > However, from what I can see of the current implementations of 
> > microversioning in OpenStack (Nova, Manila), they use service-specific 
> > headers, e.g.
> >
> > X-OpenStack-Nova-API-Version: 2.12
> >
> > My question is whether there is a plan to converge on the generalized header 
> > format….or are we keeping the service-specific headers? I’d obviously 
> > like to implement the correct one for keystone.
> 
> Yes, the plan is to converge on the more generic headers. The Nova headers 
> (don’t know about Manila) pre-date the API WG spec, and were the motivation 
> for development of that spec. We’ve even made it possible to accept both 
> header formats [0] until things can be migrated to the recommended format.
> 
> [0] https://review.openstack.org/#/c/300077/ 
> 
> -- Ed Leafe
> 
> 
> 
> 
> 
> 


[openstack-dev] Version header for OpenStack microversion support

2016-06-17 Thread Henry Nash
Hi

We are currently in the process of implementing microversion support in 
keystone - and are obviously trying to follow the cross-project spec for this 
(http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html).

One thing I noticed was that the header specified in this spec is of the form:

OpenStack-API-Version: [SERVICE_TYPE] [X,Y]

for example:

OpenStack-API-Version: identity 3.7

However, from what I can see of the current implementations of 
microversioning in OpenStack (Nova, Manila), they use service-specific 
headers, e.g.

X-OpenStack-Nova-API-Version: 2.12

My question is whether there is a plan to converge on the generalized header 
format….or are we keeping the service-specific headers? I’d obviously like to 
implement the correct one for keystone.

Thanks

Henry







Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-14 Thread Henry Nash

> On 14 Jun 2016, at 07:34, Morgan Fainberg <morgan.fainb...@gmail.com> wrote:
> 
> 
> 
> On Mon, Jun 13, 2016 at 3:30 PM, Henry Nash <henryna...@mac.com 
> <mailto:henryna...@mac.com>> wrote:
> So, I think it depends what level of compatibility we are aiming at. Let me 
> articulate them, and we can agree which we want:
> 
> C1) In all versions of our APIs today (v2 and v3.0 to v3.6), you have been 
> able to issue an auth request which used project/tenant name as the scoping 
> directive (with v3 you need a domain component as well, but that’s not 
> relevant for this discussion). In these APIs, we absolutely expect that if 
> you could issue an auth request to, say, project “test”, in, say, v3.X, then 
> you could absolutely issue the exact same command at v3.(X+1). This has 
> remained true, even when we introduced project hierarchies, i.e.: if I create:
> 
> /development/myproject/test
> 
> ...then I can still scope directly to the test project by simply specifying 
> “test” as the project name (since, of course, all project names must still be 
> unique in the domain). We never want to break this for so long as we formally 
> support any APIs that once allowed this.
> 
> C2) To aid you issuing an auth request scoped by project (either name or id), 
> we support a special API as part of the auth url (GET/auth/projects) that 
> lists the projects the caller *could* scope to (i.e. those they have any kind 
> of role on). You can take the “name” or “id” returned by this API and plug it 
> directly into the auth request. Again for any API we currently support, we 
> can’t break this.
> 
> C3) The name attribute of a project is its node-name in the hierarchy. If we 
> decide to change this in a future API, we would not want a client using the 
> existing API to get surprised and suddenly receive a path instead of the just 
> the node-name (e.g. what if this was a UI of some type). 
> 
> Given all the above, there is no solution that can keep the above all true 
> and allow more than one project of the same name in, say, v3.7 of the API. 
> Even if we relaxed C2 and C3 - C1 can never be guaranteed to be still 
> supported. Neither of the original proposed solutions can address this (since 
> it is a data modelling problem, not an API problem).
> 
> However, given that we will have, for the first time, the ability to 
> microversion the Identity API starting with 3.7, there are things we can do 
> to start us down this path. Let me re-articulate the options I am proposing:
> 
> Option 1A) In v3.7 we add a ‘path_name' attribute to a project entity, which 
> is hence returned by any API that returns a project entity. The ‘path_name' 
> attribute will contain the full path name, including the project itself. 
> (Note that clients speaking 3.6 and earlier will not see this new attribute). 
> Further, for clients speaking 3.7 and later, we add support to allow a 
> ‘path_name' (as an alternative to ‘name' or ‘id') to be used in auth scoping. 
> We do not (yet) relax any uniqueness constraints, but mark API 3.6 and 
> earlier as deprecated, as well as using the ‘name’ attribute in the auth 
> request. (we still support all these, we just mark them as deprecated). At 
> some time in the future (e.g. 3.8), we remove support for using ‘name’ for 
> auth, insisting on the use of ‘path_name’ instead. Sometime later (e.g. 3.10) 
> we remove support for 3.8 and earlier. Then and only then, do we relax the 
> uniqueness constraint allowing projects with duplicate node-names (but with 
> different parents).
> 
> Option 1B) The same as 1A, but we insist on path_name use in auth in v3.7 
> (i.e. no grace-period for still using just ’name', instead relying on the 
> fact that 3.6 clients will still work just fine). Then later (e.g. perhaps 
> v3.9), we remove support for v3.6 and before…and relax the uniqueness 
> constraint.
> 
> 
> Let me say the assumption that we are "removing" 3.6 should be stopped right 
> now. I don't want to embark on the "when we remove this" as an option or 
> discuss how we remove previous versions. Please lets assume for the sake of 
> this conversation unless we have a major API version increase (API v4 and do 
> not expose v4 projects via v3 API) this is unlikely happen. Deprecated or 
> not, planning the removal of current supported API auth functionality is not 
> on the table. In v3 we are not going to "relax" the uniqueness constraint in 
> the foreseeable future. Just assume v3.6 is going to live forever for now and 
> we can revisit when/if limits on microversion lower bounds are addressed in 
> OpenStack with TC direction/guidance.

Why should we not be able to remove a microversion (once keystone properl

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-13 Thread Henry Nash
 of a project entity may not 
contain what they expect). For 1A no changes are required at all for a client 
to work with 3.7 and maintain the current experience, although changes ARE of 
course required to start moving away from using the non-pathed ‘name’ 
attribute, but that doesn’t have to be done straight away, we give them a grace 
cycle. For 1B, you need to make changes to support 3.7 (since a path is 
required for auth).

As I have said before, my preference is Option 1, since I think it results in a 
more logical end position, and neither 1 nor 2 gets us there any more quickly. 
For option 1, I prefer the more gradual approach of 1A, just so that we give 
clients a grace period. Given we need multiple cycles to achieve any of the 
options, let’s decide the final conceptual model we want - and then move 
towards it. Option 1’s conceptual model is that ‘name’ remains the same and we 
add a separate ‘path’ attribute; Option 2’s redefines ‘name’ to always be a 
full path.
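To make the two end states concrete (all values and the '/' separator are illustrative assumptions):

```python
# Option 1 keeps 'name' as the node name and adds a separate 'path';
# Option 2 redefines 'name' to be the full path.

parents = ["development", "myproject"]
node_name = "test"
full_path = "/".join(parents + [node_name])

option1 = {"name": node_name, "path": full_path}
option2 = {"name": full_path}

print(option1)  # {'name': 'test', 'path': 'development/myproject/test'}
print(option2)  # {'name': 'development/myproject/test'}
```

Under Option 1 a client that only ever looks at 'name' sees no change; under Option 2 every consumer of 'name' sees pathed values.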

Henry


> On 10 Jun 2016, at 18:48, Morgan Fainberg <morgan.fainb...@gmail.com> wrote:
> 
> 
> 
> On Fri, Jun 10, 2016 at 6:37 AM, Henry Nash <henryna...@mac.com 
> <mailto:henryna...@mac.com>> wrote:
> On further reflection, it seems to me that we can never simply enable either 
> of these approaches in a single release. Even a v4.0 version of the API 
> doesn’t help - since presumably a server supporting v4 would want to be able 
> to support v3.x for a significant time; and, as already discussed, as soon as 
> you allow multiple nodes to have the same name, you can no longer 
> guarantee to support the current API.
> 
> Hence the only thing I think we can do (if we really do want to change the 
> current functionality) is to do this over several releases with a typical 
> deprecation cycle, e.g.
> 
> 1) At release 3.7 we allow you to (optionally) specify path names for 
> auth….but make no changes to the uniqueness constraints. We also change the 
> GET /auth/projects to return a path name. However, you can still auth exactly 
> the way we do today (since there will always only be a single project of a 
> given node-name). If however, you do auth without a path (to a project that 
> isn’t a top level project), we log a warning to say this is deprecated (2 
> cycles, 4 cycles?)
> 2) If you connect with a 3.6 client, then you get the same as today for GET 
> /auth/projects and cannot use a path name to auth.
> 3) At some time in the future, we deprecate the “auth without a path” 
> capability. We can debate as to whether this has to be a major release.
> 
> If we take this gradual approach, I would be pushing for the “relax project 
> name constraints” approach…since I believe this leads to a cleaner eventual 
> solution (and there is no particular advantage with the hierarchical naming 
> approach) - and (until the end of the deprecation) there is no break to the 
> existing API.
> 
> Henry
> 
> 
> How do you handle relaxed project name constraints without completely 
> breaking 3.6 auth - regardless of future microversion that requires the full 
> path. This is a major API change and will result in a very complex matrix of 
> project name mappings (old projects that can be accessed without the full 
> path, new that must always have the path)?
> 
> Simply put, I do not see relaxing project name constraints as viable without 
> a major API change and projects that simply are unavailable for scoping a 
> token to under the base api version (pre-microversions) of 3.6
> 
> I am certain that if all projects post API version 3.6 are created with the 
> full-path name only and that is how they are represented to the user for 
> auth, we get both things for free. Old projects pre-full-path would need 
> optional compatibility for deconstructing a full-path for them.  Basically 
> you end up with "path" and "name", in old projects these differ, in new 
> projects they are the same.  No conflicts, not breaking "currently working 
> auth", no "major API version" needed.
> 
> --Morgan
>  
>> On 7 Jun 2016, at 09:47, Henry Nash <henryna...@mac.com 
>> <mailto:henryna...@mac.com>> wrote:
>> 
>> OK, so thanks for the feedback - understand the message.
>> 
>> However, in terms of compatibility, the one thing that concerns me about the 
>> hierarchical naming approach is that even with microversioning, we might 
>> still surprise a client. An unmodified client (i.e. doesn’t understand 3.7) 
>> would still see a change in the data being returned (the project names have 
>> suddenly become full path names). We have to return this even if they don’t 
>> ask for 3.7, since otherwise there is no difference between this approach 
>> and relax

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-10 Thread Henry Nash
On further reflection, it seems to me that we can never simply enable either of 
these approaches in a single release. Even a v4.0 version of the API doesn’t 
help - since presumably a server supporting v4 would want to be able to support 
v3.x for a significant time; and, as already discussed, as soon as you allow 
multiple nodes to have the same name, you can no longer guarantee to 
support the current API.

Hence the only thing I think we can do (if we really do want to change the 
current functionality) is to do this over several releases with a typical 
deprecation cycle, e.g.

1) At release 3.7 we allow you to (optionally) specify path names for auth….but 
make no changes to the uniqueness constraints. We also change the GET 
/auth/projects to return a path name. However, you can still auth exactly the 
way we do today (since there will always only be a single project of a given 
node-name). If however, you do auth without a path (to a project that isn’t a 
top level project), we log a warning to say this is deprecated (2 cycles, 4 
cycles?)
2) If you connect with a 3.6 client, then you get the same as today for GET 
/auth/projects and cannot use a path name to auth.
3) At some time in the future, we deprecate the “auth without a path” 
capability. We can debate as to whether this has to be a major release.

If we take this gradual approach, I would be pushing for the “relax project 
name constraints” approach…since I believe this leads to a cleaner eventual 
solution (and there is no particular advantage with the hierarchical naming 
approach) - and (until the end of the deprecation) there is no break to the 
existing API.
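Step 1's deprecation warning could look roughly like this (names and the '/' separator are illustrative, not keystone internals):

```python
import warnings

def check_auth_scope(project_name, is_top_level_project):
    # Pathless auth still resolves, but emit a deprecation warning when
    # the target project sits below the top level of the hierarchy.
    if "/" not in project_name and not is_top_level_project:
        warnings.warn(
            "scoping by project name without a path is deprecated",
            DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_auth_scope("test", is_top_level_project=False)  # warns
    check_auth_scope("division/projectA/test", is_top_level_project=False)

print(len(caught))  # 1
```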

Henry
> On 7 Jun 2016, at 09:47, Henry Nash <henryna...@mac.com> wrote:
> 
> OK, so thanks for the feedback - understand the message.
> 
> However, in terms of compatibility, the one thing that concerns me about the 
> hierarchical naming approach is that even with microversioning, we might still 
> surprise a client. An unmodified client (i.e. doesn’t understand 3.7) would 
> still see a change in the data being returned (the project names have 
> suddenly become full path names). We have to return this even if they don’t 
> ask for 3.7, since otherwise there is no difference between this approach and 
> relaxing the project naming in terms of trying to prevent auth breakages.
> 
> In more detail:
> 
> 1) Both approaches were planned to return the path name (instead of the node 
> name) in GET /auth/projects - i.e. the API you are meant to use to find out 
> what you can scope to
> 2) Both approaches were planned to accept the path name in the auth request 
> block
> 3) The difference in hierarchical naming is that if I do a regular GET 
> /project(s) I also see the full path name as the “project name”
> 
> If we don’t do 3), then code that somehow authenticates, and then uses the 
> regular GET /project(s) calls to find a project name and then re-scopes (or 
> re-auths) to that name, will fail if the project they want is not a top level 
> project. However, the flip side is that if there is code that uses these same 
> calls to, say, display projects to the user (e.g. a home grown UI) - then it 
> might get confused until it supports 3.7 (i.e. asking for the old 
> microversion won’t help it) since all the names include the hierarchical path.
> 
> Just want to make sure we understand the implications….
> 
> Henry
> 
>> On 4 Jun 2016, at 08:34, Monty Taylor <mord...@inaugust.com 
>> <mailto:mord...@inaugust.com>> wrote:
>> 
>> On 06/04/2016 01:53 AM, Morgan Fainberg wrote:
>>> 
>>> On Jun 3, 2016 12:42, "Lance Bragstad" <lbrags...@gmail.com 
>>> <mailto:lbrags...@gmail.com>
>>> <mailto:lbrags...@gmail.com <mailto:lbrags...@gmail.com>>> wrote:
>>>> 
>>>> 
>>>> 
>>>> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash <henryna...@mac.com 
>>>> <mailto:henryna...@mac.com>
>>> <mailto:henryna...@mac.com <mailto:henryna...@mac.com>>> wrote:
>>>>> 
>>>>> 
>>>>>> On 3 Jun 2016, at 16:38, Lance Bragstad <lbrags...@gmail.com 
>>>>>> <mailto:lbrags...@gmail.com>
>>> <mailto:lbrags...@gmail.com <mailto:lbrags...@gmail.com>>> wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash <henryna...@mac.com 
>>>>>> <mailto:henryna...@mac.com>
>>> <mailto:henryna...@mac.com <mailto:henryna...@mac.com>>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>>> On 3 Jun 2016, at 01:22, Adam Young <ayo...@redhat

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-07 Thread Henry Nash
OK, so thanks for the feedback - understand the message.

However, in terms of compatibility, the one thing that concerns me about the 
hierarchical naming approach is that even with microversioning, we might still 
surprise a client. An unmodified client (i.e. doesn’t understand 3.7) would 
still see a change in the data being returned (the project names have suddenly 
become full path names). We have to return this even if they don’t ask for 3.7, 
since otherwise there is no difference between this approach and relaxing the 
project naming in terms of trying to prevent auth breakages.

In more detail:

1) Both approaches were planned to return the path name (instead of the node 
name) in GET /auth/projects - i.e. the API you are meant to use to find out 
what you can scope to
2) Both approaches were planned to accept the path name in the auth request 
block
3) The difference in hierarchical naming is that if I do a regular GET 
/project(s) I also see the full path name as the “project name”

If we don’t do 3), then code that somehow authenticates, and then uses the 
regular GET /project(s) calls to find a project name and then re-scopes (or 
re-auths) to that name, will fail if the project they want is not a top level 
project. However, the flip side is that if there is code that uses these same 
calls to, say, display projects to the user (e.g. a home grown UI) - then it 
might get confused until it supports 3.7 (i.e. asking for the old microversion 
won’t help it) since all the names include the hierarchical path.

Just want to make sure we understand the implications….
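A toy illustration of that re-scope breakage (all data invented for illustration):

```python
# Under hierarchical naming, GET /projects returns pathed names, so an
# unmodified client that feeds a returned name straight back in as a
# plain node name misses any project below the top level.

node_names = {"development", "myproject", "test"}  # what pathless auth matches
listed_name = "development/myproject/test"         # what 3.7 GET /projects returns

print(listed_name in node_names)                     # False - blind re-auth fails
print(listed_name.rsplit("/", 1)[-1] in node_names)  # True - a 3.7-aware client copes
```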

Henry

> On 4 Jun 2016, at 08:34, Monty Taylor <mord...@inaugust.com> wrote:
> 
> On 06/04/2016 01:53 AM, Morgan Fainberg wrote:
>> 
>> On Jun 3, 2016 12:42, "Lance Bragstad" <lbrags...@gmail.com 
>> <mailto:lbrags...@gmail.com>
>> <mailto:lbrags...@gmail.com <mailto:lbrags...@gmail.com>>> wrote:
>>> 
>>> 
>>> 
>>> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash <henryna...@mac.com 
>>> <mailto:henryna...@mac.com>
>> <mailto:henryna...@mac.com <mailto:henryna...@mac.com>>> wrote:
>>>> 
>>>> 
>>>>> On 3 Jun 2016, at 16:38, Lance Bragstad <lbrags...@gmail.com 
>>>>> <mailto:lbrags...@gmail.com>
>> <mailto:lbrags...@gmail.com <mailto:lbrags...@gmail.com>>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash <henryna...@mac.com 
>>>>> <mailto:henryna...@mac.com>
>> <mailto:henryna...@mac.com <mailto:henryna...@mac.com>>> wrote:
>>>>>> 
>>>>>> 
>>>>>>> On 3 Jun 2016, at 01:22, Adam Young <ayo...@redhat.com 
>>>>>>> <mailto:ayo...@redhat.com>
>> <mailto:ayo...@redhat.com <mailto:ayo...@redhat.com>>> wrote:
>>>>>>> 
>>>>>>> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>>>>>>> 
>>>>>>>> Hi
>>>>>>>> 
>>>>>>>> As you know, I have been working on specs that change the way we
>> handle the uniqueness of project names in Newton. The goal of this is to
>> better support project hierarchies, which as they stand today are
>> restrictive in that all project names within a domain must be unique,
>> irrespective of where in the hierarchy that projects sits (unlike, say,
>> the unix directory structure where a node name only has to be unique
>> within its parent). Such a restriction is particularly problematic when
>> enterprise start modelling things like test, QA and production as
>> branches of a project hierarchy, e.g.:
>>>>>>>> 
>>>>>>>> /mydivision/projectA/dev
>>>>>>>> /mydivision/projectA/QA
>>>>>>>> /mydivision/projectA/prod
>>>>>>>> /mydivision/projectB/dev
>>>>>>>> /mydivision/projectB/QA
>>>>>>>> /mydivision/projectB/prod
>>>>>>>> 
>>>>>>>> Obviously the idea of a project name (née tenant) being unique
>> has been around since near the beginning of (OpenStack) time, so we must
>> be cautious. There are two alternative specs proposed:
>>>>>>>> 
>>>>>>>> 1) Relax project name
>> constraints: https://review.openstack.org/#/c/310048/ 
>>>>>>>> 2) Hierarchical project
>> naming: https://review.openstack.org/#/c/318605/
>>>>>>>> 
>>>>>>>> First, here’s what they 

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash
Both proposals allow you to provide a path as the project name in auth (so you 
can still use domain name + project path name). The difference between the two 
is whether you formally represent the path in the name attribute of a project, 
i.e. when it is returned by GET /project. The relax-name-constraints approach 
works like the linux dir tree: if I do an ‘ls’ I get the node names of all the 
entities in that directory, but I can still do 'cd /a/b/c' to jump right to 
where I want.
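As a toy sketch of that analogy (invented data, not keystone code):

```python
# Listing shows node names only, while a full path can still address a
# node directly - the relax-name-constraints model for projects.

tree = {"a": {"b": {"c": {}}}}

def walk(path):
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

print(sorted(walk("/a/b")))  # ['c'] - like 'ls': node names only
print(walk("/a/b/c"))        # {} - like 'cd /a/b/c': jump straight there
```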

Henry
> On 3 Jun 2016, at 23:53, Morgan Fainberg <morgan.fainb...@gmail.com> wrote:
> 
> 
> On Jun 3, 2016 12:42, "Lance Bragstad" <lbrags...@gmail.com 
> <mailto:lbrags...@gmail.com>> wrote:
> >
> >
> >
> > On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash <henryna...@mac.com 
> > <mailto:henryna...@mac.com>> wrote:
> >>
> >>
> >>> On 3 Jun 2016, at 16:38, Lance Bragstad <lbrags...@gmail.com 
> >>> <mailto:lbrags...@gmail.com>> wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash <henryna...@mac.com 
> >>> <mailto:henryna...@mac.com>> wrote:
> >>>>
> >>>>
> >>>>> On 3 Jun 2016, at 01:22, Adam Young <ayo...@redhat.com 
> >>>>> <mailto:ayo...@redhat.com>> wrote:
> >>>>>
> >>>>> On 06/02/2016 07:22 PM, Henry Nash wrote:
> >>>>>>
> >>>>>> Hi
> >>>>>>
> >>>>>> As you know, I have been working on specs that change the way we 
> >>>>>> handle the uniqueness of project names in Newton. The goal of this is 
> >>>>>> to better support project hierarchies, which as they stand today are 
> >>>>>> restrictive in that all project names within a domain must be unique, 
> >>>>>> irrespective of where in the hierarchy that project sits (unlike, 
> >>>>>> say, the unix directory structure where a node name only has to be 
> >>>>>> unique within its parent). Such a restriction is particularly 
> >>>>>> problematic when enterprises start modelling things like test, QA and 
> >>>>>> production as branches of a project hierarchy, e.g.:
> >>>>>>
> >>>>>> /mydivision/projectA/dev
> >>>>>> /mydivision/projectA/QA
> >>>>>> /mydivision/projectA/prod
> >>>>>> /mydivision/projectB/dev
> >>>>>> /mydivision/projectB/QA
> >>>>>> /mydivision/projectB/prod
> >>>>>>
> >>>>>> Obviously the idea of a project name (née tenant) being unique has 
> >>>>>> been around since near the beginning of (OpenStack) time, so we must 
> >>>>>> be cautious. There are two alternative specs proposed:
> >>>>>>
> >>>>>> 1) Relax project name constraints: 
> >>>>>> https://review.openstack.org/#/c/310048/
> >>>>>> 2) Hierarchical project naming: 
> >>>>>> https://review.openstack.org/#/c/318605/
> >>>>>>
> >>>>>> First, here’s what they have in common:
> >>>>>>
> >>>>>> a) They both solve the above problem
> >>>>>> b) They both allow an authorization scope to use a path rather than 
> >>>>>> just a simple name, hence allowing you to address a project anywhere 
> >>>>>> in the hierarchy
> >>>>>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if 
> >>>>>> you just have a flat layer of projects in a domain, then they have no 
> >>>>>> API or semantic impact (since both ensure that a project’s name must 
> >>>>>> still be unique within a parent)
> >>>>>>
> >>>>>> Here’s how they differ:
> >>>>>>
> >>>>>> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
> >>>>>> attribute of a project to be its node-name in the hierarchy, but 
> >>>>>> formally relaxes the uniqueness constraint to say that it only has to 
> >>>>>> be unique within its parent. In other words, let’s really model this a 
> >>>>>&

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash

> On 3 Jun 2016, at 16:38, Lance Bragstad <lbrags...@gmail.com> wrote:
> 
> 
> 
> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash <henryna...@mac.com> wrote:
> 
>> On 3 Jun 2016, at 01:22, Adam Young <ayo...@redhat.com> wrote:
>> 
>> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>> Hi
>>> 
> >>> As you know, I have been working on specs that change the way we handle 
> >>> the uniqueness of project names in Newton. The goal of this is to better 
> >>> support project hierarchies, which as they stand today are restrictive in 
> >>> that all project names within a domain must be unique, irrespective of 
> >>> where in the hierarchy that project sits (unlike, say, the unix directory 
> >>> structure where a node name only has to be unique within its parent). Such 
> >>> a restriction is particularly problematic when enterprises start modelling 
> >>> things like test, QA and production as branches of a project hierarchy, 
> >>> e.g.:
>>> 
> >>> /mydivision/projectA/dev
> >>> /mydivision/projectA/QA
> >>> /mydivision/projectA/prod
> >>> /mydivision/projectB/dev
> >>> /mydivision/projectB/QA
> >>> /mydivision/projectB/prod
>>> 
> >>> Obviously the idea of a project name (née tenant) being unique has been 
> >>> around since near the beginning of (OpenStack) time, so we must be 
> >>> cautious. There are two alternative specs proposed:
>>> 
> >>> 1) Relax project name constraints: https://review.openstack.org/#/c/310048/
> >>> 2) Hierarchical project naming: https://review.openstack.org/#/c/318605/
>>> 
>>> First, here’s what they have in common:
>>> 
>>> a) They both solve the above problem
>>> b) They both allow an authorization scope to use a path rather than just a 
>>> simple name, hence allowing you to address a project anywhere in the 
>>> hierarchy
>>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you 
>>> just have a flat layer of projects in a domain, then they have no API or 
>>> semantic impact (since both ensure that a project’s name must still be 
>>> unique within a parent)
>>> 
> >>> Here’s how they differ:
>>> 
>>> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
>>> attribute of a project to be its node-name in the hierarchy, but formally 
>>> relaxes the uniqueness constraint to say that it only has to be unique 
>>> within its parent. In other words, let’s really model this a bit like a 
>>> unix directory tree.
> 
> I think I lean towards relaxing the project name constraint. The reason is 
> that we already expose `domain_id`, `parent_id`, and `name` of a project. 
> By relaxing the constraint we can give the user everything they need to know 
> about a project without really changing any of these. When using 3.7, you 
> know what domain your project is in, you know the identifier of the parent 
> project, and you know that your project name is unique within the parent.  
>>> - Hierarchical project naming (2), formally changes the meaning of the 
>>> ‘name’ attribute to include the path to the node as well as the node name, 
>>> and hence ensures that the (new) value of the name attribute remains unique.
> 
> Do we intend to *store* the full path as the name, or just build it out on 
> demand? If we do store the full path, we will have to think about our current 
> data model, since the depth of the organization or domain would be limited by 
> the max possible name length. And will performance be a concern if we build 
> the full path on every request?   
I now mention this issue in the spec for hierarchical project naming (the relax 
naming approach does not suffer this issue). As you say, we’ll have to change 
the DB (today it is only 64 chars) if we do store the full path. This is 
slightly problematic since the maximum depth of hierarchy is controlled by a 
config option, and hence could be changed. We will absolutely have to be able 
to build the path on-the-fly in order to support legacy drivers (who won’t be 
able to store more than 64 chars). We may need to do some performance tests to 
ascertain if we can get away with building the path on-the-fly in all cases and 
avoid changing the table.  One additional point is that, of course, the API 
w
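For reference, "building the path on the fly" amounts to walking the parent_id chain up to the root. A rough sketch (the table layout and ids here are invented, not keystone's actual schema):

```python
# Illustrative rows keyed by project id; each row stores only its own short
# node name (which fits the 64-char column) plus a parent_id link.
rows = {
    "p1": {"name": "mydivision", "parent_id": None},
    "p2": {"name": "projectA", "parent_id": "p1"},
    "p3": {"name": "dev", "parent_id": "p2"},
}

def full_path(rows, project_id):
    """Derive the full hierarchical name by walking parent_id links."""
    segments = []
    while project_id is not None:
        row = rows[project_id]
        segments.append(row["name"])
        project_id = row["parent_id"]
    return "/".join(reversed(segments))

assert full_path(rows, "p3") == "mydivision/projectA/dev"
# Stored node names stay short; only the derived path can exceed 64 chars.
assert all(len(r["name"]) <= 64 for r in rows.values())
```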

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash

> On 3 Jun 2016, at 01:22, Adam Young <ayo...@redhat.com> wrote:
> 
> On 06/02/2016 07:22 PM, Henry Nash wrote:
>> Hi
>> 
>> As you know, I have been working on specs that change the way we handle the 
>> uniqueness of project names in Newton. The goal of this is to better support 
>> project hierarchies, which as they stand today are restrictive in that all 
>> project names within a domain must be unique, irrespective of where in the 
>> hierarchy that project sits (unlike, say, the unix directory structure 
>> where a node name only has to be unique within its parent). Such a 
>> restriction is particularly problematic when enterprises start modelling 
>> things like test, QA and production as branches of a project hierarchy, e.g.:
>> 
>> /mydivision/projectA/dev
>> /mydivision/projectA/QA
>> /mydivision/projectA/prod
>> /mydivision/projectB/dev
>> /mydivision/projectB/QA
>> /mydivision/projectB/prod
>> 
>> Obviously the idea of a project name (née tenant) being unique has been 
>> around since near the beginning of (OpenStack) time, so we must be cautious. 
>> There are two alternative specs proposed:
>> 
>> 1) Relax project name constraints: https://review.openstack.org/#/c/310048/
>> 2) Hierarchical project naming: https://review.openstack.org/#/c/318605/
>> 
>> First, here’s what they have in common:
>> 
>> a) They both solve the above problem
>> b) They both allow an authorization scope to use a path rather than just a 
>> simple name, hence allowing you to address a project anywhere in the 
>> hierarchy
>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you 
>> just have a flat layer of projects in a domain, then they have no API or 
>> semantic impact (since both ensure that a project’s name must still be 
>> unique within a parent)
>> 
>> Here’s how they differ:
>> 
>> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
>> attribute of a project to be its node-name in the hierarchy, but formally 
>> relaxes the uniqueness constraint to say that it only has to be unique 
>> within its parent. In other words, let’s really model this a bit like a unix 
>> directory tree.
>> - Hierarchical project naming (2), formally changes the meaning of the 
>> ‘name’ attribute to include the path to the node as well as the node name, 
>> and hence ensures that the (new) value of the name attribute remains unique.
>> 
>> Whichever approach we choose would only be included in a new 
>> microversion (3.7) of the Identity API; although some relevant APIs can 
>> remain unaffected for a client talking 3.6 to a Newton server, not all can 
>> be. As pointed out by jamielennox, this is a data modelling problem - if a 
>> Newton server has created multiple projects called “dev” in the hierarchy, a 
>> 3.6 client trying to scope a token simply to “dev” cannot be answered 
>> correctly (and it is proposed we would have to return an HTTP 409 Conflict 
>> error if multiple nodes with the same name were detected). This is true for 
>> both approaches.
>> 
>> Other comments on the approaches:
>> 
>> - Having a full path as the name seems duplicative with the current project 
>> entity - since we already return the parent_id (hence parent_id + name is, 
>> today, sufficient to place a project in the hierarchy).
> 
> The one thing I like is the ability to specify just the full path for the 
> OS_PROJECT_NAME env var, but we could make that a separate variable.  Just as 
> DOMAIN_ID + PROJECT_NAME is unique today, OS_PROJECT_PATH should be able to 
> fully specify a project unambiguously.  I'm not sure which would have a 
> larger impact on users.
> 
Agreed - and this could be done for both approaches (since this is all part of 
the “auth data flow").
> 
>> - In the past, we have been concerned about the issue of what we do if there 
>> is a project further up the tree that we do not have any roles on. In such 
>> cases, APIs like list project parents will not display anything other than 
>> the project ID for such projects. In the case of making the name the full 
>> path, we would be effectively exposing the name of all projects above us, 
>> irrespective of whether we had roles on them. Maybe this is OK, maybe it 
>> isn’t.
> 
> I think it is OK.  If this info needs to be hidden from a user, the 

[openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-02 Thread Henry Nash
Hi

As you know, I have been working on specs that change the way we handle the 
uniqueness of project names in Newton. The goal of this is to better support 
project hierarchies, which as they stand today are restrictive in that all 
project names within a domain must be unique, irrespective of where in the 
hierarchy that project sits (unlike, say, the unix directory structure where a 
node name only has to be unique within its parent). Such a restriction is 
particularly problematic when enterprises start modelling things like test, QA 
and production as branches of a project hierarchy, e.g.:

/mydivision/projectA/dev
/mydivision/projectA/QA
/mydivision/projectA/prod
/mydivision/projectB/dev
/mydivision/projectB/QA
/mydivision/projectB/prod

Obviously the idea of a project name (née tenant) being unique has been around 
since near the beginning of (OpenStack) time, so we must be cautious. There are 
two alternative specs proposed:

1) Relax project name constraints: https://review.openstack.org/#/c/310048/
2) Hierarchical project naming: https://review.openstack.org/#/c/318605/

First, here’s what they have in common:

a) They both solve the above problem
b) They both allow an authorization scope to use a path rather than just a 
simple name, hence allowing you to address a project anywhere in the hierarchy
c) Neither have any impact if you are NOT using a hierarchy - i.e. if you just 
have a flat layer of projects in a domain, then they have no API or semantic 
impact (since both ensure that a project’s name must still be unique within a 
parent)

Here’s how they differ:

- Relax project name constraints (1), keeps the meaning of the ‘name’ attribute 
of a project to be its node-name in the hierarchy, but formally relaxes the 
uniqueness constraint to say that it only has to be unique within its parent. 
In other words, let’s really model this a bit like a unix directory tree.
- Hierarchical project naming (2), formally changes the meaning of the ‘name’ 
attribute to include the path to the node as well as the node name, and hence 
ensures that the (new) value of the name attribute remains unique.

Whichever approach we choose would only be included in a new microversion 
(3.7) of the Identity API; although some relevant APIs can remain unaffected 
for a client talking 3.6 to a Newton server, not all can be. As pointed out by 
jamielennox, this is a data modelling problem - if a Newton server has created 
multiple projects called “dev” in the hierarchy, a 3.6 client trying to scope a 
token simply to “dev” cannot be answered correctly (and it is proposed we would 
have to return an HTTP 409 Conflict error if multiple nodes with the same name 
were detected). This is true for both approaches.
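The 3.6 ambiguity can be illustrated with a small sketch (invented names, not keystone code): resolving a bare name succeeds only while that name is unique in the hierarchy, and the proposed behaviour is to answer with 409 Conflict rather than guess:

```python
class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict response."""

def scope_by_bare_name(projects, name):
    """Resolve a 3.6-style scope request that carries only a node name."""
    matches = [p for p in projects if p["name"] == name]
    if len(matches) > 1:
        raise ConflictError("multiple projects named %r; use a path" % name)
    if not matches:
        raise LookupError(name)
    return matches[0]

projects = [
    {"name": "dev", "parent": "projectA"},
    {"name": "dev", "parent": "projectB"},
    {"name": "prod", "parent": "projectA"},
]

# Still unique, so a bare name is answerable.
assert scope_by_bare_name(projects, "prod")["parent"] == "projectA"
try:
    scope_by_bare_name(projects, "dev")
except ConflictError:
    pass  # ambiguous: the server cannot pick projectA/dev over projectB/dev
```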

Other comments on the approaches:

- Having a full path as the name seems duplicative with the current project 
entity - since we already return the parent_id (hence parent_id + name is, 
today, sufficient to place a project in the hierarchy).
- In the past, we have been concerned about the issue of what we do if there is 
a project further up the tree that we do not have any roles on. In such cases, 
APIs like list project parents will not display anything other than the project 
ID for such projects. In the case of making the name the full path, we would be 
effectively exposing the name of all projects above us, irrespective of whether 
we had roles on them. Maybe this is OK, maybe it isn’t.
- While making the name the path keeps it unique, this is fine if clients 
blindly use this attribute to plug back into another API call. However if, 
for example, you are Horizon and are displaying them in a UI, then you need to 
start breaking down the path into its components, where you don’t today.
- One area where names as the hierarchical path DOES look right is calling the 
/auth/projects API - where what the caller wants is a list of projects they can 
scope to - so you WANT this to be the path you can put in an auth request.
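For example, a scope request built from such a path might look like this (an illustrative body following the v3 password-auth shape; the path-valued project name is what the proposals would permit - today this field carries only a single node name):

```json
{
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "alice",
                    "domain": {"name": "Default"},
                    "password": "secret"
                }
            }
        },
        "scope": {
            "project": {
                "domain": {"name": "Default"},
                "name": "mydivision/projectA/dev"
            }
        }
    }
}
```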

Given that neither can fully protect a 3.6 client, my personal preference is to 
go with the cleaner logical approach which I believe is the Relax project name 
constraints (1), with the addition of changing GET /auth/projects to return the 
path (since this is a specialised API that happens before authentication) - but 
I am open to persuasion (as the song goes).

There are those that might say that perhaps we just can’t change this. I would 
argue that since this ONLY affects people who actually create hierarchies and 
that today such hierarchical use is in its infancy, then now IS the time to 
change this. If we leave it too long, then it will become really hard to change 
what will by then have become a tough restriction.

Henry


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-26 Thread Henry Nash

Congratulations - thanks for your continued commitment to OpenStack and 
keystone.

Henry

> On 25 May 2016, at 23:58, Rodrigo Duarte  wrote:
> 
> Thank you all, it's a privilege to be part of a team from where I've learned 
> so much. =)
> 
>> On Wed, May 25, 2016 at 1:05 PM, Brad Topol  wrote:
>> CONGRATULATIONS Rodrigo!!! Very well deserved!!!
>> 
>> --Brad
>> 
>> 
>> Brad Topol, Ph.D.
>> IBM Distinguished Engineer
>> OpenStack
>> (919) 543-0646
>> Internet: bto...@us.ibm.com
>> Assistant: Kendra Witherspoon (919) 254-0680
>> 
>> 
>> From: Lance Bragstad 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: 05/25/2016 09:09 AM
>> Subject: Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of 
>> Steve Martinelli)
>> 
>> 
>> 
>> 
>> Congratulations Rodrigo!
>> 
>> Thank you for all the continued and consistent reviews.
>> 
>> On Tue, May 24, 2016 at 1:28 PM, Morgan Fainberg  
>> wrote:
>> I want to welcome Rodrigo Duarte (rodrigods) to the keystone core team. 
>> Rodrigo has been a consistent contributor to keystone and has been 
>> instrumental in the federation implementations. Over the last cycle he has 
>> shown an understanding of the code base and contributed quality reviews.
>> 
>> I am super happy (as proxy for Steve) to welcome Rodrigo to the Keystone 
>> Core team.
>> 
>> Cheers,
>> --Morgan
>> 
> 
> 
> 
> -- 
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com


Re: [openstack-dev] [keystone][api][v3]

2016-05-11 Thread Henry Nash
Oops "uncooked"! I meant unscoped! (thank you Apple's auto-correct spell 
checker!)

Sent from my iPad

> On 11 May 2016, at 10:25, Henry Nash <henryna...@mac.com> wrote:
> 
> Hi
> 
> This depends on the policy rule for the get_user call - which is defined by 
> your policy.json file for your keystone. In the default one supplied you need 
> admin to do this, unlike change password, where the owner (i.e. a user with 
> an uncooked token) can execute it. You could change the rule for get_user to 
> be the same if you want to allow users to read their own user record.
> 
> Henry
> Sent from my iPad
> 
>> On 11 May 2016, at 09:49, Ehsan Qarekhani <qarekh...@gmail.com> wrote:
>> 
>> Hi, 
>> is there any way to retrieve the default_project_id of the logged-in user 
>> from an unscoped token?
>> As far as I know, I am able to retrieve all the projects a user can access, 
>> but I can't tell which one is the user's default project.
>> 
>> Many thanks in advance.


Re: [openstack-dev] [keystone][api][v3]

2016-05-11 Thread Henry Nash
Hi

This depends on the policy rule for the get_user call - which is defined by 
your policy.json file for your keystone. In the default one supplied you need 
admin to do this, unlike change password, where the owner (i.e. a user with an 
uncooked token) can execute it. You could change the rule for get_user to be 
the same if you want to allow users to read their own user record.
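As a rough illustration - rule names follow the Mitaka-era sample policy.json, so check the file shipped with your deployment - relaxing get_user to the same admin-or-owner rule used for change password would look something like:

```json
{
    "admin_required": "role:admin or is_admin:1",
    "owner": "user_id:%(user_id)s",
    "admin_or_owner": "rule:admin_required or rule:owner",

    "identity:get_user": "rule:admin_or_owner"
}
```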

Henry
Sent from my iPad

> On 11 May 2016, at 09:49, Ehsan Qarekhani  wrote:
> 
> Hi, 
> is there any way to retrieve the default_project_id of the logged-in user 
> from an unscoped token?
> As far as I know, I am able to retrieve all the projects a user can access, 
> but I can't tell which one is the user's default project.
> 
> Many thanks in advance.


Re: [openstack-dev] [keystone] Newton midcycle planning

2016-04-14 Thread Henry Nash
Hi Morgan,

Great to be planning this ahead of time!!!

For me either of the July dates are fine - I would have a problem with the June 
date.

Henry
> On 14 Apr 2016, at 14:57, Dolph Mathews  wrote:
> 
> On Wed, Apr 13, 2016 at 9:07 PM, Morgan Fainberg wrote:
> It is that time again, the time to plan the Keystone midcycle! Looking at the 
> It is that time again, the time to plan the Keystone midcycle! Looking at the 
> schedule [1] for Newton, the weeks that make the most sense look to be (not 
> in preferential order):
> 
> R-14 June 27-01
> R-12 July 11-15
> R-11 July 18-22
> 
> They all work equally well for me at this point, but I'd be interested to try 
> one of the earlier options.
>  
> 
> As usual this will be a 3 day event (probably Wed, Thurs, Fri), and based on 
> previous attendance we can expect ~30 people to attend. Based upon all the 
> information (other midcycles, other events, the US July4th holiday), I am 
> thinking that week R-12 (the week of the newton-2 milestone) would be the 
> best offering. Weeks before or after these three tend to push too close to 
> the summit or too far into the development cycle.
> 
> I am trying to arrange for a venue in the Bay Area (most likely will be South 
> Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) since we have 
> done east coast and central over the last few midcycles.
> 
> Please let me know your thoughts / preferences. In summary:
> 
> * Venue will be Bay Area (more info to come soon)
> 
> * Options of weeks (in general subjective order of preference): R-12, R-11, 
> R-14
> 
> Cheers,
> --Morgan
> 
> [1] http://releases.openstack.org/newton/schedule.html 
> 


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Henry Nash

> On 22 Feb 2016, at 17:45, Thierry Carrez  wrote:
> 
> Amrith Kumar wrote:
>> [...]
>> As a result of this proposal, there will still be four events each year, two 
>> "OpenStack Summit" events and two "MidCycle" events.
> 
> Actually, the OpenStack summit becomes the midcycle event. The new separated 
> contributors-oriented event[tm] happens at the beginning of the new cycle.

So in general a well thought out proposal - and it certainly helps address some 
of the early concerns over a “simplistic” split. I was also worrying, however, 
about the reduction in developer face time - it wasn’t immediately clear that 
the main summit could be treated as a developer midcycle. Is the idea that we 
just let this be informally organized by the projects, or that there would at 
least be room set aside for each project (but without all the formal 
cross-project structure/agenda that there is in a main developer summit)?

>> [...]
>> Given the number of projects, and leaving aside high bandwidth internet and 
>> remote participation, providing dedicated meeting room for the duration of 
>> the MidCycle event for each project is a considerable undertaking. I believe 
>> therefore that the consequence is that the MidCycle event will end up being 
>> of comparable scale to the current Design Summit or larger, and will likely 
>> need a similar venue.
> 
> It still is an order of magnitude smaller than the "OpenStack Summit". Think 
> 600 people instead of 6000. The idea behind co-hosting is to facilitate 
> cross-project interactions. You know where to find people, and you can easily 
> arrange a meeting between two teams for an hour.
> 
>> [...]
>> At the current OpenStack Summit, there is an opportunity for contributors, 
>> customers and operators to interact, not just in technical meetings, but 
>> also in a social setting. I think this is valuable, even though there seems 
>> to be a number of people who believe that this is not necessarily the case.
> 
> I don't think the proposal removes that opportunity. Contributors /can/ still 
> go to OpenStack Summits. They just don't /have to/. I just don't think every 
> contributor needs to be present at every OpenStack Summit, while I'd like to 
> see most of them present at every separated contributors-oriented event[tm].
> 
> -- 
> Thierry Carrez (ttx)
> 


Re: [openstack-dev] [keystone][cinder] Projects acting as a domain at the top of the project hierarchy

2016-02-17 Thread Henry Nash
Michal & Raildo,

So the keystone patch (https://review.openstack.org/#/c/270057/) is now 
merged.  Do you perhaps have 
a cinder patch that I could review so we can make sure that this is likely to 
work with the new projects acting as domains? Currently it is the cinder 
tempest tests that are failing.
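For anyone reviewing on the cinder side, the structural change can be sketched as follows (illustrative ids and a toy root-finding helper, not the actual quota code): each domain gains a project record with is_domain set, and existing top-level projects are re-parented onto it, so code that walks to the root of a hierarchy now reaches the project acting as a domain:

```python
# Field names follow the keystone v3 project entity; the ids are made up.
domain_project = {"id": "d1", "name": "Default", "is_domain": True,
                  "parent_id": None}
top_level = {"id": "t1", "name": "projectA", "is_domain": False,
             # Before the change this was None; after wiring-up it points
             # at the project acting as the domain.
             "parent_id": "d1"}

def root_of(projects, project):
    """Walk parent_id links until the root of the hierarchy."""
    while project["parent_id"] is not None:
        project = projects[project["parent_id"]]
    return project

projects = {p["id"]: p for p in (domain_project, top_level)}

# Quota logic that assumed top_level was the root now finds a new root,
# and must allow the domain's project (not just top_level) as the admin scope.
assert root_of(projects, top_level)["is_domain"]
```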

Thanks

Henry


> On 2 Feb 2016, at 13:30, Raildo Mascena <rail...@gmail.com> wrote:
> 
> See responses inline.
> 
> On Mon, Feb 1, 2016 at 6:25 PM Michał Dulko <michal.du...@intel.com> wrote:
> On 01/30/2016 07:02 PM, Henry Nash wrote:
> > Hi
> >
> > One of the things the keystone team was planning to merge ahead of 
> > milestone-3 of Mitaka, was “projects acting as a domain”. Up until now, 
> > domains in keystone have been stored totally separately from projects, even 
> > though all projects must be owned by a domain (even tenants created via the 
> > keystone v2 APIs will be owned by a domain, in this case the ‘default’ 
> > domain). All projects in a project hierarchy are always owned by the same 
> > domain. Keystone supports a number of duplicate concepts (e.g. domain 
> > assignments, domain tokens) similar to their project equivalents.
> >
> > 
> >
> > I’ve got a couple of questions about the impact of the above:
> >
> > 1) I already know that if we do exactly as described above, cinder gets 
> > confused with how it does quotas today - since suddenly there is a new 
> > parent to what it thought was a top-level project (and the permission rules 
> > it encodes require the caller to be cloud admin, or admin of the root 
> > project of a hierarchy).
> 
> These problems are there because our nested quotas code is really buggy
> right now. Once Keystone merges a fix allowing non-admin users to fetch
> their own project hierarchy - we should be able to fix it.
> 
> ++ The patches to fix this problem are close to being merged; there are just 
> minor comments to address: https://review.openstack.org/#/c/270057/ 
> So I believe that we can fix this bug in cinder in the next few days.
> 
> > 2) I’m not sure of the state of nova quotas - and whether it would suffer a 
> > similar problem?
> 
> As far as I know, Nova hasn't merged the nested quotas code and will not
> do that in Mitaka due to feature freeze. 
> The nested quotas code in Nova is very similar to the Cinder code, and we are 
> already fixing the bugs that we found in Cinder. Agreed that it will not be 
> merged in Mitaka due to feature freeze. 
> 
> > 3) Will Horizon get confused by this at all?
> >
> > Depending on the answers to the above, we can go in a couple of directions. 
> > The cinder issues looks easy to fix (having had a quick look at the code) - 
> > and if that was the only issue, then that may be fine. If we think there 
> > may be problems in multiple services, we could, for Mitaka, still create 
> > the projects acting as domains, but not set the parent_id of the current 
> > top level projects to point at the new project acting as a domain - that 
> > way those projects acting as domains remain isolated from the hierarchy for 
> > now (and essentially invisible to any calling service). Then as part of 
> > Newton we can provide patches to those services that need changing, and 
> > then wire up the projects acting as a domain to their children.
> >
> > Interested in feedback to the questions above.
> 


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-14 Thread Henry Nash

On 14 Feb 2016, at 09:53, Henry Nash <henryna...@me.com> wrote:


> On 13 Feb 2016, at 03:06, Adam Young <ayo...@redhat.com> wrote:
> 
> On 02/12/2016 06:17 AM, Eoghan Glynn wrote:
>> 
>>> Hello all,
>>> 
>>> tl;dr
>>> =
>>> 
>>> I have long thought that the OpenStack Summits have become too
>>> commercial and provide little value to the software engineers
>>> contributing to OpenStack.
>>> 
>>> I propose the following:
>>> 
>>> 1) Separate the design summits from the conferences
>>> 2) Hold only a single OpenStack conference per year
>>> 3) Return the design summit to being a low-key, low-cost working event
> I think you would hurt developer attendance.  I think the unified design 
> summit sneaks under the radar of many companies that will send people to the 
> conference but might not send them to a design-only summit.
> 
> I know a lot of people at smaller companies especially have to do double 
> duty. I'm at a larger company and I have to do double duty, booth and design. 
>  Sometimes my talks get accepted, too.
> 
> I think the combined summit works.  I would not want to have to travel any 
> more than I do now.
> 
> I think the idea of more developer-specific socializing would be great.  
> Downtime is also a good thing, and having the socializing in venues that 
> don't involve shouting and going hoarse would be a plus in my book.
> 
> 
> TBH, after a day of summit, I am often ready to just disappear for a while, 
> or go out with a small group of friends.  I tend to avoid the large parties.
> 
> That said, the Saxophone is coming to Austin, and I plan on trying to get an 
> informal jam session together with anyone that has an instrument...and we'll 
> see if we can find a piano.

I tend to agree with Adam here - formally splitting the two summits would hurt 
the overall community. The reality of travel budgets in companies large and 
small is that (especially for people working their way up the ranks) a lot of 
justification is always going to be required. In reality, for many OpenStack 
projects, there are already four “summits” a year - made up of two 
conferences and two mid-cycles. I actually find the mid-cycles more productive 
from a pure developer point of view, but the integration of developers and 
users at conferences provides a valuable, yet different, experience. We can 
draw more user/operator input into the sessions at the conferences (since 
they are there), and hence I would agree with other proposals to schedule the 
user/operator portions in a way that enhances their feedback into the 
developer community sessions.


>>> details
>>> ===
>>> 
>>> The design summits originally started out as working events. Developers
>>> got together in smallish rooms, arranged chairs in a fishbowl, and got
>>> to work planning and designing.
>>> 
>>> With the OpenStack Summit growing more and more marketing- and
>>> sales-focused, the contributors attending the design summit are often
>>> unfocused. The precious little time that developers have to actually
>>> work on the next release planning is often interrupted or cut short by
>>> the large numbers of "suits" and salespeople at the conference event,
>>> many of which are peddling a product or pushing a corporate agenda.
>>> 
>>> Many contributors submit talks to speak at the conference part of an
>>> OpenStack Summit because their company says it's the only way they will
>>> pay for them to attend the design summit. This is, IMHO, a terrible
>>> thing. The design summit is a *working* event. Companies that contribute
>>> to OpenStack projects should send their engineers to working events
>>> because that is where work is done, not so that their engineer can go
>>> give a talk about some vendor's agenda-item or newfangled product.
>>> 
>>> Part of the reason that companies only send engineers who are giving a
>>> talk at the conference side is that the cost of attending the OpenStack
>>> Summit has become ludicrously expensive. Why have the events become so
>>> expensive? I can think of a few reasons:
>>> 
>>> a) They are held every six months. I know of no other community or open
>>> source project that holds *conference-type* events every six months.
>>> 
>>> b) They are held in extremely expensive hotels and conference centers
>>> because the number of attendees is so big.
>>> 
>>> c) Because the conferences have become sales- and marketing-focused

[openstack-dev] [keystone] Domain Specific Roles vs Local Groups

2016-02-01 Thread Henry Nash
Hi

During the recent keystone midcycle, it was suggested that an alternative to 
domain specific roles (see the spec: 
https://github.com/openstack/keystone-specs/blob/master/specs/mitaka/domain-specific-roles.rst 
and the code patches starting at: https://review.openstack.org/#/c/261846/) 
might be to somehow re-use the group concept. This was actually something we 
had discussed in previous proposals for this functionality. As I mentioned 
during the last day, while this is a seductive approach, it doesn’t actually 
scale well (or, in fact, provide the right abstraction). The best way to 
illustrate this is with an example:

Let’s say a customer is being hosted by a cloud provider. The customer has 
their own domain containing their own users and groups, to keep them segregated 
from other customers. The cloud provider, wanting to attract as many different 
types of customer as possible, has created a set of fine-grained global roles 
tied to APIs via the policy files. The domain admin of the customer wants to 
create a collection of 10 such fine-grained roles that represent some function 
that is meaningful to their setup (perhaps a job that allows the holder to 
monitor resources and fix a subset of problems).

With domain specific roles (DSR), the domain admin creates a DSR (which is 
just a role with a domain_id attribute), and then adds the 10 global policy 
roles required using the implied roles API. They can then assign this DSR to 
all the projects they need to, probably as a group assignment (where the groups 
could be local, federated or LDAP). One assignment per project is required, so 
if there were, over time, 100 projects, then that’s 100 assignments. Further, 
if they want to add another global role (maybe to allow access to a new API) to 
that DSR, then it’s a single API call to do it.
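As a rough sketch of the DSR flow described above (illustrative only: the function below just builds the sequence of Keystone v3 API calls rather than issuing them, all ids are placeholders, and the endpoint shapes follow the implied-roles and assignment APIs):

```python
def dsr_call_plan(dsr_id, global_role_ids, project_ids, group_id):
    """Build the Keystone v3 calls the DSR approach needs (illustrative).

    Returns (method, path) tuples instead of issuing HTTP, so the number
    of calls is easy to inspect.
    """
    # Create the DSR itself: a role carrying a domain_id attribute.
    calls = [("POST", "/v3/roles")]
    # Bundle each fine-grained global role into the DSR via implied roles.
    calls += [("PUT", "/v3/roles/%s/implies/%s" % (dsr_id, rid))
              for rid in global_role_ids]
    # One group assignment of the DSR per project.
    calls += [("PUT", "/v3/projects/%s/groups/%s/roles/%s"
               % (pid, group_id, dsr_id))
              for pid in project_ids]
    return calls

plan = dsr_call_plan("dsr1", ["r%d" % i for i in range(10)],
                     ["p%d" % i for i in range(100)], "g1")
print(len(plan))  # 1 create + 10 implies + 100 assignments = 111 calls
```

Adding an eleventh global role later is then one extra implied-role call, with no change to any of the per-project assignments.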

The proposal to use groups instead would work something like this: We would 
support a concept of “local groups” in keystone, that would be independent of 
whatever groups the identity backend was mapped to. In order to represent the 
DSR, a local group would be created (perhaps with the name of the functional 
job members of the group could carry out). User who could carry out this 
function would be added to this group (presumably we might also have to support 
“remote” groups being members of such local groups, a concept we don’t really 
support today, but not too much of a stretch). This group would then need to be 
assigned to each project in turn, but for each of the 10 global roles that this 
“DSR equivalent” provided in turn (so an immediate increase by a factor of N 
API calls, where N is the number of roles per DSR) - so 1000 assignments in our 
example. If the domain admin wanted to add a new role to (or remove a role 
from) the “DSR”, they would have to do another assignment to each project that 
this “DSR” was being used (100 new assignments in our example).  Again, I would 
suggest, much less convenient.
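The assignment arithmetic in the two examples above can be checked directly (N roles per bundle, P projects, using the numbers from the text):

```python
def assignments_needed(n_roles, n_projects):
    """Role-assignment API calls needed by each design (illustrative)."""
    dsr_approach = n_projects              # one DSR assignment per project
    group_approach = n_roles * n_projects  # one per (role, project) pair
    return dsr_approach, group_approach

dsr, groups = assignments_needed(n_roles=10, n_projects=100)
print(dsr, groups)  # 100 1000
# Later adding one role to the bundle: a single implied-role call for the
# DSR, versus another n_projects assignments for the local-group design.
```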

Given the above, I believe the current DSR proposal does provide the right 
abstraction and scalability, and we should continue to review and merge it as 
planned. Obviously this is still dependent on Implied Roles (either in its 
current form, or a modified version). Alternative code that implements a 
one-level-only inference as part of DSRs does exist (from an earlier attempt), 
but I don’t think we want to use that if we are going to have any kind of 
implied roles.

Henry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Henry Nash
Hi Adam,

Fully support this kind of approach.

I am still concerned over the scope check, since we do have examples where 
there is more than one (target) scope check, e.g. an API that might operate on 
an object that may be global, domain specific or project specific - in which 
case you need to match up the scope checks with the object in question. For 
example, for a given API:

If cloud admin, allow the API
If domain admin and the object is domain or project specific, then allow the API
If project admin and the object is project specific then allow the API

Today we can (and do with keystone) encode this in policy rules. I’m not clear 
how the “scope check in code” will work in this kind of situation.
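A plain-Python paraphrase of the layered check sketched above (purely illustrative, not keystone code; the token and target field names are assumptions):

```python
def allow(token, target):
    """Layered scope check for an API whose target object may be
    global, domain specific, or project specific (illustrative)."""
    if token.get("is_cloud_admin"):
        return True                       # cloud admin: always allowed
    if token.get("scope") == "domain":
        # domain admin: allowed for domain- or project-specific objects
        # belonging to this domain (a global object has no domain_id)
        return target.get("domain_id") == token["domain_id"]
    if token.get("scope") == "project":
        # project admin: allowed only for objects specific to this project
        return target.get("project_id") == token["project_id"]
    return False

print(allow({"is_cloud_admin": True}, {}))                       # True
print(allow({"scope": "domain", "domain_id": "d1"},
            {"domain_id": "d1", "project_id": "p1"}))            # True
print(allow({"scope": "project", "project_id": "p1"},
            {"domain_id": "d1"}))                                # False
```

The point of concern in the text is exactly the middle branch: the rule can only be evaluated once the target object (and hence its domain/project specificity) is known, which is why a pure "scope check in code" needs the fetched object in hand.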

Henry

> On 30 Jan 2016, at 17:44, Adam Young  wrote:
> 
> I'd like to bring people's attention to a Cross Project spec that has the 
> potential to really strengthen the security story for OpenStack in a scalable 
> way.
> 
> "A common policy scenario across all projects" 
> https://review.openstack.org/#/c/245629/
> 
> The summary version is:
> 
> Role name or patternExplanation or example
> -:--
> admin:  Overall cloud admin
> service  :  for service users only, not real 
> humans
> {service_type}_admin :  identity_admin, compute_admin, 
> network_admin etc.
> {service_type}_{api_resource}_manager: identity_user_manager,
>compute_server_manager, 
> network_subnet_manager
> observer :  read only access
> {service_type}_observer  : identity_observer, image_observer
> 
> 
> Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
> Matthews just took it to the next level.  It is worth a read.
> 
> I think this is the way to go.  There might be details on how to get there, 
> but the granularity is about right.
> If we go with that approach, we might want to rethink about how we enforce 
> policy.  Specifically, I think we should split the policy enforcement up into 
> two stages:
> 
> 1.  Role check.  This only needs to know the service and the api resource.  
> As such, it could happen in middleware.
> 
> 2. Scope check:  for user or project ownership.  This happens in the code 
> where it is currently called.  Often, an object needs to be fetched from the 
> database
> 
> The scope check is an engineering decision:  Nova developers need to be able 
> to say where to find the scope on the virtual machine, Cinder developers on 
> the volume objects.
> 
> Ideally, The python-*clients, Horizon and other tools would be able to 
> determine what capabilities a given token would provide based on the roles 
> included in the validation response. If the role check is based on the URL as 
> opposed to the current keys in the policy file, the client can determine 
> based on the request and the policy file whether the user would have any 
> chance of succeeding in a call. As an example, to create a user in Keystone, 
> the API is:
> 
> POST https://hostname:port/v3/users
> 
> Assuming the client has access to the appropriate policy file, if can 
> determine that a token with only the role "identity_observer" would not have 
> the ability to execute that command.  Horizon could then modify the users 
> view to remove the "add user" form.
> 
> For user management, we want to make role assignments as simple as possible 
> and no simpler.  An admin should not have to assign all of the individual 
> roles that a user needs.  Instead, assigning the role "Member" should imply 
> all of the subordinate roles that a user needs to perform the standard 
> workflows.  Expanding out the implied roles can be done either when issuing a 
> token, or when evaluating the policy file, or both.
> 
> I'd like to get the conversation on this started here on the mailing list, 
> and lead in to a really productive set of talks at the Austin summit.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Henry Nash

> On 30 Jan 2016, at 21:55, Adam Young <ayo...@redhat.com 
> <mailto:ayo...@redhat.com>> wrote:
> 
> On 01/30/2016 04:14 PM, Henry Nash wrote:
>> Hi Adam,
>> 
>> Fully support this kind of approach.
>> 
>> I am still concerned over the scope check, since we do have examples where 
>> there is more than one (target) scope check, e.g. an API that might operate 
>> on an object that may be global, domain specific or project specific - in 
>> which case you need to match up the scope checks with the object in 
>> question. For example, for a given API:
>> 
>> If cloud admin, allow the API
>> If domain admin and the object is domain or project specific, then allow the 
>> API
>> If project admin and the object is project specific then allow the API
>> 
>> Today we can (and do with keystone) encode this in policy rules. I’m not 
>> clear how the “scope check in code” will work in this kind of situation.
> I originally favored an approach that a user would need to get a token scoped 
> to a resource in order to affect change on that resource, and admin users 
> could get tokens scoped to anything,  but I know that makes things harder for 
> Administrators trying to fix broken deployments. So I backed off on that 
> approach.
> 
> I think the right answer would be that the role check would set some value to 
> indicate it was an admin override.  So long as the check does not need the 
> actual object from the database, t can perform whatever logic we like.
> 
> The policy check deep in the code can be as strict or permissive as it 
> desires.  If there is a need to re-check the role for an admin check there, 
> policy can still do so.  A role check that passes at the Middleware level can 
> still be blocked at the in-code level.
> 
> "If domain admin and the object is domain or project specific, then allow the 
> API" is the tricky one, but I don't think we even have a solution for that 
> now.  Domain1->p1->p2->p3 type hierarchies don't allow operations on p3 with 
> a token scoped to Domain1.

So we do actually support things like that, e.g. (from the domain specific role 
additions):

"identity:some_api": "role:admin and project_domain_id:%(target.role.domain_id)s"
   (which means I’m project admin and the domain specific role I am going to 
manipulate is specific to my domain)

….and although we don’t have this in our standard policy, you could also write

"identity:some_api": "role:admin and domain_id:%(target.project.domain_id)s"
(which means I’m domain admin and I can do some operation on any project in my 
domain)
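Read as plain Python, those two rules check the following (an illustrative paraphrase of what the policy engine evaluates against the credentials and target, not real keystone code):

```python
def project_admin_on_domain_role(creds, target):
    # "role:admin and project_domain_id:%(target.role.domain_id)s"
    return ("admin" in creds["roles"] and
            creds.get("project_domain_id") == target["role"]["domain_id"])

def domain_admin_on_project(creds, target):
    # "role:admin and domain_id:%(target.project.domain_id)s"
    return ("admin" in creds["roles"] and
            creds.get("domain_id") == target["project"]["domain_id"])

creds = {"roles": ["admin"], "project_domain_id": "d1"}
print(project_admin_on_domain_role(creds, {"role": {"domain_id": "d1"}}))  # True
print(domain_admin_on_project({"roles": ["admin"], "domain_id": "d1"},
                              {"project": {"domain_id": "d2"}}))           # False
```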

> 
> I think that in those cases, I would still favor the user getting a token 
> from Keystone scoped to p3, and use the inherited-role-assignment approach.
> 
> 
>> 
>> Henry
>> 
>>> On 30 Jan 2016, at 17:44, Adam Young <ayo...@redhat.com 
>>> <mailto:ayo...@redhat.com>> wrote:
>>> 
>>> I'd like to bring people's attention to a Cross Project spec that has the 
>>> potential to really strengthen the security story for OpenStack in a 
>>> scalable way.
>>> 
>>> "A common policy scenario across all projects" 
>>> https://review.openstack.org/#/c/245629/ 
>>> <https://review.openstack.org/#/c/245629/>
>>> 
>>> The summary version is:
>>> 
>>> Role name or patternExplanation or example
>>> -:--
>>> admin:  Overall cloud admin
>>> service  :  for service users only, not real 
>>> humans
>>> {service_type}_admin :  identity_admin, compute_admin, 
>>> network_admin etc.
>>> {service_type}_{api_resource}_manager: identity_user_manager,
>>>compute_server_manager, 
>>> network_subnet_manager
>>> observer :  read only access
>>> {service_type}_observer  : identity_observer, image_observer
>>> 
>>> 
>>> Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
>>> Matthews just took it to the next level.  It is worth a read.
>>> 
>>> I think this is the way to go.  There might be details on how to get there, 
>>> but the granularity is about right.
>>> If we go with that approach, we might want to rethink about how we enforce 
>>> policy.  Specifically, I think we should split the policy enforcement up 
>>> into two stages:
>>> 
>>> 1.  Ro

[openstack-dev] [keystone][nova][cinder][horizon] Projects acting as a domain at the top of the project hierarchy

2016-01-30 Thread Henry Nash
Hi

One of the things the keystone team was planning to merge ahead of milestone-3 
of Mitaka, was “projects acting as a domain”. Up until now, domains in keystone 
have been stored totally separately from projects, even though all projects 
must be owned by a domain (even tenants created via the keystone v2 APIs will 
be owned by a domain, in this case the ‘default’ domain). All projects in a 
project hierarchy are always owned by the same domain. Keystone supports a 
number of duplicate concepts (e.g. domain assignments, domain tokens) similar 
to their project equivalents.

The idea of  “projects acting as a domain” is:

- A domain is actually represented as a super-top-level project (with an 
attribute, “is_domain" set to True), and all previous top level projects in the 
domain specify this special project as their parent in their parent_id 
attribute. A project with is_domain=True is said to be a “project acting as a 
domain”. Such projects cannot have parents - i.e. they are at the top of the 
tree.
- The project_id of a project acting as a domain is the equivalent of the 
domain_id.
- The existing domain APIs are still supported, but behind the scenes actually 
reference the “project acting as a domain”, although in the long run they may 
be deprecated. On migration to Mitaka, the entries of the domain table are 
moved to become projects acting as domains in the project table.
- The project API can now be used to create/update/delete a project acting as 
a domain (by setting is_domain=True) just like a regular project - and do the 
equivalent of the domain CRUD APIs
- Although domain scoped tokens are still supported, you can get a project 
scoped token to the project acting as a domain (and the is_domain attribute 
will be part of the token), so you can write policy rules that can solely 
respond to project tokens. We can eventually deprecate domain tokens, if we 
chose.
- Domain assignments (which will still be supported) really just become project 
assignments placed on the project acting as a domain.
- In terms of the impact on the results of list projects:
— There is no change to listing projects within a domain (since you don’t see 
“the domain” in such a listing today)
— A filter is being added to the list projects API to allow filtering by the 
is_domain attribute - with a default of is_domain=False (i.e. so unless you ask 
for them when listing all projects, you won’t see the projects acting as a 
domain). Hence again, by default, no change to the collection returned today.
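The default filtering behaviour described in that last bullet can be modelled in a few lines (an illustrative sketch of the proposed semantics, not the actual keystone implementation):

```python
def list_projects(projects, is_domain=False):
    """Mimic the proposed list-projects filter: projects acting as a
    domain (is_domain=True) are hidden unless explicitly requested."""
    return [p for p in projects if p.get("is_domain", False) == is_domain]

# domain d1 at the top, with the old top-level project p1 now its child
tree = [
    {"id": "d1", "name": "customer", "is_domain": True, "parent_id": None},
    {"id": "p1", "name": "top", "parent_id": "d1"},
    {"id": "p2", "name": "child", "parent_id": "p1"},
]
print([p["id"] for p in list_projects(tree)])                  # ['p1', 'p2']
print([p["id"] for p in list_projects(tree, is_domain=True)])  # ['d1']
```

So an existing caller that lists projects sees exactly the collection it saw before; only a caller that explicitly asks for is_domain=True sees the new top-level entries.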

The above proposed changes have been integrated into the latest version of the 
Identity API spec: 
https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html

I’ve got a couple of questions about the impact of the above:

1) I already know that if we do exactly as described above, cinder gets 
confused with how it does quotas today - since suddenly there is a new parent 
to what it thought was a top level project (and the permission rules it encodes 
require the caller to be cloud admin, or admin of the root project of a 
hierarchy).
2) I’m not sure of the state of nova quotas - and whether it would suffer a 
similar problem?
3) Will Horizon get confused by this at all?

Depending on the answers to the above, we can go in a couple of directions. The 
cinder issues looks easy to fix (having had a quick look at the code) - and if 
that was the only issue, then that may be fine. If we think there may be 
problems in multiple services, we could, for Mitaka, still create the projects 
acting as domains, but not set the parent_id of the current top level projects 
to point at the new project acting as a domain - that way those projects acting 
as domains remain isolated from the hierarchy for now (and essentially 
invisible to any calling service). Then as part of Newton we can provide 
patches to those services that need changing, and then wire up the projects 
acting as a domain to their children.

Interested in feedback to the questions above.

Henry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Tempest] OS-INHERIT APIs were skipped by Jenkins because "os_inherit" in keystone.conf was disable.

2015-12-09 Thread Henry Nash
Hi Maho,

So in the keystone unit tests, we flip the os_inherit flag back and forth 
during tests to make sure it is honored correctly.  For the tempest case, I 
don’t think you need to do that level of testing. Setting the os_inherit flag 
to true will have no effect if you have not created any role assignments that 
are inherited - you’ll just get the regular assignments back as normal. So 
provided there is no test data leakage between tests (i.e. old data lying 
around from a previous test), I think it should be safe to run tempest with 
os_inherit switched on.
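For reference, the switch being discussed lives in keystone.conf; an assumed fragment (the exact section and option name should be confirmed against keystone's config.py for the release in question):

```ini
# keystone.conf fragment (assumed option location; confirm against
# keystone/common/config.py)
[os_inherit]
enabled = true
```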

Henry
> On 9 Dec 2015, at 08:45, koshiya maho  wrote:
> 
> Hi all,
> 
> I pushed the patch set of OS-INHERIT API tempest (keystone v3).
> https://review.openstack.org/#/c/250795/
> 
> But, all API tests in patch set was skipped, because "os_inherit" in 
> keystone.conf of 
> Jenkins jobs was disable. So, it couldn't be confirmed.
> 
> Reference information : 
> http://logs.openstack.org/95/250795/5/check/gate-tempest-dsvm-full/fbde6d2/logs/etc/keystone/keystone.conf.txt.gz
> #L1422
> https://github.com/openstack/keystone/blob/master/keystone/common/config.py#L224
> 
> Default "os_inherit" setting is disable. OS-INHERIT APIs need "os_inherit" 
> setting enable.
> 
> For keystone v3 tempests using OS-INHERIT, we should enable "os_inherit" of 
> the existing keystone.conf called by Jenkins.
> Even if "os_inherit" is enable, I think there have no effects on other 
> tempests.
> 
> Do you have any other ideas?
> 
> Thank you and best regards,
> 
> --
> Maho Koshiya
> NTT Software Corporation
> E-Mail : koshiya.m...@po.ntts.co.jp
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Move from active distrusting model to trusting model

2015-11-24 Thread Henry Nash
Good, wide ranging discussion.

From my point of view, this isn’t about trusting cores; rather (as was pointed 
out by others) it is about ensuring that people with different customer 
perspectives are part of the approval. Of course, you could argue they could 
have -1’d it anyway, but I think ensuring cross-company approval helps us 
overall, and so I’m comfortable with, and support, the existing approach.

Henry
> On 24 Nov 2015, at 09:06, Henry Nash <henry.n...@icloud.com> wrote:
> 
> Good, wide ranging discussion.
> 
> From my point of view, this isn’t about trusting cores, rather (as was 
> pointed out by others) ensuring people with different customer perspectives 
> be part of the approval. Of course, you could argue they could have -1’d it 
> anyway, but I think ensuring cross-company approval helps us overall, and so 
> I’m comfortable and support with the existing approach.
> 
> Henry
>> On 24 Nov 2015, at 05:55, Clint Byrum <cl...@fewbar.com 
>> <mailto:cl...@fewbar.com>> wrote:
>> 
>> Excerpts from Adam Young's message of 2015-11-23 20:21:47 -0800:
>>> On 11/23/2015 11:42 AM, Morgan Fainberg wrote:
>>>> Hi everyone,
>>>> 
>>>> This email is being written in the context of Keystone more than any 
>>>> other project but I strongly believe that other projects could benefit 
>>>> from a similar evaluation of the policy.
>>>> 
>>>> Most projects have a policy that prevents the following scenario (it 
>>>> is a social policy not enforced by code):
>>>> 
>>>> * Employee from Company A writes code
>>>> * Other Employee from Company A reviews code
>>>> * Third Employee from Company A reviews and approves code.
>>>> 
>>>> This policy has a lot of history as to why it was implemented. I am 
>>>> not going to dive into the depths of this history as that is the past 
>>>> and we should be looking forward. This type of policy is an actively 
>>>> distrustful policy. With exception of a few potentially bad actors 
>>>> (again, not going to point anyone out here), most of the folks in the 
>>>> community who have been given core status on a project are trusted to 
>>>> make good decisions about code and code quality. I would hope that 
>>>> any/all of the Cores would also standup to their management chain if 
>>>> they were asked to "just push code through" if they didn't sincerely 
>>>> think it was a positive addition to the code base.
>>>> 
>>>> Now within Keystone, we have a fair amount of diversity of core 
>>>> reviewers, but we each have our specialities and in some cases 
>>>> (notably KeystoneAuth and even KeystoneClient) getting the required 
>>>> diversity of reviews has significantly slowed/stagnated a number of 
>>>> reviews.
>>>> 
>>>> What I would like us to do is to move to a trustful policy. I can 
>>>> confidently say that company affiliation means very little to me when 
>>>> I was PTL and nominating someone for core. We should explore making a 
>>>> change to a trustful model, and allow for cores (regardless of company 
>>>> affiliation) review/approve code. I say this since we have clear steps 
>>>> to correct any abuses of this policy change.
>>>> 
>>>> With all that said, here is the proposal I would like to set forth:
>>>> 
>>>> 1. Code reviews still need 2x Core Reviewers (no change)
>>>> 2. Code can be developed by a member of the same company as both core 
>>>> reviewers (and approvers).
>>>> 3. If the trust that is being given via this new policy is violated, 
>>>> the code can [if needed], be reverted (we are using git here) and the 
>>>> actors in question can lose core status (PTL discretion) and the 
>>>> policy can be changed back to the "distrustful" model described above.
>>>> 
>>>> I hope that everyone weighs what it means within the community to 
>>>> start moving to a trusting-of-our-peers model. I think this would be a 
>>>> net win and I'm willing to bet that it will remove noticeable 
>>>> roadblocks [and even make it easier to have an organization work 
>>>> towards stability fixes when they have the resources dedicated to it].
>>>> 
>>>> Thanks for your time reading this.
>>> 
>>> So, having been one of the initial architects of said policy, I'd like 
>>> to reiterate what I felt at the time.  The policy is in place as mu

Re: [openstack-dev] [keystone] Diagnostic APIs for Keystone

2015-11-24 Thread Henry Nash
Some good ideas here, Adam.  I would think that some of the real “diagnostic 
APIs” might only be available via keystone-manage, rather than an exposed API.

Henry
> On 24 Nov 2015, at 03:07, Adam Young  wrote:
> 
> Figuring out what is or is not going to work when a user tries to perform an 
> operation in OpenStack can be frustrating.  I've had a few people ask me for 
> help specifically for configuring LDAP.  With Federation , things will get 
> better.  I mean Worse.
> 
> What kind of diagnostic tooling do we need?  I know the basics:
> 
> If I have a known good user in LDAP, can they authenticate?  This is the first 
> thing, and it can be done by asking for an unscoped token.
> 
> Once they have an unscoped token, can they get a scoped token?  Same as 
> before, but adding the project ID or domain name/project name to the token 
> request.
> 
> OK...what about if the users don't want to give you their password? With 
> LDAP, we can do OpenStack user show to see if the user is in the backend.  
> With Federation...not so much.
> 
> 
> Recently, I was trying to debug an issue where a server create failed due to 
> errors in the service to service communication; Neutron could not make the 
> call it needed to Nova due to the service user not having the Admin role.  
> The thing is, the service user was not an actual user, but rather a Service 
> principal authenticate via Kerberos.  I think this is an indicator of the 
> things to come.
> 
> 
> We need an API that will show, given a set of post-validated credentials, 
> communicated via Federation, what the token validation response will look 
> like.  We'll need user domain id, user id, project, roles, and service 
> catalog.
> 
> What else do we need diagnostically?  I know that setting up LDAP is 
> especially tricky, and multiple LDAP backends, added in live config, using 
> the Database backend, is going to be particularly painful to troubleshoot.  
> We need to be able to start with:
> 
> Is the LDAP account used to fetch users working properly?
> If not, what do the actual LDAP queries look like?  Ideally, something we 
> could pipe right into ldapsearch to confirm from the command line.
> 
> Then, take a real user, and act like they are trying to authenticate, list 
> the groups they should have, the roles they would be assigned, and the 
> service catalog.  We need this stuff piece by piece, to be able to 
> troubleshoot.
> 
> Is there anything I am missing here? We are not going to have the luxury of 
> cranking up logging and looking at data in a live running server;  My friend 
> Hippa Sarbanes-Oxley has told me point blank that is a no-go.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] HMT/Reseller - The Proposed Plan

2015-11-18 Thread Henry Nash
Hi

During our IRC meeting this week, we decided we needed a recap of this plan, so 
here goes:

Phase 0 (already merged in Liberty):

We already support a hierarchy of projects (using the parent_id attribute of 
the project entity). All the projects in a tree must be in the same domain. 
Role assignment inheritance is supported down the tree (either by assigning the 
role to the domain and have it inherited by the whole tree, or by assigning to 
a node in the project tree and having that assignment inherited by the sub-tree 
below that node).
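The Phase 0 inheritance behaviour can be modelled as: direct assignments on the project itself, plus any inherited assignments found on its ancestors (a simplified sketch, not the real keystone assignment engine):

```python
def effective_roles(project_id, parent_of, assignments):
    """Simplified model of Phase 0 inheritance: direct roles on the
    project, plus inherited roles assigned anywhere above it (an
    inherited assignment applies to the sub-tree, not the node itself)."""
    roles = {a["role"] for a in assignments
             if a["target"] == project_id and not a["inherited"]}
    node = parent_of.get(project_id)   # walk up towards the domain
    while node is not None:
        roles |= {a["role"] for a in assignments
                  if a["target"] == node and a["inherited"]}
        node = parent_of.get(node)
    return roles

# domain d1 -> project p1 -> project p2
parent_of = {"p1": "d1", "p2": "p1"}
assignments = [
    {"target": "d1", "role": "auditor", "inherited": True},   # whole tree
    {"target": "p1", "role": "dev", "inherited": True},       # p1's sub-tree
    {"target": "p2", "role": "ops", "inherited": False},      # p2 only
]
print(sorted(effective_roles("p2", parent_of, assignments)))
# ['auditor', 'dev', 'ops']
```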

Phase 1 (all code up for review for Mitaka):

Keep the existing conceptual model for domains, but store them as projects with 
a new attribute (is_domain=True).  The domain API remains (but accesses the 
project table instead), but you can also use the project API to create a domain 
(by setting is_domain=True). The is_domain attribute is immutable - i.e. you 
can’t flip a project between being a regular project and one acting as a 
domain. Projects acting as a domain have no parent (they are always top level 
domains/projects). Domain tokens can be deprecated in place of a project token 
scoped to a project acting as a domain (the token is augmented with the 
is_domain attribute so a policy rule can distinguish between a token on a 
domain and a regular project). This phase does not provide any support for 
resellers.

Phase 1 is covered by the two approved specs: HMT 
(https://review.openstack.org/#/c/139824, which actually covers Phases 1 and 2) 
and is_domain tokens (https://review.openstack.org/#/c/193543/)

Phase 2 (earlier versions of code were proposed for Liberty, need fixing up for 
Mitaka):

At the summit we agreed to re-examine Phase 2 to see if we could perhaps use 
federation instead to cover this use case. As outlined in my email to the list 
(http://lists.openstack.org/pipermail/openstack-dev/2015-October/078063.html), 
this does not provide the support required. Hence, as per that email, I am 
proposing we revert to the original specification (with restrictions), which is 
as follows:

Extend the concept of domains to allow a hierarchy of domains to support the 
reseller model (the requirements and specifications are in the same approved 
spec that covers Phase 1 above, https://review.openstack.org/#/c/139824). Given 
that we would already have Phase 1 in place, the actual changes in Phase 2 
would be as follows:

a) Since projects already have a parent_id attribute and domains are 
represented as projects with the is_domain attribute set to True, allow 
projects acting as domains to be nested (like regular projects).
b) The parent of a project acting as a domain must either be another project 
acting as a domain or None (i.e. a root domain). A regular project cannot act 
as a parent for a project acting as a domain. In effect, we allow a hierarchy 
of domains at the top of “the tree” and all regular project trees hang off 
those higher level domains.
c) Projects acting as domains cannot be the recipient of an inherited role 
assignment from their parent - i.e. we don’t inherit assignments between 
domains.
d) All domain names (i.e. project names for projects acting as domains) must 
still be unique (otherwise we break our current auth model).
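
Rule (b) above can be captured in a small predicate. A hedged sketch (not keystone code) of which parent/child combinations would be allowed:

```python
def parent_allowed(child_is_domain, parent_is_domain):
    """Phase 2 parenting rule, sketched.

    parent_is_domain is None when the child has no parent (a root).
    """
    if parent_is_domain is None:
        # only a project acting as a domain may sit at the root of the tree
        return child_is_domain
    if child_is_domain:
        # a domain may only hang off another project acting as a domain
        return parent_is_domain
    # regular projects may hang off domains or other regular projects
    return True
```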

Comments on the issues that people have raised with Phase 2:

i) It’s too complex!
Well, in terms of code changes, the majority of the changes are actually in 
Phase 0 and 1. We just use the underlying capabilities in Phase 2.

ii) Our restriction of “all domain names must be unique” means that one 
reseller could “squat” on a domain name and prevent anyone else from having a 
domain of that name.
This is absolutely true. In reality, however, reseller arrangements will be 
contractual, so if a cloud provider was really concerned about this, they could 
legislate against this in the contract with the reseller (e.g. "All the domains 
you create must be suffixed with your reseller name”). In addition, if we 
believe that federation will become the dominant auth model, then the physical 
name of the domain becomes less important.

iii) What’s this about name clashing?
Ok, so there are two different scenarios here:
> Since, today, a project can have the same name as its domain, this means 
> that when we convert to using projects acting as a domain, the same thing can 
> be true (so this is really a consequence of Phase 1). Although we could, in 
> theory, prevent this for any new domains being created, migrating over the 
> existing domains could always create this situation. However, it doesn’t 
> actually cause any issue to existing APIs, so I don’t think it’s anything to 
> be concerned about.
> In Phase 2, you could have the situation where a project acting as a domain 
> has child projects some of which are 

Re: [openstack-dev] [heat][keystone] How to handle request for global admin in policy.json?

2015-11-10 Thread Henry Nash
Steve,

Currently, your best option is to use something similar to the 
policy.v3cloudsample.json, where you basically “bless” a project (or domain) as 
being the “cloud admin project/domain”.  Having a role on that gives you 
super-powers.  The only trouble with this right now is that you have to paste 
the ID of your blessed project/domain into the policy file (you only have to do 
that once, of course) - basically you replace the “admin_domain_id” with the ID 
of your blessed project/domain.

What we are considering for Mitaka is to make this a bit more friendly, so you 
don’t have to modify the policy file - rather you define your “blessed project” 
in your config file, and tokens that are issued for this blessed project will 
have an extra attribute (e.g. “is_admin_project”), which your policy file can 
check for.
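
In policy terms the idea would look something like the fragment below. This is a sketch only - the attribute name and rules are illustrative of the proposal, not the final form, which was still being designed at the time:

```json
{
    "cloud_admin": "role:admin and is_admin_project:True",
    "owner": "user_id:%(user_id)s",
    "stacks:delete": "rule:owner or rule:cloud_admin"
}
```

The point is that no project/domain ID appears in the policy file; the "blessed" project is named once in keystone's config instead.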

Henry
> On 10 Nov 2015, at 09:50, Steven Hardy  wrote:
> 
> Hi all,
> 
> Seeking some guidance (particularly from the keystone folks) ref this bug:
> 
> https://bugs.launchpad.net/heat/+bug/1466694
> 
> tl;dr - Heat has historically been careful to make almost all requests
> scoped to exactly one project.  Being aware of the long-standing bug
> #968696, we've deliberately avoided using any global "is admin" flag
> derived from the admin role.
> 
> However, we're now being told this is operator hostile, and that we should
> provide an option for policy.json to enable global admin, because other
> projects do it.
> 
> What is the best-practice solution to this requirement?
> 
> I'm assuming (to avoid being added to bug #968696) that we must not enable
> global admin by default, but is it acceptable to support optional custom
> policy.json which defeats the default tenant scoping for a request?
> 
> For example, in policy.v3cloudsample.json[1] there are several options in
> terms of "admin-ness", including admin_required which appears to be the
> traditional global-admin based on the admin role.
> 
> It's quite confusing, are there any docs around best-practices for policy
> authors and/or what patterns services are supposed to support wrt policy?
> 
> I'm wondering if we should we be doing something like this in our default
> policy.json[2]?
> 
> "admin_required": "role:admin",
> "cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
> "owner" : "user_id:%(user_id)s or user_id:%(target.token.user_id)s",
> 
> "stacks:delete": "rule:owner or rule:cloud_admin"
> 
> I'm not yet quite clear where admin_domain_id is specified?
> 
> Any guidance or thoughts would be much appreciated - I'm keen to resolve
> this pain-point for operators, but not in a way that undermines the
> OpenStack-wide desire to move away from global-admin by default.
> 
> Thanks!
> 
> Steve
> 
> [1] 
> https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json
> [2] https://github.com/openstack/heat/blob/master/etc/heat/policy.json
> 





[openstack-dev] [keystone] Update on Reseller

2015-10-29 Thread Henry Nash
Hi

At the design summit we have had a number of discussions on how to complete the 
reseller functionality (original spec: https://review.openstack.org/#/c/139824). 
We all agreed that we should split the implementation into:

1) Refactor the way domains are stored, so that they are actually projects with 
the “is_domain” attribute.  At the end of this phase, all top level projects 
would be acting as domains. The domain API is not removed, but references the 
appropriate project.

2) Investigate alternatives to the original proposal of using nested projects 
acting as a domain to model the reseller use case. One proposed alternative was 
to try and use federated mapping to provide isolation between customers within 
a reseller.

The second part of this has undergone some intensive analysis over the last few 
days - here’s a summary of what we looked at:

This alternative proposal was intended to work like this;

1) The Cloud provider would create a domain for the reseller.
2) The reseller would on-board their customers by creating IdPs and the mapping 
rules that would land a given customer's users into a customer-specific 
project, or tree of projects, within the reseller's domain

A number of issues come out of this:

a) Most serious is how we provide sufficient isolation between the customers - 
the key being ensuring that admin actions a customer needs to carry out can be 
protected by generic policy file rules, written by the cloud provider without 
any knowledge of the specific reseller and their customers. Things that 
immediately seem hard in this area:
- CRUD of customer specific roles (which would be the analogy of what we had 
been calling domain specific roles)
- CRUD of the mapping rules for the customers' IdPs (i.e. a customer's admin 
would want to be able to change what groups/assignments would be used against 
attributes in the IdP's assertion)
One could imagine doing the above by having some kind of “special project” for 
each customer (although one could say that this is no different than that 
"special project” being a project with the “is_domain” flag!)

b) Today, at least, all project names within a domain must be unique - which 
would be overly restrictive in this case (since all customer projects of the 
reseller are in the same domain). So we'd need to move to the "project name 
must be unique within its parent" model - which we have discussed before (and 
solve the issues with referring to projects by name, etc.)

c) This solution really only works for one level of reseller (which would 
probably be OK for now, although is a concern for the future)

d) This solution only works if all a reseller's customers are ready to use a 
federated model, i.e. it won’t work if they want to use their corporate LDAP. 
Support for LDAP via the Apache plugin would help - but I think the issue is 
more one of the customer's operating model than whether you can technically 
federate with their LDAP.

After discussing this with a few of the other cores (including Morgan), it was 
agreed that you really should use a domain per customer to ensure we have the 
correct isolation. But perhaps we could just create all the domains at the top 
level (i.e. avoiding the need for nested domains)? Analysis of this throws up 
a few additional issues:

- How is it that we maintain some kind of link/ownership/breadcrumb-trail from 
a customer domain back to their reseller? We might need this to, for instance, 
ensure that reseller A can only see their own customers' domains, and not those 
of reseller B. Interestingly, this linkage did exist in the solution when each 
customer had a special project in the reseller’s domain.
- At some point in the future, we probably won’t want the domain names to be 
all globally unique - rather you would want them unique within the reseller. 
Probably not an issue to start, but eventually this might become a problem. 
It’s not clear how you would provide such a restriction with the domain names 
at the top level. 

It is possible we could somehow use role assignments to provide the above - but 
this seems tenuous at best. This leads us all the way back to the original 
proposal of nested projects acting as domains. This was designed to solve the 
problems above. However, I think there are a couple of reasons why this 
solution has seemed so complicated and concerning:

i) Trying to implement this in one go meant changing multiple concepts at once 
- doing this in two phases (as discussed) solves most of these issues.
ii) The original discussions on nested domains were all very general and 
theoretical. I don’t think it had been explained well enough that the ONLY 
thing being attempted with the nesting of projects acting as 
domains for reseller was the idea of ownership & segregation. We shouldn’t 
allow/attempt any of the other things that come with project hierarchies (e.g. 
inherited role 

Re: [openstack-dev] [keystone] PTL non-candidacy

2015-09-11 Thread Henry Nash
Gotta add my thanks as well…as I’m sure Dolph will attest, it’s a tough job - 
and we’ve been lucky to have people who have been prepared to put in the really 
significant effort  that is required to make both the role and the project 
successful!

To infinity and beyond….

Henry
> On 11 Sep 2015, at 05:08, Brad Topol  wrote:
> 
> Thank you Morgan for your outstanding leadership, tremendous effort, and your 
> dedication to OpenStack and Keystone in particular. It has been an absolute 
> pleasure getting to work with you these past few years. And I am looking 
> forward to working with you in your new role!!!
> 
> --Brad
> 
> 
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
> 
> 
> From: Morgan Fainberg 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 09/10/2015 05:44 PM
> Subject: [openstack-dev] [keystone] PTL non-candidacy
> 
> 
> 
> 
> As I outlined (briefly) in my recent announcement of changes
> (https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/)
> I will not be running for PTL of Keystone this next cycle (Mitaka). The 
> role of PTL is a difficult but extremely rewarding job. It has been amazing 
> to see both Keystone and OpenStack grow.
> 
> I am very pleased with the accomplishments of the Keystone development team 
> over the last year. We have seen improvements with Federation, 
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing, 
> releasing a dedicated authentication library, cross-project initiatives 
> around improving the Service Catalog, and much, much more. I want to thank 
> each and every contributor for the hard work that was put into Keystone and 
> its associated projects.
> 
> While I will be changing my focus to spend more time on the general needs of 
> OpenStack and working on the Public Cloud story, I am confident in those who 
> can, and will, step up to the challenges of leading development of Keystone 
> and the associated projects. I may be working across more projects, but you 
> can be assured I will be continuing to work hard to see the initiatives I 
> helped start through. I wish the best of luck to the next PTL.
> 
> I guess this is where I get to write a lot more code soon!
> 
> See you all (in person) in Tokyo!
> --Morgan



Re: [openstack-dev] FFE Request for completion of data driven assignment testing in Keystone

2015-09-04 Thread Henry Nash
Great, thanks.

Henry
> On 4 Sep 2015, at 09:17, Thierry Carrez  wrote:
> 
> Morgan Fainberg wrote:
>> 
>>>I would like to request an FFE for the remaining two patches that
>>>are already in review
>>>(https://review.openstack.org/#/c/153897/ and 
>>> https://review.openstack.org/#/c/154485/). 
>>>These contain only test code and no functional changes, and
>>>increase our test coverage - as well as enable other items to be
>>>re-use the list_role_assignment backend method.
>>> 
>>> Do we need a FFE for changes to tests?
>>> 
>> 
>> I would say "no". 
> 
> Right. Extra tests (or extra docs for that matter) don't count as a
> "feature" for the freeze. In particular it doesn't change the behavior
> of the software or invalidate testing that may have been conducted.
> 
> -- 
> Thierry Carrez (ttx)
> 
> 




[openstack-dev] FFE Request for moving inherited assignment to core in Keystone

2015-09-04 Thread Henry Nash
Keystone has, for a number of releases, supported the concept of inherited 
role assignments via the OS-INHERIT extension. At the Keystone mid-cycle we 
agreed that moving this to core was a good target for Liberty, but this was 
held up by the need for the data-driven testing to be in place 
(https://review.openstack.org/#/c/190996/).

Inherited roles are becoming an integral part of Keystone, especially with the 
move to hierarchical projects (which is core already) - and so moving 
inheritance to core makes a lot of sense. At the same time as the move, we want 
to tidy up the API (https://review.openstack.org/#/c/200434/) to be more 
consistent with project hierarchies (before the old API semantics get too 
widely used), although we will continue to support the old API via the 
extension for a number of cycles.

I would like to request an FFE for the move of inheritance to core.

Henry


[openstack-dev] FFE Request for completion of data driven assignment testing in Keystone

2015-09-03 Thread Henry Nash
The approved Keystone blueprint (https://review.openstack.org/#/c/190996/) for 
enhancing our testing of the assignment backend was split into 7 patches. 5 of 
these landed before the liberty-3 freeze, but two had not yet been approved.

I would like to request an FFE for the remaining two patches that are already 
in review (https://review.openstack.org/#/c/153897/ and 
https://review.openstack.org/#/c/154485/). These contain only test code and no 
functional changes, and increase our test coverage - as well as enabling other 
items to re-use the list_role_assignment backend method.

Henry


[openstack-dev] FFE Request for list role assignment in tree blueprint in Keystone

2015-09-03 Thread Henry Nash
The approved Keystone blueprint (https://review.openstack.org/#/c/187045/) was 
held up during the Liberty cycle by the need for the data-driven testing to be 
in place (https://review.openstack.org/#/c/190996/). The main implementation is 
already in review, and this adds a new API without modifying other existing 
APIs.

Given the narrowness of the addition being made (and that it is already well 
advanced in its implementation), I would like to request an FFE for the 
completion of this blueprint since this provides valuable administrative 
capabilities for enterprise customers in conjunction with the Keystone reseller 
capabilities.

Thanks

Henry


[openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-26 Thread Henry Nash
Hi

With keystone, we recently came across an issue in terms of the assumptions 
that the openstack client is making about the entities it can show - namely, 
it assumes all entities have a ‘name’ attribute (which is how the openstack 
show command works). It turns out that not all keystone entities have such an 
attribute (e.g. IdPs for federation) - often the ID is really the name. Is 
there already agreement across our APIs that all first-class entities should 
have a ‘name’ attribute? If we do, then we need to change keystone; if not, 
then we need to change the openstack client to not make this assumption (and 
perhaps allow some kind of per-entity definition of which attribute should be 
used for ‘show’).

A follow-on (and somewhat related) question is whether we have agreed 
standards for what should happen if someone provides an unrecognized filter in 
a list-entities API request at the http level (this is related since this is 
also the hole osc fell into with keystone - again, ‘name’ is not a recognized 
filter attribute). Currently keystone ignores filters it doesn’t understand (so 
if that was your only filter, you would get back all the entities). The 
alternative approach would of course be to return no entities if the filter is 
on an attribute we don’t recognize (or even issue a validation or bad-request 
exception). Again, the question is whether we have agreement across the 
projects for how such unrecognized filtering should be handled?
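
The two alternatives can be contrasted in a few lines. This is a generic sketch of the idea, not keystone's implementation - the recognized-filter set and entity shape are made up for illustration:

```python
RECOGNIZED_FILTERS = {"domain_id", "enabled"}

def list_entities(entities, filters, strict=False):
    """List dict-like entities, filtering on recognized attributes only.

    strict=False mimics the current behaviour described above (silently
    ignore unknown filters); strict=True rejects the request instead.
    """
    unknown = set(filters) - RECOGNIZED_FILTERS
    if unknown and strict:
        raise ValueError("unrecognized filter(s): %s" % sorted(unknown))
    known = {k: v for k, v in filters.items() if k in RECOGNIZED_FILTERS}
    return [e for e in entities
            if all(e.get(k) == v for k, v in known.items())]
```

With {'name': 'x'} as the only filter, the lenient mode returns every entity - exactly the surprise osc hit.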

Thanks

Henry
Keystone Core



Re: [openstack-dev] [Keystone] [Horizon] Pagination support for Identity dashboard entities

2015-08-14 Thread Henry Nash
So I was one of the keystone folks who looked at pagination (hell, I even had 
an implementation - and the framework for it still exists in keystone). 
However, I think it is true to say that there were as many people (external to 
keystone) who thought pagination was a bad idea as thought it was a good one. 
At the time, there was a drive to answer this debate cross-project so we would 
have a new consensus (as opposed to just assuming that what we did before 
should be replicated everywhere). I’m actually unclear if that happened. We 
then add the complication of federation, where keystone physically does not 
have access to the users (it only knows about users who are “active right 
now”, and even that is pretty tenuous).

As Morgan has outlined, although there are solutions to at least the 
“traditional LDAP” backed keystone…they aren’t very pretty - and don’t sit 
easily with a REST API.

It’s really a dichotomy - we have grown up thinking that keystone can serve up 
users and groups…whereas the future of large enterprise systems (where you 
might think you need pagination) is one where keystone will probably NOT have 
access to the users.

Henry

 On 14 Aug 2015, at 20:19, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 Pagination in ldap requires holding a cursor open. You would have to map the 
 requests to the same cursor each time. It costs memory and holds a client 
 connected to the LDAP server. In a REST API it is a bad idea. With regard to 
 searching, it can be done, but each query can return a different set of objects 
 (order is not guaranteed). It isn't straightforward. 
 
 To put it bluntly, we are working to push user management to the tools that 
 are better at this than keystone. The LDAP servers or AD have far better 
 tools than the keystone API. And federated users are managed externally as 
 well. The SQL table to manage users is not a good solution and we are making 
 strides to eliminate the need for even service users to exist here. 
 
 The question about roles and grants can be queried and appropriately 
 paginated/limited/searched (again same statement about resource for 
 project/domain where if it doesn't exist i wouldn't block it but it is likely 
 a mitaka target). 
 
 --Morgan
 
 Sent via mobile
 
 On Aug 14, 2015, at 09:42, Fox, Kevin M kevin@pnnl.gov wrote:
 
 Surely ldap supports some form of pagination/searching natively.  If any 
 storage system of users needs to scale up to large numbers of users, its 
 ldap...
 
 Thanks,
 Kevin
 From: Timur Sufiev [tsuf...@mirantis.com]
 Sent: Friday, August 14, 2015 9:20 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Pagination support for 
 Identity dashboard entities
 
 Morgan,
 
 Your reasoning is perfectly fine from the Keystone point of view. Yet I 
 believe this approach is harmful for both Horizon and the whole OpenStack 
 ecosystem. 
 
 It is harmful for the ecosystem, because it breaks API uniformity in one of 
 the few areas where this uniformity could be achieved. Imagine if Nova or 
 Cinder start saying the same thing: we have too much drivers/backends to 
 provide the uniform interface for all of them, let's delegate the choice of 
 handling them differently to our consumers. It'll propagate the knowledge 
 of different backends throughout the stack and it's obviously not good.
 
 Not having pagination on the Identity-Users page means that even with filtering 
 fully supported there will be problems. At least the first time the Users page, 
 with all the users piped from a production-grade LDAP through Keystone, is 
 shown in Horizon, it takes a lot of time to render them all (before an unhappy 
 admin has any chance to narrow the list), which eventually may result in the 
 connection being dropped by some HA balancer. We did these kinds of tests; the 
 results weren't reassuring. Well, I might have missed some of the new Horizon 
 angularization steps, so please regard this paragraph as my personal opinion - 
 I don't think Horizon could be lightning fast on its own (i.e. without 
 additional services) with a lot of data and no pagination.
 
 On Fri, Aug 14, 2015 at 6:03 PM Morgan Fainberg morgan.fainb...@gmail.com wrote:
 For the identity (users and groups) backend as long as we support LDAP (and 
 as side note federated users never show up in this list anyway) and with the 
 drive towards pushing all user management out of keystone itself to ldap or 
 other tools that do it better, I don't see pagination as something we should 
 be providing. Providing an inconsistent user experience based on leaking 
 underlying implementation details is something I am very against. This 
 stance ensures that horizon and other tools like it will not need to know 
 underlying implementation details to provide a consistent user experience. 
 Unfortunately here we do need to cater to the 
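
For contrast with the LDAP cursor problem Morgan describes above, marker/limit pagination is the stateless pattern REST APIs usually adopt when the backend can produce a stable ordering. A hypothetical sketch (not keystone code):

```python
def paginate(items, marker=None, limit=3):
    """Return the page of items after the one whose id equals marker.

    The client passes back the last id it saw; no server-side cursor is
    held between requests. This requires a stable, repeatable ordering -
    exactly what an LDAP backend cannot guarantee.
    """
    start = 0
    if marker is not None:
        ids = [item["id"] for item in items]
        start = ids.index(marker) + 1
    return items[start:start + limit]
```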

Re: [openstack-dev] Hyper-V 2008 R2 support

2015-08-09 Thread Henry Nash
Hi

So adding a deprecation warning but saying “the code is there but not tested” 
in Liberty isn’t really doing it right. The deprecation warning should come in 
a release where the code is still tested and working (so there is no danger of 
breaking customers), but users are warned that they need to change what they 
are doing by a certain release, or it may no longer work.

Henry
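
Henry's timing point - warn while the code path still works, remove a cycle later - is the standard pattern. A minimal sketch (function and version names are illustrative, not the actual driver code):

```python
import warnings

def load_hyperv_driver(os_version):
    """Illustrative driver entry point with a one-cycle deprecation warning."""
    if os_version == "2008r2":
        # Emitted in the release where 2008 R2 is still tested and working,
        # giving operators a full cycle to migrate before removal.
        warnings.warn(
            "Hyper-V / Windows Server 2008 R2 support is deprecated and "
            "will be removed in a future release; please move to 2012 or "
            "later",
            DeprecationWarning)
    # ... continue driver initialisation ...
```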
 On 4 Aug 2015, at 16:28, Alessandro Pilotti apilo...@cloudbasesolutions.com 
 wrote:
 
 
 On 04 Aug 2015, at 17:56, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Tue, Aug 04, 2015 at 02:34:19PM +, Alessandro Pilotti wrote:
 Hi guys,
 
 Just a quick note on the Windows editions support matrix updates for the 
 Nova
 Hyper-V driver and Neutron networking-hyperv ML2 agent:  
 
 We are planning to drop legacy Windows Server / Hyper-V Server 2008 R2 
 support
 starting with Liberty.
 
 Windows Server / Hyper-V Server 2012 and above will continue to be 
 supported.
 
 What do you mean precisely by drop support here ?  Are you merely no longer
 testing it, or is Nova actually broken with Hyper-V 2k8 R2  in Liberty ?
 
 Generally if we intend to drop a hypervisor platform we'd expect to have a
 deprecation period for 1 cycle where Nova would print out a warning message
 on startup to alert administrators if using the platform that is intended
 to be dropped. This gives them time to plan a move to a newer platform
 before we drop the support.
 
 The plan is to move the Hyper-V specific code to a new Oslo project during the
 early M cycle and as part of the move the 2008 R2 specific code will be 
 dropped.
 This refers to the OS specific interaction layer (the *utils modules)
 which are mostly shared across multiple projects (nova, cinder,
 networking-hyperv, ceilometer, etc).
 
 Contextually the corresponding code will be proposed for removal in Nova,
 replacing it with the new oslo dependency.
 
 The 2008 R2 code will still be available in Liberty, although untested.
 A deprecation warning can surely be added to the logs.
 
 Alessandro
 
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


Re: [openstack-dev] [keystone] LDAP identity driver with groups from local DB

2015-07-24 Thread Henry Nash
Matt,

Your hybrid driver seems to be doing something different from what Julian was 
asking for - namely providing some “automatic role assignments” for users stored in 
LDAP (unless I am misunderstanding your patch)?  I guess you could argue 
that’s a restricted version of being able to create group memberships outside 
of LDAP (which, Julian, is what I think you are asking for…), but it is probably a 
somewhat different use case?

Henry
 On 24 Jul 2015, at 05:51, Matt Fischer m...@mattfischer.com wrote:
 
 Julian,
 
 You want this hybrid backend driver. Bind against LDAP for auth, store 
 everything else in mysql:
 
 https://github.com/SUSE-Cloud/keystone-hybrid-backend
 
 We maintain our own fork which has a few small differences. I do not use the 
 assignment portion of the driver and I'm not sure anyone does, so keep that in 
 mind.
 
 I know some of the Keystone team has pretty strong opinions about this but it 
 works for us.
 
 And nice to run into you again...
 
 On Thu, Jul 23, 2015 at 10:00 PM, Julian Edwards bigjo...@gmail.com wrote:
 Hello,
 
 I am relatively new to Openstack and Keystone so please forgive me any
 crazy misunderstandings here.
 
 One of the problems with the existing LDAP Identity driver that I see
 is that for group management it needs write access to the LDAP server,
 or requires an LDAP admin to set up groups separately.
 
 Neither of these are palatable to some larger users with corporate
 LDAP directories, so I'm interested in discussing a solution that
 would get acceptance from core devs.
 
 My initial thoughts are to create a new driver that would store groups
 and their user memberships in the local keystone database, while
 continuing to rely on LDAP for user authentication. The advantages of
 this would be that the standard UI tools could continue to work for
 group manipulation.  This is somewhat parallel with ephemeral
 federated user group mappings, but that's all done in the json blob
 which is a bit horrible. (I'd like to see that working with a decent
 UI some time, perhaps it is solved in the same way)
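Julian's idea — authenticate against LDAP but keep group membership in the local database — can be sketched as a facade that routes each call to the appropriate backend. The class and method names below are purely illustrative stand-ins, not Keystone's actual driver interface:

```python
class LDAPUserBackend:
    """Stand-in for a read-only LDAP identity store (illustrative)."""

    def __init__(self, users):
        self._users = users  # {user name: password}

    def authenticate(self, name, password):
        return self._users.get(name) == password


class SQLGroupBackend:
    """Stand-in for a group store kept in the local keystone DB."""

    def __init__(self):
        self._groups = {}  # {group name: set of user names}

    def create_group(self, group):
        self._groups.setdefault(group, set())

    def add_user_to_group(self, user, group):
        self._groups[group].add(user)

    def list_groups_for_user(self, user):
        return sorted(g for g, members in self._groups.items()
                      if user in members)


class HybridIdentity:
    """Authenticate against LDAP; manage groups in the local database."""

    def __init__(self, ldap_backend, sql_backend):
        self._ldap = ldap_backend
        self._sql = sql_backend

    def authenticate(self, name, password):
        # Only authentication ever touches LDAP (read-only).
        return self._ldap.authenticate(name, password)

    def create_group(self, group):
        self._sql.create_group(group)

    def add_user_to_group(self, user, group):
        self._sql.add_user_to_group(user, group)

    def list_groups_for_user(self, user):
        return self._sql.list_groups_for_user(user)
```

A real driver would subclass Keystone's identity driver base class and delegate the same way; the point is simply that only authentication needs read access to LDAP, so no write access or LDAP admin involvement is required for group management.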
 
 However, one of the other reasons I'm sending this is to gather more
 ideas to solve this. I'd like to hear from anyone in a similar
 position, and anyone with input on how to help.
 
 Cheers,
 Julian.
 


[openstack-dev] [keystone] Request for spec freeze exception

2015-07-17 Thread Henry Nash
Hi

Role assignment inheritance has been an extension in Keystone for a number of 
cycles.  With the introduction of project hierarchies (which also support 
assignment inheritance), we’d like to move inheritance into core.

At the same time as the move to core, we’d like to modify the way inheritance 
rules are applied.  When assignment inheritance was first introduced, it was 
designed for domain-project inheritance - and under that model, the rule was 
that an inherited role assigned to the domain would be applied to all the 
projects in the domain, but not to the domain itself.  Now that we have 
generalised project hierarchies, this seems to make a lot less sense…and the 
more standard model of the assignment being applied to its target and all of the 
target's sub-projects makes more sense.

The proposal is, therefore, that the API we support in core (which is, by 
definition, different from the one in OS-INHERIT extension), will support this 
new model, but we will, for backward compatibility, continue to support the old 
extension (with the old model of inheritance), marked as deprecate, with 
removal no sooner than 4 cycles. Although probably not recommended from an 
operations usability point of view, there would be no issue mixing and matching 
assignments both from the core API and the extension API, during the 
deprecation period.
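The difference between the two inheritance models can be made concrete with a small helper that expands an inherited assignment over a project tree. This is a sketch only — real Keystone resolves this inside the assignment backend:

```python
def effective_targets(tree, target, include_target):
    """Expand an inherited role assignment on `target` to the projects it covers.

    tree: {node: parent or None}.  include_target selects the model:
    False = old OS-INHERIT rule (sub-projects only, not the target itself),
    True  = proposed core rule (the target plus all of its sub-projects).
    """
    def descendants(node):
        kids = [n for n, p in tree.items() if p == node]
        out = []
        for k in kids:
            out.append(k)
            out.extend(descendants(k))
        return out

    covered = descendants(target)
    if include_target:
        covered = [target] + covered
    return covered
```

Under the old rule an inherited role on domain A covers its projects but not A itself; under the proposed core rule it covers A and everything beneath it.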

The spec for this change can be found here:  
https://review.openstack.org/#/c/200434

At the next keystone IRC meeting, I’d like to discuss granting a spec freeze 
exception to allow this move to core.

Henry



Re: [openstack-dev] [keystone] Liberty SFE Request - Dynamic Policies

2015-07-13 Thread Henry Nash
So although I am in favor of approving some of this, it isn’t quite clear which 
portions of “Dynamic Policy” are being asked for in this exception.  We need to be 
clear exactly which bps we are talking about here, since there is a lot under 
that umbrella.

Henry
 On 13 Jul 2015, at 19:20, Yee, Guang guang@hp.com wrote:
 
 ++!
  
 Per my understanding, the work, and therefore the risks, are fairly 
 compartmentalized. The upside is this will pave the way for a much richer 
 authorization management system. 
  
  
 Guang
  
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Monday, July 13, 2015 10:15 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [keystone] Liberty SFE Request - Dynamic Policies
  
 On 07/03/2015 08:36 AM, Samuel de Medeiros Queiroz wrote:
 Hi Thierry,
 
 Thanks for clarifying. A Spec Freeze Exception is what I was supposed to ask 
 for.
 
 Rectifying:
 
 On behalf of the team working on the Dynamic Policies subject, I would like 
 to ask for a *Spec Freeze Exception* in Liberty for it.
 
 This one is an important lead-in to a lot of other work. Getting just this in 
 to Liberty allows us to focus the remainder of the work on Dynamic Policy 
 inside Keystone.
 
 Please approve.
 
 
 
 
 Thanks,
 Samuel de Medeiros Queiroz
 
 
 Thierry Carrez wrote:
 samuel wrote:
 [...]
 On behalf of the team working on the Dynamic Policies subject, I would
 like to ask for a Feature Freeze Exception in Liberty for it.
 Liberty Feature Freeze is on September 3rd, so I doubt you need a
 feature freeze exception at this time. I suspect that would be a spec
 freeze exception or some other Keystone-specific freeze exception ?
  
 https://wiki.openstack.org/wiki/FeatureFreeze
  
 
 
 
 


Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Henry Nash
The one proviso is that in single-LDAP situations, the cloud provider can choose 
(for backward compatibility reasons) to allow the underlying LDAP user/group 
ID…so we might want to advise that this be disabled (there’s a config switch to 
use the Public ID mapping even in this case).

Henry
 On 5 Jun 2015, at 18:19, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Fri, Jun 5, 2015 at 11:50 AM, Henry Nash henry.n...@uk.ibm.com wrote:
 So I think that GroupIDs are actually unique and safe, since in the multi-LDAP 
 case we provide an indirection already in Keystone and issue a Public 
 ID (this is true for both users and groups) that we map to the underlying 
 local ID in the particular LDAP backend.
 
 Oh, awesome! I didn't realize we did that for groups as well. So then, we're 
 safe exposing X-Group-Ids to services via keystonemiddleware.auth_token but 
 still not X-Group-Names (in any trivial form).
  
 
 
 Henry 
 
 
 From: Dolph Mathews dolph.math...@gmail.com
 To:   OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Henry Nash hen...@linux.vnet.ibm.com, 
 Henry Nash/UK/IBM@IBMGB
 Date: 05/06/2015 15:38
 Subject:  Re: [openstack-dev] [keystone][barbican] Regarding exposing 
 X-Group- in token validation
 
 
 
 
 
 On Thu, Jun 4, 2015 at 10:17 PM, John Wood john.w...@rackspace.com wrote: 
 Hello folks, 
 
 Regarding option C, if group IDs are unique within a given cloud/context, and 
 these are discoverable by clients that can then set the ACL on a secret in 
 Barbican, then that seems like a viable option to me. As it is now, the user 
 information provided to the ACL is the user ID as found in 
 X-User-Ids, not user names.  
 
 To Kevin’s point though, are these group IDs unique across domains now, or in 
 the future? If not the more complex tuples suggested could be used, but seem 
 more error prone to configure on an ACL. 
 
 Well, that's a good question, because that depends on the backend, and our 
 backend architecture has recently gotten very complicated in this area. 
 
 If groups are backed by SQL, then they're going to be globally unique UUIDs, 
 so the answer is always yes. 
 
 If they're backed by LDAP, then actually it depends on LDAP, but the answer 
 should be yes. 
 
 But the nightmare scenario we now support is domain-specific identity 
 drivers, where each domain can actually be configured to talk to a different 
 LDAP server. In that case, I don't think you can make any guarantees about 
 group ID uniqueness :( Instead, each domain could provide whatever IDs it 
 wants, and those might conflict with those of other domains. We have a 
 workaround for a similar issue with user IDs, but it hasn't been applied to 
 groups, leaving them quite broken in this scenario. I'd consider this to be 
 an issue we need to solve in Keystone, though, not something other projects 
 need to worry about. I'm hoping Henry Nash can chime in and correct me! 
   
 
 Thanks, 
 John 
 
 From: Fox, Kevin M kevin@pnnl.gov
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, June 4, 2015 at 6:01 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
 X-Group- in token validation 
 
 In Juno I tried adding a user in Domain A to a group in Domain B. That 
 currently is not supported. Would be very handy though.
 
 We're getting a ways from the original part of the thread, so I may have lost 
 some context, but I think the original question was whether barbican can add 
 group names to their resource ACLs.
 
 Since two administrative domains can issue the same group name, it's not safe, 
 I believe.
 
 Simply ensuring the group name is associated with a user and that the domain for 
 the user matches the domain for the group wouldn't work, because someone with 
 control of their own domain can just make a user and give them the group with 
 the name they want and come take your credentials.
 
 What may be safe is for the barbican ACL to contain the group_id, if they are 
 unique across all domains, or take a domain_id + group_name pair for the 
 ACL.
 
 Thanks,
 Kevin
 
 
 From: Dolph Mathews [dolph.math...@gmail.com]
 Sent: Thursday, June 04, 2015 1:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
 X-Group- in token validation
 
 Problem! In writing a spec for this ( 
 https://review.openstack.org/#/c/188564/

Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-05 Thread Henry Nash
I am sure I have missed something along the way, but can someone explain to me 
why we need this at all?  Project names are unique within a domain, with the 
exception of the project that is acting as its domain (i.e. there can only ever 
be two names clashing in a hierarchy, at the domain level and below).  So why 
isn’t specifying “is_domain=True/False” sufficient in an auth scope, along with 
the project name?
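For illustration, the auth scope Henry has in mind might look something like this (a sketch only — the exact payload shape for `is_domain` was precisely what was still being settled at the time):

```json
{
    "scope": {
        "project": {
            "name": "A",
            "is_domain": false,
            "domain": {"name": "A"}
        }
    }
}
```

Here `is_domain=false` would select the regular project named A, while `is_domain=true` would select the project acting as domain A — the only two names that can clash.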

Henry

 On 5 Jun 2015, at 18:02, Adam Young ayo...@redhat.com wrote:
 
 On 06/03/2015 05:05 PM, Morgan Fainberg wrote:
 Hi David,
 
 There needs to be some form of global hierarchy delimiter - or, more to the 
 point, there should be a common one across OpenStack installations to ensure 
 we are providing a good, consistent (and, above all, inter-operable) 
 experience to our users. I'm worried a custom-defined delimiter (even at the 
 domain level) is going to make it difficult to consume this data outside of 
 the context of OpenStack (there are applications that are written to use the 
 APIs directly).
 We have one already.  We are working in JSON, and so instead of project name 
 being a string, it can be an array.
 
 Nothing else is backwards compatible.  Nothing else will ensure we don't 
 break existing deployments.
 
 Moving forward, we should support DNS notation, but it has to be an opt-in.
 
 
 The alternative is to explicitly list the delimiter in the project (e.g. 
 {"hierarchy": {"delim": ".", "name": "domain.project.project2"}}). The 
 additional need to look up / set the delimiter when creating a domain is 
 likely to make for a worse user experience than selecting one that is not 
 different across installations.
 
 --Morgan
 
 On Wed, Jun 3, 2015 at 12:19 PM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 On 03/06/2015 14:54, Henrique Truta wrote:
  Hi David,
 
  You mean creating some kind of delimiter attribute in the domain
  entity? That seems like a good idea, although it does not solve the
  problem Morgan mentioned, namely the global hierarchy delimiter.
 
 There would be no global hierarchy delimiter. Each domain would define
 its own and this would be carried in the JSON as a separate parameter so
 that the recipient can tell how to parse hierarchical names
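David's suggestion — carry the delimiter alongside the name — makes parsing trivial for the recipient. A sketch (the payload keys are illustrative, not a settled API):

```python
def parse_hierarchy(scope):
    """Split a hierarchical project name using the per-domain delimiter
    carried in the same JSON payload, so the recipient never has to guess
    or look up what the delimiter is for that domain."""
    return scope["name"].split(scope["delim"])
```

A domain that chose "/" as its delimiter would carry `{"delim": "/", "name": "domain/project"}` and parse the same way, which is the locale flexibility David is after.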
 
 David
 
 
  Henrique
 
  Em qua, 3 de jun de 2015 às 04:21, David Chadwick
  d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk 
  mailto:d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk 
  escreveu:
 
 
 
  On 02/06/2015 23:34, Morgan Fainberg wrote:
   Hi Henrique,
  
   I don't think we need to specifically call out that we want a
  domain, we
   should always reference the namespace as we do today. Basically, if 
  we
   ask for a project name we need to also provide its namespace (your
   option #1). This clearly lines up with how we handle projects in
  domains
   today.
  
   I would, however, focus on how to represent the namespace in a single
   (usable) string. We've been delaying the work on this for a while
  since
   we have historically not provided a clear way to delimit the
  hierarchy.
   If we solve the issue with what is the delimiter between domain,
   project, and subdomain/subproject, we end up solving the usability
 
  why not allow the top level domain/project to define the delimiter for
  its tree, and to carry the delimiter in the JSON as a new parameter.
  That provides full flexibility for all languages and locales
 
  David
 
   issues with proposal #1, and not breaking the current behavior you'd
   expect with implementing option #2 (which at face value feels to
  be API
   incompatible/break of current behavior).
  
   Cheers,
   --Morgan
  
    On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta
    henriquecostatr...@gmail.com wrote:
  
   Hi folks,
  
  
    In Reseller[1], we’ll have the domains concept merged into projects,
    which means that we will have projects that will behave as domains.
   Therefore, it will be possible to have two projects with the same
   name in a hierarchy, one being a domain and another being a
  regular
   project. For instance, the following hierarchy will be valid:
  
   A - is_domain project, with domain A
  
   |
  
   B - project
  
   |
  
   A - project with domain A
  
  
    That hierarchy faces a problem when a user requests a project-scoped
    token by name, since she’ll pass “domain = ‘A’” and
    project.name

Re: [openstack-dev] [keystone][barbican] Regarding exposing X-Group-xxxx in token validation

2015-06-05 Thread Henry Nash
So I think that GroupIDs are actually unique and safe, since in the multi-LDAP 
case we provide an indirection already in Keystone and issue a Public ID 
(this is true for BOTH users and groups) that we map to the underlying local 
ID in the particular LDAP backend. 
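The indirection Henry refers to can be sketched as a deterministic mapping from (domain, local ID) to a public ID. Keystone's real implementation keeps a mapping table with a pluggable ID generator; the hash below is only an illustration of why two LDAP backends handing out the same local ID still yield distinct public IDs:

```python
import hashlib


def public_id(domain_id, local_id):
    """Derive a globally unique public ID from a backend-local ID.

    Namespacing the local ID with its owning domain before hashing keeps
    public IDs distinct even when two domain-specific LDAP backends hand
    out identical local IDs (illustrative sketch only).
    """
    return hashlib.sha256(
        (domain_id + ":" + local_id).encode("utf-8")).hexdigest()
```

The same local ID in two different domains maps to two different public IDs, which is exactly the uniqueness property the barbican ACL discussion depends on.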

Henry

 On 5 Jun 2015, at 15:37, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Thu, Jun 4, 2015 at 10:17 PM, John Wood john.w...@rackspace.com wrote:
 Hello folks,
 
 Regarding option C, if group IDs are unique within a given cloud/context, and 
 these are discoverable by clients that can then set the ACL on a secret in 
 Barbican, then that seems like a viable option to me. As it is now, the user 
 information provided to the ACL is the user ID as found in 
 X-User-Ids, not user names. 
 
 To Kevin’s point though, are these group IDs unique across domains now, or in 
 the future? If not the more complex tuples suggested could be used, but seem 
 more error prone to configure on an ACL.
 
 Well, that's a good question, because that depends on the backend, and our 
 backend architecture has recently gotten very complicated in this area.
 
 If groups are backed by SQL, then they're going to be globally unique UUIDs, 
 so the answer is always yes.
 
 If they're backed by LDAP, then actually it depends on LDAP, but the answer 
 should be yes.
 
 But the nightmare scenario we now support is domain-specific identity 
 drivers, where each domain can actually be configured to talk to a different 
 LDAP server. In that case, I don't think you can make any guarantees about 
 group ID uniqueness :( Instead, each domain could provide whatever IDs it 
 wants, and those might conflict with those of other domains. We have a 
 workaround for a similar issue with user IDs, but it hasn't been applied to 
 groups, leaving them quite broken in this scenario. I'd consider this to be 
 an issue we need to solve in Keystone, though, not something other projects 
 need to worry about. I'm hoping Henry Nash can chime in and correct me!
  
 
 Thanks,
 John
 
 From: Fox, Kevin M kevin@pnnl.gov
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, June 4, 2015 at 6:01 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
 X-Group- in token validation
 
 In Juno I tried adding a user in Domain A to a group in Domain B. That 
 currently is not supported. Would be very handy though.
 
 We're getting a ways from the original part of the thread, so I may have lost 
 some context, but I think the original question was whether barbican can add 
 group names to their resource ACLs.
 
 Since two administrative domains can issue the same group name, it's not safe, 
 I believe.
 
 Simply ensuring the group name is associated with a user and that the domain for 
 the user matches the domain for the group wouldn't work, because someone with 
 control of their own domain can just make a user and give them the group with 
 the name they want and come take your credentials.
 
 What may be safe is for the barbican ACL to contain the group_id, if they are 
 unique across all domains, or take a domain_id + group_name pair for the 
 ACL.
 
 Thanks,
 Kevin
 
 From: Dolph Mathews [dolph.math...@gmail.com]
 Sent: Thursday, June 04, 2015 1:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone][barbican] Regarding exposing 
 X-Group- in token validation
 
 Problem! In writing a spec for this ( 
 https://review.openstack.org/#/c/188564/ ), I remembered that groups are 
 domain-specific entities, which complicates the problem of providing 
 X-Group-Names via middleware.
 
 The problem is that we can't simply expose X-Group-Names to underlying 
 services without either A) making a well-documented assumption about the ONE 
 owning domain scope of ALL included groups, B) passing significantly more 
 data to underlying services than just a list of names (a domain scope for 
 every group), C) passing only globally-unique group IDs (services would then 
 have to retrieve additional details about each from keystone if they so 
 cared).
 
 Option A) More specifically, keystone could opt to enumerate the groups that 
 belong to the same domain as the user. In this case, it'd probably make more 
 sense from an API perspective if the groups enumeration were part of the 
 user resources in the token response body (the user object already has a 
 containing domain ID. That means that IF a user were to be assigned a group 
 membership in another domain (assuming we didn't move to disallowing that 
 behavior at some

[openstack-dev] [keystone] On dynamic policy, role hierarchies/groups/sets etc.

2015-05-05 Thread Henry Nash
We’ve been discussing changes to these areas for a while - and although I think 
there is general agreement among the keystone cores that we need to change 
*something*, we’ve been struggling to get agreement on exactly how.  So to try 
and ground the discussion that will (I am sure) occur in Vancouver, here’s an 
attempt to take a step back, look at what we have now, as well as where, 
perhaps, we want to get to.

The core functionality all this relates to is how keystone 
policy allows checking whether a given API call to an OpenStack service 
should be allowed to take place. Within OpenStack this is a two-step 
process for an API caller….1) Get yourself a token by authentication and 
getting authorised for a particular scope (e.g. a given project), and then 2) 
Use that token as part of your API call to the service you are interested in. 
Assuming you do, indeed, have the rights to execute this API, somehow steps 1) 
and 2) give the policy engine enough info to say yes or no.

So first, how does this work today, and (conceptually) how should we describe 
it?  Well, first of all, strictly speaking we don’t control access at the raw 
API level.  Instead, each service defines a series of “capabilities” (which 
usually, but not always, map one-to-one with an API call).  These capabilities 
represent the finest-grained access control we support via the policy engine. 
Now, in theory, the most transparent way we could have implemented steps 1) and 
2) above would have been to say that users should be assigned capabilities to 
projects….and then those capabilities would be placed in the token….allowing 
the policy engine to check if they match what is needed for a given capability 
to be executed. We didn’t do that since, a) this would probably end up being 
very laborious for the administrator (there would be lots of capabilities any 
given user would need), and b) the tokens would get very big storing all those 
capabilities. Instead, it was recognised that, usually, there are sets of these 
capabilities that nearly always go together - so instead let’s allow the 
creation of such sets….and we’ll assign those to users instead. So far, so 
good. What is perhaps unusual is how this was implemented. These capability 
sets are, today, called Roles…but rather than having a role definition that 
describes the capabilities represented by that role, roles are just 
labels - which can be assigned to users/projects and get placed in tokens.  
The expansion to capabilities happens through the definition of a json policy 
file (one for each service) which must be processed by the policy engine in 
order to work out whether the roles in a token and the role-capability 
mapping mean that a given API call can go ahead. This implementation leads to a 
number of issues (these have all been raised by others; I am just pulling them 
together here):

i) The role-capability mapping is rather static. Until recently it had to be 
stored in service-specific files pushed out to the service nodes out-of-band. 
Keystone does now provide some REST APIs to store and retrieve whole policy 
files, but these are a) coarse-grained and b) not really used by services 
yet anyway.

ii) As more and more clouds become multi-customer (i.e. a cloud provider 
hosting multiple companies on a single OpenStack installation), cloud providers 
will want to allow those customers to administer “their bit of the cloud”. 
Keystone uses the Domains concept to allow a cloud provider to create a 
namespace for a customer to create their own projects, users and groups….and 
there is a version of the keystone policy file that allows a cloud provider to 
effectively delegate management of these items to an administrator of that 
customer (sometimes called a domain administrator).  However, Roles are not 
part of that namespace - they exist in a global namespace (within a keystone 
installation). Diverse customers may have different interpretations of what a 
“VM admin” or a “net admin” should be allowed to do for their bit of the cloud 
- but  right now that differentiation is hard to provide. We have no support 
for roles or policy that are domain specific.

iii) Although, as stated in ii) above, you can write a policy file that 
differentiates between various levels of admin, or fine-tunes access to certain 
capabilities, the reality is that doing this is pretty unintuitive. The 
structure of a policy.json file that tries to do this is, indeed, complex (see 
Keystone’s as an example: 
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json).
Adding more capability to this will likely only make the situation worse.
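For reference, the role-to-capability expansion lives in each service's policy.json. A minimal illustration of the kind of rules iii) is describing (rule names loosely modeled on the v3cloudsample file; not a complete or authoritative policy):

```json
{
    "admin_required": "role:admin",
    "cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
    "domain_admin": "rule:admin_required and domain_id:%(domain_id)s",
    "identity:create_project": "rule:cloud_admin or rule:domain_admin",
    "identity:delete_project": "rule:cloud_admin"
}
```

Even this tiny fragment shows the problem: the distinction between a cloud admin and a domain admin is encoded in string-matching rules rather than anything first-class, and every extra level of differentiation multiplies the rules.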

We have a number of specs taking shape to try and address the above (a number 
of them competing), so I wanted to propose a set of guidelines for these:

a) Making the policy centrally sourced (i.e. in 

Re: [openstack-dev] [Keystone] SQLite support (migrations, work-arounds, and more), is it worth it?

2015-04-03 Thread Henry Nash
Fully support this.  I, for one, volunteer to take on a lot of the work needed 
to clean up our tests/environment to allow this to happen. Hardly a month 
goes by without a fix having to be re-applied to our sql code to get round some 
problem that didn’t show up in original testing because SQLite is too 
promiscuous.
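Henry's “promiscuous” point is easy to demonstrate with nothing but the standard library — SQLite's dynamic typing will happily store a string in an INTEGER column, so a type bug that MySQL or PostgreSQL (in strict mode) would reject passes silently in unit tests:

```python
import sqlite3


def sqlite_accepts_wrong_type():
    """Show that SQLite stores a non-numeric string in an INTEGER column.

    Column types in SQLite are affinities, not constraints: a value that
    cannot be converted to an integer is simply kept as text.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (n INTEGER)")
    conn.execute("INSERT INTO t (n) VALUES (?)", ("not-a-number",))
    value = conn.execute("SELECT n FROM t").fetchone()[0]
    conn.close()
    return value  # the string comes back untouched
```

This is exactly the class of problem that "didn’t show up in original testing": the insert succeeds under SQLite and only fails once the same migration or query runs against a real production database.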

Henry
 On 4 Apr 2015, at 01:55, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 I am looking forward to the Liberty cycle and seeing the special casing we do 
 for SQLite in our migrations (and elsewhere). My inclination is that we 
 should (similar to the deprecation of eventlet) deprecate support for SQLite 
 in Keystone. In Liberty we will have a full functional test suite that can 
 (and will) be used to validate everything against much more real environments 
 instead of in-process “eventlet-like” test-keystone-services; the “Restful 
 test cases” will no longer be part of the standard unit tests (as they are 
 functional testing). With this change, I’m inclined to say that, SQLite being 
 the non-production-usable DB, we should look at dropping migration support for 
 SQLite and the custom work-arounds.
 
 Most deployers and developers (as far as I know) use devstack and MySQL or 
 Postgres to really suss out DB interactions.
 
 I am looking for feedback from the community on the general stance for 
 SQLite, and more specifically the benefit (if any) of supporting it in 
 Keystone.
 
 -- 
 Morgan Fainberg


Re: [openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-18 Thread Henry Nash
Rich,

Yes, I am adding this ability to the keystone client library and then to osc.

Henry

 On 17 Mar 2015, at 20:17, Rich Megginson rmegg...@redhat.com wrote:
 
 On 03/17/2015 01:26 PM, Henry Nash wrote:
 
 Hi
 
 Prior to Kilo, Keystone supported the ability for its Identity backends to 
 be specified on a domain-by-domain basis - primarily so that different 
 domains could be backed by different LDAP servers. In this previous support, 
 you defined the domain-specific configuration options in a separate config 
 file (one for each domain that was not using the default options). While 
 functional, this can make onboarding new domains somewhat problematic, since 
 you need to create the domains via REST and then create a config file and 
 push it out to the keystone server (and restart the server). As part of the 
 Keystone Kilo release we are supporting the ability to manage these 
 domain-specific configuration options via REST (and allow them to be stored 
 in the Keystone SQL database). More detailed information can be found in the 
 spec for this change at: https://review.openstack.org/#/c/123238/
 
 The actual code change for this is split into 11 patches (to make it easier 
 to review), the majority of which have already merged - and the basic 
 functionality described is already working. There are some final patches 
 that are in-flight, a few of which are unlikely to meet the m3 deadline. 
 These relate to:
 
 1) Migration assistance for those that want to move from the current 
 file-based domain-specific configuration files to the SQL-based support 
 (i.e. a one-off upload of their config files). This is handled in the 
 keystone-manage tool - see: https://review.openstack.org/160364
 2) The notification between multiple keystone server processes that a 
 domain has a new configuration (so that a restart of keystone is not 
 required) - see: https://review.openstack.org/163322
 3) Support for substitution of sensitive config options into whitelisted 
 options (this might actually make the m3 deadline anyway) - see: 
 https://review.openstack.org/159928
 
 Given that we have the core support for this feature already merged, I am 
 requesting an FFE to enable these final patches to be merged ahead of RC.
 
 Henry
 
 This would be nice to use in puppet-keystone for domain configuration. Is 
 there support planned for the openstack client?
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



  

__OpenStack Development Mailing List (not for usage questions)Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Requesting FFE for last few remaining patches for domain configuration SQL support

2015-03-17 Thread Henry Nash
Hi

Prior to Kilo, Keystone supported the ability for its Identity backends to be 
specified on a domain-by-domain basis - primarily so that different domains 
could be backed by different LDAP servers. In this previous support, you 
defined the domain-specific configuration options in a separate config file 
(one for each domain that was not using the default options). While functional, 
this can make onboarding new domains somewhat problematic since you need to 
create the domains via REST and then create a config file and push it out to 
the keystone server (and restart the server). As part of the Keystone Kilo 
release we are supporting the ability to manage these domain-specific 
configuration options via REST (and allow them to be stored in the Keystone SQL 
database). More detailed information can be found in the spec for this change 
at: https://review.openstack.org/#/c/123238/ 

The actual code change for this is split into 11 patches (to make it easier to 
review), the majority of which have already merged - and the basic 
functionality described is already working.  There are some final patches 
that are in-flight, a few of which are unlikely to meet the m3 deadline.  These 
relate to:

1) Migration assistance for those that want to move from the current file-based 
domain-specific configuration files to the SQL based support (i.e. a one-off 
upload of their config files).  This is handled in the keystone-manage tool - 
See: https://review.openstack.org/160364
2) The notification between multiple keystone server processes that a domain 
has a new configuration (so that a restart of keystone is not required) - See: 
https://review.openstack.org/163322
3) Support for substitution of sensitive config options into whitelisted options 
(this might actually make the m3 deadline anyway) - See: 
https://review.openstack.org/159928

Given that we have the core support for this feature already merged, I am 
requesting an FFE to enable these final patches to be merged ahead of RC.
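[Editor's note] The REST management of domain-specific options described in this thread takes a JSON body per domain; below is a hedged sketch of what such a body might look like. The exact option whitelist and schema live in the spec linked above; the hostname and DN values here are invented for illustration.

```python
import json

# Hypothetical per-domain configuration, shaped like the body of
# PUT /v3/domains/{domain_id}/config. Only whitelisted groups (e.g.
# [identity] and [ldap]) may be overridden on a per-domain basis.
domain_config = {
    "config": {
        "identity": {"driver": "ldap"},
        "ldap": {
            "url": "ldap://ldap.example.com",             # invented host
            "user_tree_dn": "ou=users,dc=example,dc=com",  # invented DN
        },
    }
}

# Serialize as it would be sent over the wire.
body = json.dumps(domain_config, indent=2, sort_keys=True)
print(body)
```

Uploading such a document (rather than pushing a file and restarting) is what makes domain onboarding a pure REST operation.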

Henry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] SPFE: Authenticated Encryption (AE) Tokens

2015-02-14 Thread Henry Nash
So I’m OK with an exception for this, but still have reservations about this 
AE technique (at least the one currently proposed). I assume this will be 
marked as experimental, since we have not formally agreed that this is a 
standard we want to lock into core?

Henry
 On 14 Feb 2015, at 05:09, Lin Hua Cheng os.lch...@gmail.com wrote:
 
 ++ One of the features I am looking forward to seeing in Kilo; this feature 
 will solve one of the pain points for operators in maintaining the token db 
 backend.
 
 -Lin
 
 On Fri, Feb 13, 2015 at 7:21 PM, Steve Martinelli steve...@ca.ibm.com 
 mailto:steve...@ca.ibm.com wrote:
 It would be great to see this land in Kilo, I'll definitely be willing to 
 review the code. 
 
 Steve 
 
 Morgan Fainberg morgan.fainb...@gmail.com 
 mailto:morgan.fainb...@gmail.com wrote on 02/13/2015 04:19:15 PM:
 
  From: Morgan Fainberg morgan.fainb...@gmail.com 
  mailto:morgan.fainb...@gmail.com 
  To: Lance Bragstad lbrags...@gmail.com mailto:lbrags...@gmail.com, 
  OpenStack Development 
  Mailing List (not for usage questions) openstack-dev@lists.openstack.org 
  mailto:openstack-dev@lists.openstack.org 
  Date: 02/13/2015 04:24 PM 
  Subject: Re: [openstack-dev] [keystone] SPFE: Authenticated 
  Encryption (AE) Tokens 
  
   On February 13, 2015 at 11:51:10 AM, Lance Bragstad (lbrags...@gmail.com) wrote:
 
  Hello all, 
  
  I'm proposing the Authenticated Encryption (AE) Token specification 
  [1] as an SPFE. AE tokens increases scalability of Keystone by 
  removing token persistence. This provider has been discussed prior 
  to, and at the Paris summit [2]. There is an implementation that is 
  currently up for review [3], that was built off a POC. Based on the 
  POC, there has been some performance analysis done with respect to 
  the token formats available in Keystone (UUID, PKI, PKIZ, AE) [4]. 
  
  The Keystone team spent some time discussing limitations of the 
  current POC implementation at the mid-cycle. One case that still 
  needs to be addressed (and is currently being worked), is federated 
  tokens. When requesting unscoped federated tokens, the token 
  contains unbound groups which would need to be carried in the token.
  This case can be handled by AE tokens but it would be possible for 
  an unscoped federated AE token to exceed an acceptable AE token 
   length (i.e. > 255 characters). Long story short, a federation 
  migration could be used to ensure federated AE tokens never exceed a
  certain length. 
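[Editor's note] The scalability claim above - removing token persistence - rests on the token carrying and authenticating its own payload, so validation needs no database lookup. A stdlib-only sketch of that principle follows; note the real proposal uses authenticated *encryption*, while this illustration only authenticates (HMAC) and does not encrypt, and every name and format here is invented for the example.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"key shared by all keystone servers"  # illustrative only

def issue_token(user_id, project_id):
    # Pack the claims; the real proposal would msgpack and encrypt them.
    payload = json.dumps({"u": user_id, "p": project_id,
                          "exp": int(time.time()) + 3600},
                         sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return "%s.%s" % (base64.urlsafe_b64encode(payload).decode(),
                      base64.urlsafe_b64encode(sig).decode())

def validate_token(token):
    # No persistence: the token itself proves its own integrity.
    p64, s64 = token.split(".")
    payload = base64.urlsafe_b64decode(p64)
    sig = base64.urlsafe_b64decode(s64)
    if not hmac.compare_digest(
            sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        raise ValueError("token has been tampered with")
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("user123", "proj456")
print(len(token), validate_token(token)["u"])
```

The length printed above also illustrates the thread's concern: the more claims (e.g. unbound federated groups) the payload carries, the longer the token gets.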
  
  Feel free to leave your comments on the AE Token spec. 
  
  Thanks! 
  
  Lance 
  
   [1] https://review.openstack.org/#/c/130050/ 
   [2] https://etherpad.openstack.org/p/kilo-keystone-authorization 
   [3] https://review.openstack.org/#/c/145317/ 
   [4] http://dolphm.com/benchmarking-openstack-keystone-token-formats/ 
  __ 
  OpenStack Development Mailing List (not for usage questions) 
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
  
  I am for granting this exception as long as it’s clear that the 
  following is clear/true: 
  * All current use-cases for tokens (including federation) will be 
  supported by the new token provider. 
   * The federation tokens possibly being over 255 characters can be 
   addressed in the future if they are not addressed here (a 
   “federation migration” does not clearly state what is meant). 
  I am also ok with the AE token work being re-ordered ahead of the 
  provider cleanup to ensure it lands. Fixing the AE Token provider 
  along with PKI and UUID providers should be minimal extra work in the 
  cleanup. 
   This addresses a very, very big issue within Keystone as scaling 
   up happens. There has been demand for solving token 
  persistence for ~3 cycles. The POC code makes this exception 
  possible to land within Kilo, whereas without the POC this would 
  almost assuredly need to be held until the L-Cycle. 
  
  TL;DR, I am for the exception if the AE Tokens support 100% of the 
  current use-cases of tokens (UUID or PKI) today. 
  
  —Morgan
  __
  OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
  

Re: [openstack-dev] [Keystone] Proposing Marek Denis for the Keystone Core Team

2015-02-11 Thread Henry Nash
+1

 On 10 Feb 2015, at 18:04, Dolph Mathews dolph.math...@gmail.com wrote:
 
 +1
 
 On Tue, Feb 10, 2015 at 11:51 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 Hi everyone!
 
 I wanted to propose Marek Denis (marekd on IRC) as a new member of the 
 Keystone Core team. Marek has been instrumental in the implementation of 
 Federated Identity. His work on Keystone and first hand knowledge of the 
 issues with extremely large OpenStack deployments has been a significant 
 asset to the development team. Not only is Marek a strong developer working 
 on key features being introduced to Keystone but has continued to set a high 
 bar for any code being introduced / proposed against Keystone. I know that 
 the entire team really values Marek’s opinion on what is going in to Keystone.
 
 Please respond with a +1 or -1 for adding Marek to the Keystone core team. 
 This poll will remain open until Feb 13.
 
 -- 
 Morgan Fainberg
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Splitting up the assignment component

2014-11-25 Thread Henry Nash
Hi

As most of you know, we have approved a spec 
(https://review.openstack.org/#/c/129397/) to split the assignments component 
up into two pieces, and the code (divided up into a series of patches) is 
currently in review (https://review.openstack.org/#/c/130954/). While most 
aspects of the split appear to have agreement, there is one aspect that has 
been questioned - and that is the whether roles' should be in the resource 
component, as proposed?

First, let's recap the goals here:

1) The current assignment component is really what's left after we split off 
users/groups into identity some releases ago.  Assignments is pretty 
complicated and messy - and we need a better structure (as an example, just 
doing the split allowed me to find 5 bugs in our current implementation - and I 
wouldn't be surprised if there are more).  This is made more urgent by the fact 
that we are about to land some big new changes in this area, e.g. hierarchical 
projects and a re-implementation (for performance) of list_role_assignments.

2) While Keystone may have started off as a service where we store all the 
users, credentials and permissions needed to access other OpenStack services, we 
more and more see Keystone as a wrapper for existing corporate authentication 
and authorisation mechanisms - and its job is really to provide a common 
mechanism and language for these to be consumed across OpenStack services.  To 
do this well, we must make sure that the keystone components are split along 
sensible lines...so that they can individually wrap these corporate 
directories/services.  The classic case of this was our previous split-off of 
Identity...and this new proposal takes this a step further.

3) As more and more broad OpenStack powered clouds are created, we must make 
sure that our Keystone implementation is as flexible as possible. We already 
plan to support new abstractions for things like cloud providers enabling 
resellers to do business within one OpenStack cloud (by providing hierarchical 
multi-tenancy, domain-roles etc.). Our current assignments model is a) slightly 
unusual in that all roles are global and every assignment has 
actor-target-role, and b) cannot easily be substituted for alternate assignment 
models (even for the whole of an OpenStack installation, let alone on a domain 
by domain basis)

The proposal for splitting the assignment component is trying to provide a 
better basis for the above.  It separates the storing and CRUD operations of 
domain/projects/roles into a resource component, while leaving the pure 
assignment model in assignment.  The rationale for this is that the resource 
component defines the entities that the rest of the OpenStack services (and 
their policy engines) understand...while assignment is a pure mapper between 
these entities. The details of these mappings are never exposed outside of 
Keystone, except for the generation of contents of a token.  This would allow 
new assignment models to be introduced that, as long as they support the api to 
list what role_ids are mapped to project_id X for user_id Y, then the rest of 
OpenStack would never know anything had changed.
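[Editor's note] The contract described above - "list what role_ids are mapped to project_id X for user_id Y" - can be sketched as a minimal driver interface. The names below are illustrative, not Keystone's actual driver API.

```python
import abc

class AssignmentDriver(abc.ABC):
    """Pluggable assignment model: the rest of OpenStack only ever sees
    the role ids this interface returns, never the internal mapping."""

    @abc.abstractmethod
    def list_role_ids(self, user_id, project_id):
        raise NotImplementedError

class SimpleRbacDriver(AssignmentDriver):
    """Toy in-memory actor-target-role store (the current global model)."""

    def __init__(self):
        self._grants = set()  # (user_id, project_id, role_id) triples

    def grant(self, user_id, project_id, role_id):
        self._grants.add((user_id, project_id, role_id))

    def list_role_ids(self, user_id, project_id):
        return sorted(r for (u, p, r) in self._grants
                      if u == user_id and p == project_id)

driver = SimpleRbacDriver()
driver.grant("user_Y", "project_X", "role_member")
driver.grant("user_Y", "project_X", "role_admin")
print(driver.list_role_ids("user_Y", "project_X"))
```

Any alternate model (ABAC, per-domain schemes) could sit behind the same interface, which is exactly why the rest of OpenStack "would never know anything had changed".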

So to (finally) get the the point of this post...where should the role 
definitions live? The proposal is that these live in resource, because:

a) They represent the definition of how Keystone and the other services define 
permission - and this should be independent of whatever assignment model we 
choose
b) We may well choose (in the future) to morph what we currently mean by a 
role...into what roles really are, which is capabilities.  Once we have 
domain-specifc roles (groups), which map to global roles, then we may well 
end up, more often than not, with a role representing a single API capability.  
Roles might even be created simply by a service registering its 
capabilities with Keystone.  Again, this should be independent of any 
assignment model.
c) By placing roles in the resource component, we allow much greater 
flexibility...for example we could have different domains having different 
assignment models (e.g. one RBAC, one ABAC, one HenrysWeirdAccessControl 
(HWAC)). Obviously this would be aimed at a different type of cloud than one 
that has multiple layers of domain inheritance - but that's part of the goal of 
OpenStack, to allow different models to be supported.

I am absolutely open to arguments that roles should remain with 
assignments...but right now the above feels the better solution. I've put 
this topic on the agenda for today's Keystone IRC meeting.

Henry


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy

2014-11-19 Thread Henry Nash
Hi Adam,

So a comprehensive write-up...although I'm not sure we have made the case for 
why we need a complete rewrite of how policy is managed.  We seemed to have 
lept into a solution without looking at other possible solutions to the 
problems we are trying to solve.  Here's a start at an alternative approach:

Problem 1: The current services don't use the centralised policy store/fetch of 
keystone, meaning that a) policy file management is hard, and b) we can't 
support the policy-per-endpoint style of working
Solution: Let's get the other services using it!  No code changes required in 
Keystone.  The fact that we haven't succeeded before just means we haven't 
tried hard enough.

Problem 2: Different domains want to be able to create their own roles which 
are more meaningful to their users...but our roles are global and are 
directly linked to the rules in the policy file - something only a cloud 
operator is going to want to own.
Solution: Have some kind of domain-scoped role-group (maybe just called 
domain-roles?) that a domain owner can define, that maps to a set of 
underlying roles that a policy file understands (see: 
https://review.openstack.org/#/c/133855/). [As has been pointed out, what we 
are really doing with this is finally doing real RBAC, where what we call roles 
today are really capabilities and domain-roles are really just roles].  As this 
evolves, cloud providers could slowly migrate to the position where each 
service API is effectively a role (i.e. a capability) and at the domain level 
there exists the abstraction that makes sense for the users of that domain 
into the underlying capabilities. No code changes...this just uses policy files 
as they are today (plus domain-groups) - and tokens as they are too. And I 
think that level of functionality would satisfy a lot of people. Eventually (as 
pointed out by samuelmz) the policy file could even simply become the 
definition of the service capabilities (and whether each capability is open, 
closed or is a role)...maybe just registered and stored in the service 
entity in the keystone DB (allowing dynamic service registration). My point being, 
that we really didn't require much code change (nor really any conceptual 
changes) to get to this end point...and certainly no rewriting of policy/token 
formats etc.  [In reality, this last point would cause problems with token size 
(since a broad admin capability would need a lot of capabilities), so some kind 
of collection of capabilities would be required.]
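[Editor's note] The domain-role idea in Problem 2 is essentially a per-domain alias that expands into a set of underlying global roles (capabilities) at token-issue time. A toy sketch follows; the role and capability names are invented for illustration.

```python
# Global roles understood by the policy files (really capabilities).
GLOBAL_ROLES = {"compute:start", "compute:stop", "volume:attach"}

# Domain-scoped role groups, owned by each domain: a name meaningful to
# the domain's users, mapped onto underlying global roles.
DOMAIN_ROLES = {
    "domainA": {"vm_operator": {"compute:start", "compute:stop"}},
    "domainB": {"storage_admin": {"volume:attach"}},
}

def expand(domain_id, assigned_names):
    """Expand domain-role names into the global roles a token would carry."""
    mapping = DOMAIN_ROLES.get(domain_id, {})
    out = set()
    for name in assigned_names:
        out |= mapping.get(name, set())
    # Only roles the policy engine actually understands survive expansion.
    return sorted(out & GLOBAL_ROLES)

print(expand("domainA", ["vm_operator"]))
```

Because only the expanded global roles reach the token and the policy files, the policy engine never needs to know about domain-specific names - which is what keeps this change free of new policy or token formats.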

Problem 3: A cloud operator wants to be able to enable resellers to white label 
her services (who in turn may resell to others) - so needs some kind of 
inheritance model so that service level agreements can be supported by policy 
(e.g. let the reseller give the support expert from the cloud provider have 
access to their projects).
Solution: We already have hierarchical inheritance in the works...so that we 
would allow a reseller to assign roles to a user/group from the parent onto 
their own domain/project. Further, domain-roles are just another thing that can 
(optionally) be inherited and used in this fashion.

My point about all the above is that I think while what you have laid out is a 
great set of stepsI don't think we have conceptual agreement as to whether 
that path is the only way we could go to solve our problems.

Henry
On 18 Nov 2014, at 23:40, Adam Young ayo...@redhat.com wrote:

 There is a lot of discussion about policy.  I've attempted to pull the 
 majority of the work into a single document that explains the process in a 
 step-by-step manner:
 
 
 http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/
 
 Its really long, so I won't bother reposting the whole article here.  
 Instead, I will post the links to the topic on Gerrit.
 
 https://review.openstack.org/#/q/topic:dynamic-policy,n,z
 
 
 There is one additional review worth noting:
 
 https://review.openstack.org/#/c/133855/
 
 Which is for private groups of roles  specific to a domain.  This is 
 related, but not part of the critical path for the things I wrote above.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Mid-Cycle Meetup Planning

2014-11-12 Thread Henry Nash
I'll be there!

Henry
On 12 Nov 2014, at 02:29, Adam Young ayo...@redhat.com wrote:

 On 11/11/2014 08:18 PM, Morgan Fainberg wrote:
 I am trying to pin down a location for our mid-cycle meetup, I need to get 
 an idea of who will be joining us at the Keystone meetup. I’ve included a 
 couple questions relating to Barbican in the case we can double-up and have 
 a day of overlap like the Juno meetup. I apologize for the delay, this 
 should have been sent out by the end of the summit or earlier. As details 
 are available I’ll provide updates.
 
 Due to timing (so that people can get visas, get travel approval, etc as 
 soon as possible), I will be making a final call on dates by the end of this 
 week and location as soon as the space is confirmed, so your prompt 
 responses are super important!
 
 http://goo.gl/forms/4W7xVM9x49
 
 Cheers,
 Morgan
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 I plan on attending.  If it is in Sunnyvale, Nate Kinder from Red Hat will be 
 able to attend as well.  It's a bit of a schlep for Jamie but I know he wants 
 to attend.  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-23 Thread Henry Nash
Agree with all the comments made - Dolph, you really did a great job as PTL - 
keeping the balanced view is a crucial part of the role.  Keystone is the 
better for it.

Henry
On 23 Sep 2014, at 17:08, Yee, Guang guang@hp.com wrote:

 ++ Amen, brother!
  
 From: Lance Bragstad [mailto:lbrags...@gmail.com] 
 Sent: Tuesday, September 23, 2014 7:52 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone] Stepping down as PTL
  
  
  
 On Tue, Sep 23, 2014 at 3:51 AM, Thierry Carrez thie...@openstack.org wrote:
 Adam Young wrote:
  OpenStack owes you more than most people realize.
 
 +1
 
 Dolph did a great job of keeping the fundamental piece that is Keystone
 safe from a release management perspective, by consistently hitting all
 the deadlines, giving time for other projects to safely build on it.
  
 +1  
  
 Thank you for all your dedication and hard work! It's easy to contribute 
 and become a
 better developer when strong project leadership is in place to learn from.
  
 
  Don't you dare pull a Joe Heck and disappear on us now.
 
 :)
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Domain-specific Drivers

2014-08-26 Thread Henry Nash
Hi

It was fully merged for Juno-2 - so if you are having problems, feel free to 
share the settings in your main config and keystone.heat.config files

Henry
On 26 Aug 2014, at 10:26, Bruno Luis Dos Santos Bompastor 
bruno.bompas...@cern.ch wrote:

 Hi folks!
 
 I would like to know what is the status on the “Domain-specific Drivers” 
 feature for Juno.
 I see that there’s documentation on this already but I was not able to use it 
 with the master branch.
 
 I was trying to configure LDAP on the default domain and SQL for heat domain 
 but with no luck.
 
 Is the feature ready?
 
 Best Regards,
 
 Bruno Bompastor.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Keystone multi-domain ldap + sql in Icehouse

2014-07-17 Thread Henry Nash
Hi

So the bad news is that you are correct, multi-domain LDAP is not ready in 
Icehouse (it is marked as experimental, and it has serious flaws).  The good 
news is that this is fixed for Juno - and this support has already been merged 
- and will be in the Juno milestone 2 release.  Here's the spec that describes 
the work done:

https://github.com/openstack/keystone-specs/blob/master/specs/juno/multi-backend-uuids.rst

This support uses the domain-specific config files approach that is already in 
Icehouse - so the way you define the LDAP parameters for each domain does not 
change.
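[Editor's note] The file-based mechanism referred to here keys each extra config file off the domain name, placed in the configured domain config directory. A hedged example of what such a per-domain file might look like - the domain name, hostname and DNs below are invented:

```ini
# /etc/keystone/domains/keystone.mydomain.conf
# Options here override the main keystone.conf for domain "mydomain" only.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=users,dc=example,dc=com
```

The Kilo work discussed earlier in this archive moves these same option groups into SQL and behind a REST API, without changing which options can be set per domain.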

Henry
On 17 Jul 2014, at 10:52, foss geek thefossg...@gmail.com wrote:

 Dear All,
 
 We are using LDAP as identity back end and SQL as assignment back end.
 
 Now I am trying to evaluate Keystone multi-domain support with LDAP 
 (identity) + SQL (assignment)
 
 Does any one managed to setup LDAP/SQL multi-domain environment in 
 Havana/Icehouse?
 
 Does keystone have suggested LDAP DIT for domains?
 
 I have gone through threads [1] and [2] below, and it seems Keystone 
 multi-domain with LDAP+SQL is not ready in Icehouse. 
 
 Hope some one will help.
 
 Thanks for your time. 
 
 [1]http://www.gossamer-threads.com/lists/openstack/dev/37705
 
 [2]http://lists.openstack.org/pipermail/openstack/2014-January/004900.html
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] REST API access to configuration options

2014-07-15 Thread Henry Nash
HI

As the number of configuration options increases and OpenStack installations 
become more complex, the chances of incorrect configuration increase. There is 
no better way of enabling cloud providers to check the configuration 
state of an OpenStack service than providing a direct REST API that allows the 
current running values to be inspected. Having an API to provide this 
information becomes increasingly important for dev/ops style operation.

As part of Keystone we are considering adding such an ability (see: 
https://review.openstack.org/#/c/106558/).  However, since this is the sort of 
thing that might be relevant to and/or affect other projects, I wanted to get 
views from the wider dev audience.  

Any such change obviously has to take security in mind - and as the spec says, 
just like when we log config options, any options marked as secret will be 
obfuscated.  In addition, the API will be protected by the normal policy 
mechanism and is likely in most installations to be left as admin required.  
And of course, since it is an extension, if a particular installation does not 
want to use it, they don't need to load it.

Do people think this is a good idea?  Useful in other projects?  Concerned 
about the risks?
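[Editor's note] A minimal sketch of the obfuscation behaviour the spec describes - returning current option values while masking anything registered as secret. The option names and the mask string are illustrative, not actual oslo.config output.

```python
# Each option records whether it was registered as secret; the API layer
# masks secret values the same way config-option logging already does.
OPTIONS = {
    "admin_token": {"value": "s3cr3t", "secret": True},
    "public_port": {"value": 5000, "secret": False},
    "debug": {"value": False, "secret": False},
}

def exportable_config(options):
    """Build the dict a GET .../config extension could safely return."""
    return {name: ("***" if opt["secret"] else opt["value"])
            for name, opt in options.items()}

print(exportable_config(OPTIONS))
```

With the masking done at export time, the same running values that appear in the logs can be served over REST without widening the exposure of secrets (policy still gates who may call the API at all).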

Henry


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Henry Nash
Mark,

Thanks for your comments (as well as remarks on the WIP code-review).

So clearly gathering and analysing log files is an alternative approach, 
perhaps not as immediate as an API call.  In general, I believe that the more 
capability we provide via easy-to-consume APIs (with appropriate permissions) 
the more effective (and innovative) ways of management of OpenStack we will 
achieve (easier to build automated management systems).  In terms of multi API 
servers, obviously each server would respond to the API with the values it has 
set, so operators could check any or all of the serversand this actually 
becomes more important as people distribute config files around to the various 
servers (since more chance of something getting out of sync).

Henry
On 15 Jul 2014, at 10:08, Mark McLoughlin mar...@redhat.com wrote:

 On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
 HI
 
 As the number of configuration options increases and OpenStack
 installations become more complex, the chances of incorrect
 configuration increases. There is no better way of enabling cloud
 providers to be able to check the configuration state of an OpenStack
 service than providing a direct REST API that allows the current
 running values to be inspected. Having an API to provide this
 information becomes increasingly important for dev/ops style
 operation.
 
 As part of Keystone we are considering adding such an ability (see:
 https://review.openstack.org/#/c/106558/).  However, since this is the
 sort of thing that might be relevant to and/or affect other projects,
 I wanted to get views from the wider dev audience.  
 
 Any such change obviously has to take security in mind - and as the
 spec says, just like when we log config options, any options marked as
 secret will be obfuscated.  In addition, the API will be protected by
 the normal policy mechanism and is likely in most installations to be
 left as admin required.  And of course, since it is an extension, if
 a particular installation does not want to use it, they don't need to
 load it.
 
 Do people think this is a good idea?  Useful in other projects?
 Concerned about the risks?
 
 I would have thought operators would be comfortable gleaning this
 information from the log files?
 
 Also, this is going to tell you how the API service you connected to was
 configured. Where there are multiple API servers, what about the others?
 How do operators verify all of the API servers behind a load balancer
 with this?
 
 And in the case of something like Nova, what about the many other nodes
 behind the API server?
 
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] REST API access to configuration options

2014-07-15 Thread Henry Nash
Joe,

I'd imagine an API like this would be pretty useful for some of these config 
tools - so I'd imagine they might well be consumers of this API.

Henry
On 15 Jul 2014, at 13:10, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Tue, Jul 15, 2014 at 5:00 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:
 Mark,
 
 Thanks for your comments (as well as remarks on the WIP code-review).
 
 So clearly gathering and analysing log files is an alternative approach, 
 perhaps not as immediate as an API call.  In general, I believe that the more 
 capability we provide via easy-to-consume APIs (with appropriate permissions), 
 the more effective (and innovative) ways of managing OpenStack we will 
 achieve (easier to build automated management systems).  In terms of multiple 
 API servers, obviously each server would respond to the API with the values 
 it has set, so operators could check any or all of the servers - and this 
 actually becomes more important as people distribute config files around to 
 the various servers (since there is more chance of something getting out of sync).
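The per-server check Henry describes reduces to comparing the responses from each server behind the load balancer. A small sketch of that drift check (the server names and flat config dicts are illustrative):

```python
# Sketch: given per-server config responses, report options that are
# out of sync across the fleet. Input shape is a hypothetical
# {server_name: {option: value}} mapping.
def config_drift(configs):
    """Return {option: {server: value}} for options whose values differ."""
    all_keys = set().union(*configs.values())
    drift = {}
    for key in sorted(all_keys):
        values = {name: conf.get(key) for name, conf in configs.items()}
        if len(set(values.values())) > 1:
            drift[key] = values
    return drift

responses = {
    "api1": {"debug": True, "workers": 4},
    "api2": {"debug": False, "workers": 4},
}
assert config_drift(responses) == {"debug": {"api1": True, "api2": False}}
```

An operator script would populate `responses` by calling each server's config endpoint directly (bypassing the load balancer), then act on any non-empty drift report.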
 
 Where do you see configuration management tools like chef, puppet, and the 
 os-*-config tools (http://git.openstack.org/cgit) fit in to this?
  
 
 Henry
 On 15 Jul 2014, at 10:08, Mark McLoughlin mar...@redhat.com wrote:
 
 On Tue, 2014-07-15 at 08:54 +0100, Henry Nash wrote:
 Hi
 
 As the number of configuration options increases and OpenStack
 installations become more complex, the chances of incorrect
 configuration increase. There is no better way to let cloud
 providers check the configuration state of an OpenStack service
 than a direct REST API that exposes the current running values
 for inspection. Such an API becomes increasingly important for
 dev/ops style operation.
 
 As part of Keystone we are considering adding such an ability (see:
 https://review.openstack.org/#/c/106558/).  However, since this is the
 sort of thing that might be relevant to and/or affect other projects,
 I wanted to get views from the wider dev audience.  
 
 Any such change obviously has to keep security in mind - and as the
 spec says, just as when we log config options, any options marked as
 secret will be obfuscated.  In addition, the API will be protected by
 the normal policy mechanism and is likely in most installations to be
 left as admin-required.  And of course, since it is an extension, if
 a particular installation does not want to use it, they don't need to
 load it.
 
 Do people think this is a good idea?  Useful in other projects?
 Concerned about the risks?
 
 I would have thought operators would be comfortable gleaning this
 information from the log files?
 
 Also, this is going to tell you how the API service you connected to was
 configured. Where there are multiple API servers, what about the others?
 How do operators verify all of the API servers behind a load balancer
 with this?
 
 And in the case of something like Nova, what about the many other nodes
 behind the API server?
 
 Mark.
 
 





[openstack-dev] [keystone] Size of Log files

2014-07-07 Thread Henry Nash
Hi

Our debug log file size is getting pretty huge: a typical py26 jenkins run 
produces a whisker under 50 MB of log - which is problematic for at least the 
reason that our current jenkins setup considers the test run a failure if the 
log file is over 50 MB.  (see 
http://logs.openstack.org/14/74214/40/check/gate-keystone-python26/1714702/subunit_log.txt.gz
 as an example for a recent patch I am working on).  Obviously we could just 
raise the limit, but we should probably also look at how effective our logging 
is.  Reviewing the log file listed above shows:

1) Some odd corruption.  I think this is related to the subunit concatenation 
of output files, but I haven't been able to find the exact cause (looking at a 
local subunit file shows some weird characters, but not as bad as when run as 
part of jenkins).  It may be that this corruption is dumping more data than we 
need into the log file.

2) There are some spectacularly uninteresting log entries, e.g. 25 lines of :

Initialized with method overriding = True, and path info altering = True

as part of each unit test call that uses routes! (This is generated as part of 
the routes.middleware init)

3) Some seemingly over-zealous logging, e.g. the following happens multiple 
times per call:

Parsed 2014-07-06T14:47:46.850145Z into {'tz_sign': None, 'second_fraction': 
'850145', 'hour': '14', 'daydash': '06', 'tz_hour': None, 'month': None, 
'timezone': 'Z', 'second': '46', 'tz_minute': None, 'year': '2014', 
'separator': 'T', 'monthdash': '07', 'day': None, 'minute': '47'} with default 
timezone <iso8601.iso8601.Utc object at 0x1a02fd0>

Got '2014' for 'year' with default None

Got '07' for 'monthdash' with default 1

Got 7 for 'month' with default 7

Got '06' for 'daydash' with default 1

Got 6 for 'day' with default 6

Got '14' for 'hour' with default None

Got '47' for 'minute' with default None

4) LDAP is VERY verbose, e.g. 30-50 lines of debug per call to the driver.  

I'm happy to work to trim back some of the worst excesses, but I'm open to 
ideas as to whether we need a more formal approach to this... perhaps a good 
topic for our hackathon this week?

Henry




Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Henry Nash
Hi Mark,

So we would not modify any existing IDs, so no migration required.

Henry
On 28 Feb 2014, at 17:38, Mark Washenberger mark.washenber...@markwash.net 
wrote:

 
 
 
 On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.com 
 wrote:
 
 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).
 
 -1
 
 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.
 
 Morgan and I talked this suggestion through last night and agreed it's 
 probably the best approach, and has the benefit of zero impact on other 
 services, which is something we're obviously trying to avoid. I imagine it 
  could be as simple as a user_id to domain_id lookup table. All we really care 
  about is: given a globally unique user ID, which identity backend is the user 
  from?
 
 On the downside, it would likely become bloated with unused ephemeral user 
 IDs, so we'll need enough metadata about the mapping to implement a purging 
 behavior down the line.
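The lookup table Dolph describes, with the purge metadata he calls for, could look something like this sketch (class and method names are mine, not a proposed interface):

```python
import time

# Sketch of a user_id -> domain_id lookup table with enough metadata
# (last_seen timestamps) to purge stale ephemeral entries later.
class IdMapping:
    def __init__(self):
        self._table = {}  # public user_id -> (domain_id, last_seen)

    def record(self, user_id, domain_id):
        self._table[user_id] = (domain_id, time.time())

    def domain_for(self, user_id):
        """Answer: which identity backend is this user from?"""
        domain_id, _ = self._table[user_id]
        self._table[user_id] = (domain_id, time.time())  # refresh last_seen
        return domain_id

    def purge_older_than(self, max_age_seconds):
        """Drop entries not seen within max_age_seconds; return count."""
        cutoff = time.time() - max_age_seconds
        stale = [uid for uid, (_, seen) in self._table.items() if seen < cutoff]
        for uid in stale:
            del self._table[uid]
        return len(stale)

m = IdMapping()
m.record("user1@corp_idp", "d_corp")
assert m.domain_for("user1@corp_idp") == "d_corp"
assert m.purge_older_than(3600) == 0  # nothing stale yet
```

The bloat concern is exactly the `purge_older_than` path: without the timestamp there is no principled way to cull ephemeral federated IDs.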
 
 Is this approach planning on reusing the existing user-id field, then? It 
 seems like this creates a migration problem for folks who are currently using 
 user-ids that are generated by their identity backends.
  
  
 
 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.
 
  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.
 
 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.
 
 Best,
 -jay
 
  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  —
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 





Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Henry Nash
So a couple of things about this:

1) Today (and also true for Grizzly and Havana), the user can choose what LDAP 
attribute should be returned as the user or group ID.  So it is NOT a safe 
assumption today (ignoring any support for domain-specific LDAP support) that 
the format of a user or group ID is a 32 char UUID.  Quite often, I would 
think, an email address would be chosen by a cloud provider as the LDAP id 
field; by default we use the CN.  Since we really don't want to ever change the 
user or group ID we have given out from keystone for a particular entity, this 
means we need to update nova (or anything else) that has made a 32 char 
assumption.
2) In order to support the ability for service providers to have the 
identity part of keystone be satisfied by a customer LDAP (i.e. for a given 
domain, have a specific LDAP), then, as has been stated, we need to be able, 
when subsequently handed an API call with just a user or group ID, to 
route this call to the correct LDAP.  Trying to keep true to the openstack 
design principles, we had planned to encode a domain identifier into the user 
or group ID - i.e. distribute the data to where it is needed; in other words, 
the user and group ID provide all the info we need to route the call to the 
right place. Two implementations come to mind:
2a) Simply concatenate the user/group ID with the domain_id, plus some 
separator and make a composite public facing ID.  e.g. 
user_entity_id@@UUID_of_domain.  This would have a technical maximum size of 
64+2+64 (i.e. 130), although in reality since we control domain_id and we know 
it is always 32 char UUID - in fact the max size would be 98.  This has the 
problem of increasing the size of the public facing field beyond the existing 
64.  This is what we had planned for IceHouse - and is currently in review.
2b) Use a similar concatenation idea as 2a), but limit the total size to the 
existing 64. Since we control domain_id, we could (internally and not visibly 
to the outside world), create a domain_index, that was used in place of 
domain_id in the publicly visible field, to minimize the number of chars it 
requires.  So the public facing composite ID might be something like up to 54 
chars of entity_id@@8 chars of domain_index.  There is a chance, of course, 
that the 54 char restriction might be problematic for LDAP users, but I 
doubt it.  We would make that a restriction and if it really became a problem, 
we could consider a field size increase in a later release.
3) The alternative to 2a and 2b is to have, as had been suggested, an internal 
mapping table that maps external facing entity_ids to a domain plus local 
entity ID.  The problem with this idea is that:
- This could become a very big table (you will essentially have an entry for 
every user in every corporate LDAP that has accessed a given openstack)
- Since most LDAPs are RO, we will never see deletes...so we won't know when 
(without some kind of garbage collection) to cull entries
- It obviously does not solve 1) - since existing LDAP support can break the 32 
char limit - and so it isn't true that this mapping table causes all public 
facing entity IDs to be simple 32 char UUIDs

From a delivery into IceHouse point of view, any of the above are possible, 
since the actual mapping used is a relatively small part of the patch.  I 
personally favor 2b), since it is simple, has fewer moving parts and does not 
change any external facing requirements for storage of user and group IDs 
(above and beyond what is true today).
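Option 2b can be sketched concretely. The `@@` separator, the 8-char domain_index, and the 64-char cap are the parameters Henry proposes; the helper names and hex index encoding are my assumptions:

```python
# Sketch of option 2b: a <=64-char public ID composed of a local entity
# id plus an 8-char domain index. Encoding choices here are illustrative.
SEP = "@@"
MAX_PUBLIC = 64
DOMAIN_IDX_CHARS = 8

def compose_public_id(local_id, domain_index):
    idx = format(domain_index, "0{}x".format(DOMAIN_IDX_CHARS))
    if len(local_id) + len(SEP) + DOMAIN_IDX_CHARS > MAX_PUBLIC:
        # the "54 chars of entity_id" restriction mentioned above
        raise ValueError("local id too long for 64-char composite")
    return local_id + SEP + idx

def split_public_id(public_id):
    """Route a bare API-supplied ID back to its domain's backend."""
    local_id, idx = public_id.rsplit(SEP, 1)
    return local_id, int(idx, 16)

pid = compose_public_id("jsmith", 7)
assert len(pid) <= 64
assert split_public_id(pid) == ("jsmith", 7)
```

The routing step is then a table lookup from domain_index to the domain's LDAP config - no per-user mapping table, which is the point of 2b.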

Henry
On 27 Feb 2014, at 03:46, Adam Young ayo...@redhat.com wrote:

 On 02/26/2014 08:25 AM, Dolph Mathews wrote:
 
 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).
 
 -1
 
 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.
 
 Morgan and I talked this suggestion through last night and agreed it's 
 probably the best approach, and has the benefit of zero impact on other 
 services, which is something we're obviously trying to avoid. I imagine it 
 could be as simple as a user_id to domain_id lookup table. All we really 
 care about is given a globally unique user ID, which identity backend is 
 the user from?
 
 On the 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Henry Nash

On 27 Feb 2014, at 17:52, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
 So a couple of things about this:
 
 
 1) Today (and also true for Grizzly and Havana), the user can choose
 what LDAP attribute should be returned as the user or group ID.  So it
 is NOT a safe assumption today (ignoring any support for
 domain-specific LDAP support) that the format of a user or group ID is
 a 32 char UUID.  Quite often, I would think, that email address would
 be chosen by a cloud provider as the LDAP id field, by default we use
 the CN.  Since we really don't want to ever change the user or group
 ID we have given out from keystone for a particular entity, this means
 we need to update nova (or anything else) that has made a 32 char
 assumption.
 
 I don't believe this is correct. Keystone is the service that deals with
 authentication. As such, Keystone should be the one and only one service
 that should have any need whatsoever to need to understand a non-UUID
 value for a user ID. The only value that should ever be communicated
 *from* Keystone should be the UUID value of the user.
 
 If the Keystone service uses LDAP or federation for alternative
 authentication schemes, then Keystone should have a mapping table that
 translates those elongated and non-UUID identifiers values (email
 addresses, LDAP CNs, etc) into the UUID value that is then communicated
 to all other OpenStack services.
 
 Best,
 -jay
 
So I think that's a perfectly reasonable point of view... our challenge is that 
this isn't what Keystone has done to date, e.g. anyone using a RO LDAP today is 
probably exposing non-UUID identifiers out into nova and other projects (and 
maybe outside of openstack altogether).  We can't (without breaking them) just 
change the IDs for any existing LDAP entities.  So the best we could do is to 
say something like: new entities (and perhaps only those in domain-specific 
backends) would use such a mapping capability.

Henry
 2) In order to support the ability for service providers to be able to
 have the identity part of keystone be satisfied by a customer LDAP
 (i.e. for a given domain, have a specific LDAP), then, as has been
 stated, we need to be able, when subsequently handed an API call with
 just a user or group ID, to route this call to the correct LDAP.
 Trying to keep true to the openstack design principles, we had
 planned to encode a domain identifier into the user or group ID - i.e.
 distribute the data to where it is needed; in other words, the user
 and group ID provide all the info we need to route the call to the
 right place. Two implementations come to mind:
 2a) Simply concatenate the user/group ID with the domain_id, plus some
 separator and make a composite public facing ID.  e.g.
 user_entity_id@@UUID_of_domain.  This would have a technical maximum
 size of 64+2+64 (i.e. 130), although in reality since we control
 domain_id and we know it is always 32 char UUID - in fact the max size
 would be 98.  This has the problem of increasing the size of the
 public facing field beyond the existing 64.  This is what we had
 planned for IceHouse - and is currently in review.
 2b) Use a similar concatenation idea as 2a), but limit the total size
 to the existing 64. Since we control domain_id, we could (internally
 and not visibly to the outside world), create a domain_index, that was
 used in place of domain_id in the publicly visible field, to minimize
 the number of chars it requires.  So the public facing composite ID
 might be something like up to 54 chars of entity_id@@8 chars of
 domain_index.  There is a chance, of course, that  the 54 char
 restriction might be problematic for LDAP usersbut I doubt it.  We
 would make that a restriction and if it really became a problem, we
 could consider a field size increase at a later release
 3) The alternative to 2a and 2b is to have, as had been suggested, an
 internal mapping table that maps external facing entity_ids to a
 domain plus local entity ID.  The problem with this idea is that:
 - This could become a very big table (you will essentially have an
 entry for every user in every corporate LDAP that has accessed a given
 openstack)
 - Since most LDAPs are RO, we will never see deletes...so we won't
 know when (without some kind of garbage collection) to cull entries
 - It obviously does not solve 1) - since existing LDAP support can
 break the 32 char limit - and so it isn't true that this mapping table
 causes all public facing entity IDs to be simple 32 char UUIDs
 
 
 From a delivery into IceHouse point of view any of the above are
 possible, since the actual mapping used is relatively small part of
 the patch.  I personally favor 2b), since it is simple, has fewer
 moving parts and does not change any external facing requirements for
 storage of user and group IDs (above and beyond what is true today).
 
 
 Henry
 On 27 Feb 2014, at 03:46, Adam Young ayo...@redhat.com wrote:
 
 On 02/26/2014

Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-30 Thread Henry Nash
Vish,

Excellent idea to discuss this more widely.  To your point about domains not 
being well understood and that most policy files are just admin or not, the 
exception here is, of course, keystone itself - where we can use domains to 
enable various levels of cloud, domain and project level admin 
capability via the policy file.  Although the default policy file we supply is 
a bit like the admin or not versions, we also supply a much richer sample for 
those who want to do admin delegation via domains:

https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json

The other point is that one thing we did introduce in Havana was the concept of 
domain inheritance (where a role assigned to a domain can be specified to be 
inherited by all projects within that domain).  This was an attempt to provide 
a rudimentary multi-ownership capability (within our current token formats 
and policy capabilities).

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-inherit-ext.md
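The inherited-role behaviour described above can be sketched as a toy role-resolution function (the assignment record shape is my illustration, not the OS-INHERIT wire format):

```python
# Sketch of OS-INHERIT resolution: a role granted on a domain with the
# inherited flag applies to every project in that domain. Record shapes
# here are hypothetical, for illustration only.
def effective_roles(user, project, assignments):
    """assignments: dicts with user, scope ('project'|'domain'),
    target (project or domain id), role, and optional inherited flag."""
    roles = set()
    for a in assignments:
        if a["user"] != user:
            continue
        if a["scope"] == "project" and a["target"] == project["id"]:
            roles.add(a["role"])
        elif (a["scope"] == "domain"
              and a["target"] == project["domain_id"]
              and a.get("inherited")):
            roles.add(a["role"])
    return roles

assignments = [
    {"user": "u1", "scope": "domain", "target": "d1",
     "role": "admin", "inherited": True},
    {"user": "u1", "scope": "project", "target": "p2", "role": "member"},
]
# u1 gets admin on any project in d1 via inheritance, not member on p1.
assert effective_roles("u1", {"id": "p1", "domain_id": "d1"}, assignments) == {"admin"}
```

Note the non-inherited domain assignment case: without the flag, a domain role applies only to domain-scoped operations, not to the projects inside it.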

I'm not suggesting these solve all the issues, just that we should be aware of 
these in the upcoming discussions.

Henry
On 28 Jan 2014, at 18:35, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hi Everyone,
 
 I apologize for the obtuse title, but there isn't a better succinct term to 
 describe what is needed. OpenStack has no support for multiple owners of 
 objects. This means that a variety of private cloud use cases are simply not 
 supported. Specifically, objects in the system can only be managed on the 
 tenant level or globally.
 
 The key use case here is to delegate administration rights for a group of 
 tenants to a specific user/role. There is something in Keystone called a 
 “domain” which supports part of this functionality, but without support from 
 all of the projects, this concept is pretty useless.
 
 In IRC today I had a brief discussion about how we could address this. I have 
 put some details and a straw man up here:
 
 https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
 I would like to discuss this strawman and organize a group of people to get 
 actual work done by having an irc meeting this Friday at 1600UTC. I know this 
 time is probably a bit tough for Europe, so if we decide we need a regular 
 meeting to discuss progress then we can vote on a better time for this 
 meeting.
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
 Please note that this is going to be an active team that produces code. We 
 will *NOT* spend a lot of time debating approaches, and instead focus on 
 making something that works and learning as we go. The output of this team 
 will be a MultiTenant devstack install that actually works, so that we can 
 ensure the features we are adding to each project work together.
 
 Vish





[openstack-dev] [keystone] Filtering and limiting ready for final reviews

2014-01-20 Thread Henry Nash
Hi

Both Filtering and List Limiting are ready for final review (both were pretty 
heavily reviewed in the run-up to Havana, if you remember, but we decided to 
pull them):

https://review.openstack.org/#/c/43257/
https://review.openstack.org/#/c/44836/

The only debate on list limiting is whether we indicate to the client that the 
list has been truncated by using the return status code (e.g. 203), which was 
what we decided at the Hackathon last week, or switch to using the 'next' 
pointer in the collection return instead (e.g. 'next': 'truncated', or 
something like that).  The former is what has been implemented in the patch 
above, but it would be trivial to switch to the pointer style.
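The two truncation signals under debate can be shown side by side in a toy response builder (not the patch itself; the collection key and link shape are illustrative):

```python
# Sketch of the two truncation-indication styles debated above:
# style 1 signals via HTTP status 203, style 2 via the 'next' pointer.
def build_list_response(items, limit):
    truncated = len(items) > limit
    body = {"users": items[:limit], "links": {"next": None}}
    status = 200
    if truncated:
        status = 203                        # style 1: status code
        body["links"]["next"] = "truncated" # style 2: collection pointer
    return status, body

status, body = build_list_response(list(range(10)), limit=3)
assert status == 203 and len(body["users"]) == 3
assert body["links"]["next"] == "truncated"
```

The pointer style has the advantage of living in the body alongside the collection, where pagination links already go; the status-code style needs no body change at all.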

I think we want to get both these in for I-2, so any review help today from 
cores would be appreciated.

Henry




Re: [openstack-dev] [keystone] domain admin role query

2013-12-12 Thread Henry Nash
Hi

So the idea wasn't that you create a domain with the id of 'domain_admin_id'; 
rather that you create the domain that you plan to use for your admin domain, 
and then paste its (auto-generated) domain_id into the policy file.

Henry
On 12 Dec 2013, at 03:11, Paul Belanger paul.belan...@polybeacon.com wrote:

 On 13-12-11 11:18 AM, Lyle, David wrote:
 +1 on moving the domain admin role rules to the default policy.json
 
 -David Lyle
 
 From: Dolph Mathews [mailto:dolph.math...@gmail.com]
 Sent: Wednesday, December 11, 2013 9:04 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [keystone] domain admin role query
 
 
 On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox jamielen...@redhat.com 
 wrote:
 Using the default policies it will simply check for the admin role and not 
 care about the domain that admin is limited to. This is partially a leftover 
 from the V2 API, when there were no domains to worry about.
 
 A better example of policies are in the file etc/policy.v3cloudsample.json. 
 In there you will see the rule for create_project is:
 
   identity:create_project: rule:admin_required and 
 domain_id:%(project.domain_id)s,
 
 as opposed to (in policy.json):
 
   identity:create_project: rule:admin_required,
 
 This is what you are looking for to scope the admin role to a domain.
 
 We need to start moving the rules from policy.v3cloudsample.json to the 
 default policy.json =)
 
 
 Jamie
 
 - Original Message -
 From: Ravi Chunduru ravi...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, 11 December, 2013 11:23:15 AM
 Subject: [openstack-dev] [keystone] domain admin role query
 
 Hi,
 I am trying out Keystone V3 APIs and domains.
 I created an domain, created a project in that domain, created an user in
 that domain and project.
 Next, gave an admin role for that user in that domain.
 
 I am assuming that user is now admin to that domain.
 Now, I got a scoped token with that user, domain and project. With that
 token, I tried to create a new project in that domain. It worked.
 
 But, using the same token, I could also create a new project in a 'default'
 domain too. I expected it should throw authentication error. Is it a bug?
 
 Thanks,
 --
 Ravi
 
 
 One of the issues I had this week while using the policy.v3cloudsample.json 
 was I had no easy way of creating a domain with the id of 'admin_domain_id'.  
 I basically had to modify the SQL directly to do it.
 
 Any chance we can create a 2nd domain using 'admin_domain_id' via 
 keystone-manage sync_db?
 
 -- 
 Paul Belanger | PolyBeacon, Inc.
 Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
 Github: https://github.com/pabelanger | Twitter: 
 https://twitter.com/pabelanger
 





Re: [openstack-dev] Multidomain User Ids

2013-12-04 Thread Henry Nash

On 4 Dec 2013, at 13:28, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Sun, Nov 24, 2013 at 9:39 PM, Adam Young ayo...@redhat.com wrote:
 The #1 pain point I hear from people in the field is that they need to 
 consume read only  LDAP but have service users in something Keystone 
 specific.  We are close to having this, but we have not closed the loop.  
 This was something that was Henry's to drive home to completion.  Do we have 
 a plan?  Federation depends on this, I think, but this problem stands alone.
 
 I'm still thinking through the idea of having keystone natively federate to 
 itself out of the box, where keystone presents itself as an IdP (primarily 
 for service users). It sounds like a simpler architectural solution than 
 having to shuffle around code paths for both federated identities and local 
 identities.
  
 
 Two Solutions:
 1 always require domain ID along with the user id for role assignments.
 
 From an API perspective, how? (while still allowing for cross-domain role 
 assignments)
  
 2 provide some way to parse from the user ID what domain it is.
 
 I think you meant this one the other way around: Determine the domain given 
 the user ID.
  
 
 I was thinking that we could do something along the lines of 2, where we 
 provide a domain-specific user_id prefix - for example, if there is just one 
 LDAP service, and they wanted to prefix anything out of LDAP with ldap@, 
 then an id would be prefix + field from LDAP.  And this would be configured on 
 a per-domain basis.  This would be optional.
 
 The weakness is that it would be log N to determine which domain a user_id 
 came from.  A better approach would be to use a divider, like '@', and then 
 the prefix would be the key for a hashtable lookup.  Since it is optional, 
 domains could still be stored in SQL and user_ids could be uuids.
 
 One problem is if someone comes along later and must use email address as the 
 userid; the @ would mess them up.  So the default divider should be something 
 URL-safe but not likely to be part of a userid.  I realize that it might be 
 impossible to match this criterion.
 
I know this sounds a bit like 'back to the future', but how about we make a 
user_id passed via the API a structured field, containing a 
concatenation of domain_id and (the actual) user_id, but rather than have a 
separator, encode the start positions in the first few digits, e.g. something 
like:

Digit # Meaning
0-1 Start position of domain_id, (e.g. this will usually be 4)
2-3 Start position of user_id
4-N domain_id
M-end   user_id

We would run a migration that would convert all existing mappings.  Further, we 
would ensure (with padding if necessary) that this new user_id is ALWAYS 
larger than 64 chars - hence we could easily detect which type of ID we had.
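Henry's positional encoding can be sketched directly from the digit table above. The two-digit headers and the over-64-chars detection rule are his; the padding character is my assumption (and would itself need to be forbidden at the end of a user_id):

```python
# Sketch of the structured ID above: digits 0-1 give the start of
# domain_id, digits 2-3 the start of user_id, no separator needed.
def encode(domain_id, user_id):
    start_domain = 4                      # headers occupy digits 0-3
    start_user = start_domain + len(domain_id)
    out = "{:02d}{:02d}{}{}".format(start_domain, start_user,
                                    domain_id, user_id)
    # Pad past 64 chars so new-style IDs are always detectable by length.
    # Padding char '=' is an assumption; it must not end a real user_id.
    return out.ljust(65, "=")

def decode(composite):
    start_domain = int(composite[0:2])
    start_user = int(composite[2:4])
    return (composite[start_domain:start_user],
            composite[start_user:].rstrip("="))

c = encode("d" * 32, "u" * 20)
assert len(c) > 64
assert decode(c) == ("d" * 32, "u" * 20)
```

One limit worth noting: two-digit offsets cap the combined length at 99 chars, which is fine for a 32-char domain_id plus a 64-char local id only if the headers are widened to three digits.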

 For usernames, sure... but I don't know why anyone would care to use email 
 addresses as ID's.
  
 
 Actually, there might be other reasons to forbid @ signs from IDs, as they 
 look like phishing attempts in URLs.
 
 Phishing attempts?? They need to be encoded anyway...
  
 
 
 
 
 
 
 -- 
 
 -Dolph





Re: [openstack-dev] [Nova][Keystone][glance] Keystone V3 Domains and Nova

2013-10-01 Thread Henry Nash
I think an obvious first step of domain support outside keystone is for images. 
 Today, I believe, an image can be global or project based.  There is 
definitely a use case for a third state of being domain based - and hence 
available to all projects in that domain, but not to those in other domains.

From a Nova perspective, where domains might be relevant is in how a cloud 
provider divides up their infrastructure, for example, I think there are use 
cases for when a cloud provider might want to specify on a domain-basis:
- availability zones and/or host aggregates
- quotas
- IP ranges (ok, maybe a quantum discussion)

Henry
On 1 Oct 2013, at 06:45, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Mon, Sep 30, 2013 at 11:42 PM, Christopher Yeoh cbky...@gmail.com wrote:
 Hi,
 
 I've been looking into how Nova might support Keystone V3 domains and I'm 
 having a bit of trouble getting my head around exactly how we'd use domain 
 scoped tokens.
 
 With the V3 Nova API we no longer specify the tenant id in the url as it is 
 implicit in the token used to authorize with. This is true for Keystone V3  
 tenant scoped tokens, but not for domain scoped tokens.  If we're going to 
 use domain scoped tokens with the Nova API is the idea that a client would 
 pass the tenant id in a header as well as the domain scoped token and Nova 
 would check that the tenant passed belongs to the domain that is implicit 
 with the token?
 
 Without a specific use case to support domain-based operations, I'd be 
 opposed to handling this scenario implicitly. I have yet to hear a use case 
 for domain awareness in nova, so I'd expect nova to return a 401 for any 
 domain-based token.
  
 
 Also, should we be updating the Nova policy code to be able to handle domains?
 
 Keystone supports domain-based authorization on two levels: domain-specific 
 role assignments and domain-specific role assignments that are inherited to 
 all projects/tenants owned by that domain.
 
 With inheritance, a domain administrator can request authorization on any 
 project in the domain (from keystone, per usual) and nova's policy doesn't 
 have to handle anything special (it'll just be a regular project/tenant 
 scoped token).
  
  
 
 Regards,
 
 Chris
 
 
 
 
 
 
 -- 
 
 -Dolph


[openstack-dev] [keystone] Case sensitivity backend databases

2013-09-25 Thread Henry Nash
Hi

Do we specify somewhere whether text field matching in the API is case 
sensitive or insensitive?  I'm thinking about filters, as well as user and 
domain names in authentication.  I think our current implementation will always 
be case-sensitive for filters (since we do that in python and do not, yet, 
pass the filters to the backends), while authentication will reflect the case 
sensitivity, or lack thereof, of the underlying database.  I believe that MySQL 
is case-insensitive by default, while Postgres, SQLite and others are 
case-sensitive by default.  If using an LDAP backend, then I think this is 
case-sensitive.

The above seems to be inconsistent.  It might become even more so when we pass 
the filters to the backend.  Given that other projects already pass filters to 
the backend, we may also have inter-project inconsistencies that bleed through 
to the user experience.  Should we at least make a recommendation that the 
backend should be case-sensitive (you can configure MySQL to be so)?  Insist on 
it?  Ignore it and keep things as they are?
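For illustration only, one way to get consistent behaviour regardless of the 
backend's collation is to normalize case in the API layer before comparing - 
whether Keystone should do this is exactly the open question above:

```python
# Illustrative only: normalizing case above the backend gives the same
# answer no matter what collation the backing database uses.

def names_match(stored_name, queried_name, case_sensitive=True):
    if case_sensitive:
        return stored_name == queried_name
    # casefold() handles more aggressive case mappings than lower()
    return stored_name.casefold() == queried_name.casefold()
```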

Henry




Re: [openstack-dev] Difference between RBAC polices thats stored in policy.json and policies that can be created using openstack/identity/v3/policies

2013-08-14 Thread Henry Nash
Hi Sudheesh,

Using v3/policies is just a way of allowing other OpenStack projects (nova, 
glance) etc. a place to centrally store/access their policy files.  Keystone 
does not interpret any of the data you store here - it is simply acting as a 
central repository (where you can store a big blob of data that is, in effect, 
your policy file).  So the only place you can set policies is in the policy 
file.

Henry
On 13 Aug 2013, at 08:22, sudheesh sk wrote:

 Hi ,
 
 I am trying to understand Difference between RBAC polices thats stored in 
 policy.json and policies that can be created using 
 openstack/identity/v3/policies.
 
 I got answer from openstack forum that I can use both DB and policy.json 
 based implementation for RBAC policy management.
 
 Can you please tell me how to use DB based RBAC ? I can elaborate my question
  1. In policy.json (keystone) I am able to define a rule called admin_required 
  2. Similarly I can define rules like custom_role_required
  3. Then I can add this rule against each service (e.g. 
 identity:list_users = custom_role_required).  How can I use this for DB based 
 RBAC policies? Also there are code like self.policy_api.enforce(context, 
 creds, 'admin_required', {}) in many places (this is in wsgi.py) 
 
 How can I utilize the same code and at the same time move the policy 
 definition to DB
 
 Thanks a million,
 Sudheesh
 
 


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Hi

So, a few comebacks to the various comments:

1) While I understand the idea that a client would follow the next/prev links 
returned in collections, I wasn't aware that we considered 'page'/'per-page' as 
not standardized.   We list these explicitly throughout the identity API spec 
(look in each List 'entity' example).  How I imagined it would work would be:

a) If a client did not include 'page' in the url we would not paginate
b) Once we are paginating, a client can either build the next/prevs urls 
themselves if they want (by incrementing/decrementing the page number), or just 
follow the next/prev links (which come with the appropriate 'page=x' in them) 
returned in the collection which saves them having to do this.
c) Regarding implementation, the controller would continue to be able to 
paginate on behalf of drivers that couldn't, but those pagination-aware drivers 
would take over that capability (and indicate the state of the pagination to 
the controller so that it can build the correct next/prev links)

2) On the subject of huge enumerates, options are:
a) Support a backend manager scoped (i.e. identity/assignent/token) limit in 
the conf file which would be honored by drivers.  Assuming that you set this 
larger than your pagination limit, this would make sense whether your driver is 
paginating or not in terms of minimizing the delay in responding data as well 
as not messing up pagination.  In the non-paginated case when we hit the limit, 
should we indicate this to the client?  Maybe a 206 return code?  Although i) 
not quite sure that meets http standards, and ii) would we break a bunch of 
clients by doing this?
b) We scrap the whole idea of pagination and just set a conf limit as in 2a).  
To make this work, of course, we must implement any defined filters in the 
backend (otherwise we still end up with today's performance problems - remember 
that today, in general, filtering is done in the controller on a full 
enumeration of the entities in question).  I was planning to implement this 
backend filtering anyway as part of (or on top of) my change, since we are 
holding (at least one of) our hands behind our backs right now by not doing so. 
And our filters need to be powerful; do we support wildcards, for example, e.g. 
GET /users?name=fred* ?
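By way of a sketch (the escaping rules here are my assumption, not an agreed 
design), a backend might translate such a wildcard filter into a SQL LIKE 
pattern:

```python
# Sketch: translate an API wildcard filter (e.g. name=fred*) into a SQL
# LIKE pattern, escaping characters that are wildcards in LIKE itself.

def wildcard_to_like(filter_value):
    escaped = (filter_value.replace("\\", "\\\\")
                           .replace("%", "\\%")
                           .replace("_", "\\_"))
    # only '*' in the API filter becomes the SQL any-sequence wildcard
    return escaped.replace("*", "%")
```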
 
Henry

On 13 Aug 2013, at 04:40, Adam Young wrote:

 On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis) wrote:
 The main reason I use user lists (i.e. keystone user-list) is to get the 
 list of usernames/IDs for other keystone commands. I do not see the value of 
 showing all of the users in an LDAP server when they are not part of the 
 keystone database (i.e. do not have roles assigned to them). Performing a 
 “keystone user-list” command against the HP Enterprise Directory locks up 
 keystone for about 1 ½ hours in that it will not perform any other commands 
 until it is done.  If it is decided that user lists are necessary, then at a 
 minimum they need to be paged to return control back to keystone for another 
 command.
 
 We need a way to tell HP ED to limit the number of rows, and to do filtering.
 
 We have a bug for the second part.  I'll open one for the limit.
 
  
 Mark
  
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Monday, August 12, 2013 5:27 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [keystone] Pagination
  
 On 08/12/2013 05:34 PM, Henry Nash wrote:
 Hi
  
 I'm working on extending the pagination into the backends.  Right now, we 
  handle the pagination in the v3 controller class...and in fact it is 
 disabled right now and we return the whole list irrespective of whether 
 page/per-page is set in the query string, e.g.:
 Pagination is a broken concept. We should not be returning lists so long 
 that we need to paginate.  Instead, we should have query limits, and filters 
 to refine the queries.
 
 Some people are doing full user lists against LDAP.  I don't need to tell 
 you how broken that is.  Why do we allow user-list at the Domain (or 
 unscoped level)?  
 
 I'd argue that we should drop enumeration of objects in general, and 
 certainly limit the number of results that come back.  Pagination in LDAP 
  requires cursors, and thus continuous connections from Keystone to 
 LDAP...this is not a scalable solution.
 
 Do we really need this?
 
 
 
  
  def paginate(cls, context, refs):
      """Paginates a list of references by page & per_page query strings."""
      # FIXME(dolph): client needs to support pagination first
      return refs
  
 page = context['query_string'].get('page', 1)
 per_page = context['query_string'].get('per_page', 30)
 return refs[per_page * (page - 1):per_page * page]
  
  I wonder both for the V3 controller (which still needs to handle pagination 
  for backends that do not support it) and the backends that do...whether we 
  could use whether 'page' is defined in the query-string as an indicator

Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Jay,

Thanks for all the various links - most useful.

To map this into keystone context, if we were to follow this logic we would:

1) Support 'limit' and 'marker' (as opposed to 'page', 'page_size', or anything 
else).  These would be standard, independent of what backing store keystone was 
using.  If neither is included in the url, then we return the first N entries, 
where N is defined by the cloud provider.  This ensures that, for at least 
smaller deployments, non-pagination-aware clients still work.  If either 
'limit' or 'marker' is specified, then we paginate, passing them down into the 
driver layer wherever possible to ensure efficiency (some drivers may not be 
able to support pagination, hence we will do this, inefficiently, at a higher 
layer)
2) If we are paginating at the driver level, we must, by definition, be doing 
all the filtering down there as well (otherwise it all gets mucked up)
3) We should look at supporting the other standard options (sort order etc.), 
but irrespective of that, by definition, we must ensure that any driver that 
is paginating gets its entries back in a consistent order (otherwise, 
again, pagination doesn't work reliably)
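An in-memory sketch of the limit/marker contract described in 1) and 3) - a 
real driver would push this into the SQL or LDAP query, and list_entities is a 
hypothetical name:

```python
# Sketch of marker/limit paging.  Pagination only works over a consistent
# order (point 3 above), so we sort by id before slicing.

def list_entities(entities, marker=None, limit=None):
    """Return entries after `marker` (an entity id), up to `limit` of them."""
    ordered = sorted(entities, key=lambda e: e["id"])
    start = 0
    if marker is not None:
        ids = [e["id"] for e in ordered]
        start = ids.index(marker) + 1  # ValueError signals a stale/bad marker
    return ordered[start:start + limit] if limit is not None else ordered[start:]
```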

Henry
On 13 Aug 2013, at 18:10, Jay Pipes wrote:

 On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:
 The marker/limit pagination scheme is inferior.
 
 A bold statement that flies in the face of experience and the work already 
 done in all the other projects.
 
 The use of page/page_size allows access to arbitrary pages, whereas 
 limit/marker only allows forward progress.
 
 I don't see this as a particularly compelling use case considering the 
 performance manifestations of using LIMIT OFFSET pagination.
 
 In Horizon's use case, with page/page_size we can provide the user access to 
 any page they have already visited, rather than just the previous page 
 (using prev/next links returned in the response).
 
 I don't see this as a particularly useful thing, but in any case, you could 
 still do this by keeping the markers for previous pages on the client 
 (Horizon) side.
 
 The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
 queries and to force proper index usage in the listing queries.
 
 You can see the original discussion about this from more than two years and 
 even see where I was originally arguing for a LIMIT OFFSET strategy and was 
 brought around to the current limit/marker strategy by the responses of 
 Justin Santa Barbara and Greg Holt:
 
 https://lists.launchpad.net/openstack/msg02548.html
 
 Best,
 -jay
 
 -David
 
 On 08/13/2013 10:29 AM, Pipes, Jay wrote:
 
 On 08/13/2013 03:05 AM, Yee, Guang wrote:
 Passing the query parameters, whatever they are, into the driver if
 the given driver supports pagination and allowing the driver to
 override the manager default pagination functionality seem reasonable to 
 me.
 
 Please do use the standards that are supported in other OpenStack services 
 already: limit, marker, sort_key and sort_dir.
 
 Pagination is meaningless without a sort key and direction, so picking a 
 sensible default for user/project records is good. I'd go with either 
 created_at (what Glance/Nova/Cinder use..) or with the user/project UUID.
 
 The Glance DB API pagination is well-documented and clean [1]. I highly 
 recommend it as a starting point.
 
 Nova uses the same marker/limit/sort_key/sort_dir options for queries that 
 it allows pagination on. An example is the
 instance_get_all_by_filters() call [2].
 
 Cinder uses the same marker/limit/sort_key/sort_dir options for query 
 pagination as well. [3]
 
 Finally, I'd consider supporting the standard change-since parameter for 
 listing operations. Both Nova [4] and Glance [5] support the parameter, 
 which is useful for tools that poll the APIs for new events/records.
 
 In short, go with what is already a standard in the other projects...
 
 Best,
 -jay
 
 [1]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
 [2]
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
 [3]
 https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
 [4]
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
 [5]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618
 
 
 
 
 



[openstack-dev] [keystone] Pagination

2013-08-12 Thread Henry Nash
Hi

I'm working on extending the pagination into the backends.  Right now, we 
handle the pagination in the v3 controller class...and in fact it is disabled 
right now and we return the whole list irrespective of whether page/per-page is 
set in the query string, e.g.:

def paginate(cls, context, refs):
    """Paginates a list of references by page & per_page query strings."""
    # FIXME(dolph): client needs to support pagination first
    return refs

    page = context['query_string'].get('page', 1)
    per_page = context['query_string'].get('per_page', 30)
    return refs[per_page * (page - 1):per_page * page]

I wonder, both for the V3 controller (which still needs to handle pagination for 
backends that do not support it) and the backends that do...whether we could 
use whether 'page' is defined in the query-string as an indicator as to whether 
we should paginate or not?  That way clients who can handle it can ask for it; 
those that don't...will just get everything.  
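The opt-in behaviour suggested above might look something like this - a sketch 
against the controller snippet, not working Keystone code:

```python
# Sketch: paginate only when the client sent 'page', per the proposal.

def paginate(query_string, refs, default_per_page=30):
    if 'page' not in query_string:
        return refs  # non-paginating clients keep getting the whole list
    page = int(query_string.get('page', 1))
    per_page = int(query_string.get('per_page', default_per_page))
    return refs[per_page * (page - 1):per_page * page]
```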

Henry



Re: [openstack-dev] Problem using Keystone split backend

2013-08-09 Thread Henry Nash
Yep, agreed, certainly my code...if you can provide as much info in the bug as 
possible, and I'll get straight on it.

Henry
On 9 Aug 2013, at 00:14, Adam Young wrote:

 On 08/08/2013 03:58 PM, Gaspareto, Otavio wrote:
  
 Hi all,
  
 I’m using the Keystone split backend code (H2) with both identity and 
 assignment using SQL drivers. During the process of authentication thru 
 Horizon, I’m getting the following error in the log:
  
 Traceback (most recent call last):
   File /usr/lib/python2.6/site-packages/keystone/common/wsgi.py, line 240, 
 in __call__
 result = method(context, **params)
   File /usr/lib/python2.6/site-packages/keystone/token/controllers.py, 
 line 80, in authenticate
 context, auth)
   File /usr/lib/python2.6/site-packages/keystone/token/controllers.py, 
 line 257, in _authenticate_local
 user_id, tenant_id)
   File /usr/lib/python2.6/site-packages/keystone/token/controllers.py, 
 line 371, in _get_project_roles_and_ref
 user_id, tenant_id)
   File /usr/lib/python2.6/site-packages/keystone/identity/core.py, line 
 123, in get_roles_for_user_and_project
 user_id, tenant_id)
   File /usr/lib/python2.6/site-packages/keystone/assignment/core.py, line 
 125, in get_roles_for_user_and_project
 user_role_list = _get_user_project_roles(user_id, project_ref)
   File /usr/lib/python2.6/site-packages/keystone/assignment/core.py, line 
 107, in _get_user_project_roles
 metadata_ref.get('roles', {}), False)
   File /usr/lib/python2.6/site-packages/keystone/common/manager.py, line 
 44, in _wrapper
 return f(*args, **kw)
   File /usr/lib/python2.6/site-packages/keystone/assignment/core.py, line 
 216, in _roles_from_role_dicts
 if ((not d.get('inherited_to') and not inherited) or
 
 
 This looks like a bug, and, if I guess correctly, Henry Nash's inheritance 
 change is to blame.
 
 commit 7f4891ddc3da7457df09c0cc8bbfe8a888063feb
  Implement role assignment inheritance (OS-INHERIT extension)
 
 Please file it in Launchpad,
  
 
 There are known issues with Unicode and LDAP, and I suspect you are tripping 
 over that. 
 
 
 AttributeError: 'unicode' object has no attribute 'get'
  
  
 I’m using a fresh tenant/role/user that were created using the V2.0. Has 
 anybody faced this problem too?
  
 Thanks.
 
 


Re: [openstack-dev] [Keystone] Validating UUID tokens with V3 API

2013-08-07 Thread Henry Nash
Hi Jamie,

So potentially part of this solution may be one of the blueprints and 
implementation I am currently working on to allow policy rules to specify 
fields in the target entity that an API is accessing, see:

https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
https://review.openstack.org/#/c/38308/3

I have not yet quite completed this and am just about to modify the auth 
controller to support this - potentially it could mean you could have a policy 
rule that meant, for instance, your user_id would have to match the one in the 
token that already existed (for the example you raise)?
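To make the idea concrete, here is a toy evaluator for a single 
attribute-style rule; the rule syntax mimics keystone policy checks such as 
user_id:%(target.token.user_id)s, but the code (and the target flattening) is 
purely illustrative:

```python
# Toy illustration of policy-on-target: the right-hand side of the rule
# is filled in from the target object before comparing with the caller's
# credentials.  Not Keystone code.

def flatten(d, prefix=""):
    """Flatten nested dicts into dotted keys, e.g. {'a': {'b': 1}} -> {'a.b': 1}."""
    out = {}
    for k, v in d.items():
        key = "%s.%s" % (prefix, k) if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out


def check(rule, creds, target):
    cred_key, _, match = rule.partition(":")
    expected = match % flatten({"target": target})
    return creds.get(cred_key) == expected
```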

As to the general need for a description of how policy, domains and roles all 
work together - I'm going to be writing this up for inclusion in the openstack 
docs for Havana - I'll let you see the draft as soon as I have it.

Henry
On 7 Aug 2013, at 09:07, Jamie Lennox wrote:

 Regarding my blueprint
 https://blueprints.launchpad.net/python-keystoneclient/+spec/keystoneclient-auth-token
 and Guang's bug
 https://bugs.launchpad.net/python-keystoneclient/+bug/1207922
 (auth_token middleware always use v2.0 to request admin token), i'm
 trying to make a v3 client capable of validating a UUID token. 
 
 For V2 without domains the concept of an admin user/role is simple and
 so to validate UUID tokens from auth_token middleware you simply need a
 username/password or token. 
 
 For V3 do we have any concept of what policies will need to be in place
 to say that a user has the privilege to validate another user's token.
 The problem comes in that to use the client we should scope a user to a
 domain or a project. If you don't do this you don't receive the catalog
 of links, and the client is not populated with the management_url you
 should be using for identity. A solution to this would be to hack around
 the issue and just assume that if a v3 client is instantiated with an
 unscoped client then you assume that the management url is the same as
 the client discovered url. This is generally wrong, but also i'm not
 sure that the call to POST /v3/auth/token makes sense when authenticated
 with an unscoped token.
 
 Using username in v3 is also not appropriate without specifying the
 domain name that the username is unique to so at the very least this
 will need to get added to auth_token. 
 
 However what happens with a scoped token? If auth_token has a token
 scoped differently to the token it is validating then the same 'admin'
 role should not apply (at least theoretically, i've only been an
 observer on the roles/domains debates). 
 
 The best way i can see out of it is a special type of role or domain
 whose users have certain permissions across all the keystone domains. Is
 there something like this already? Is the 'admin' concept global across
 domains? 
 
 Can someone with a better understanding of roles, policy, and domains in
 V3 explain how this should work? 
 
 
 Thanks, 
 
 Jamie 
 
 


Re: [openstack-dev] Proposing Morgan Fainberg for keystone-core

2013-08-06 Thread Henry Nash
+1 from me, Morgan would be a great addition.

Henry
On 6 Aug 2013, at 20:20, Dolph Mathews wrote:

 Through feedback on code reviews and blueprints, Morgan clearly has the best 
 interests of the project itself in mind. I'd love for his votes to carry a 
 bit more weight!
 
   https://review.openstack.org/#/dashboard/2903
 
 Respond with +1/-1's before Friday, thanks!
 
 -Dolph


Re: [openstack-dev] Change in openstack/keystone[master]: Implement domain specific Identity backends

2013-08-06 Thread Henry Nash
Hi Mark,

Of particular interest are your views on the changes to 
keystone/common/config.py.  The requirement is that we need to be able to 
instantiate multiple conf objects (built from different sets of config files).  
We tried two approaches to this:

https://review.openstack.org/#/c/39530/11 which attempts to keep the current 
keystone config helper apps (register_bool() etc.) by passing on the conf 
instance, and
https://review.openstack.org/#/c/39530/12 which removes these helper apps and 
just calls the methods on the conf itself (conf.register_opt())

Both functionally work, but interested in your views on both approaches.

Henry
On 6 Aug 2013, at 19:26, ayoung (Code Review) wrote:

 Hello Mark McLoughlin,
 
 I'd like you to do a code review.  Please visit
 
   https://review.openstack.org/39530
 
 to review the following change.
 
 Change subject: Implement domain specific Identity backends
 ..
 
 Implement domain specific Identity backends
 
 A common scenario in shared clouds will be that a cloud provider will
 want to be able to offer larger customers the ability to interface to
 their chosen identity provider. In the base case, this might well be
 their own corporate LDAP/AD directory.  A cloud provider might also
 want smaller customers to have their identity managed solely
 within the OpenStack cloud, perhaps in a shared SQL database.
 
 This patch allows domain specifc backends for identity objects
 (namely User and groups), which are specified by creation of a domain
 configuration file for each domain that requires its own backend.
 
 A side benefit of this change is that it clearly separates the
 backends into those that are domain-aware and those that are not,
 allowing, for example, the removal of domain validation from the
 LDAP identity backend.
 
 Implements bp multiple-ldap-servers
 
 Change-Id: I489e8e50035f88eca4235908ae8b1a532645daab
 ---
 M doc/source/configuration.rst
 M etc/keystone.conf.sample
 M keystone/auth/plugins/password.py
 M keystone/catalog/backends/templated.py
 M keystone/common/config.py
 M keystone/common/controller.py
 M keystone/common/ldap/fakeldap.py
 M keystone/common/utils.py
 M keystone/config.py
 M keystone/identity/backends/kvs.py
 M keystone/identity/backends/ldap.py
 M keystone/identity/backends/pam.py
 M keystone/identity/backends/sql.py
 M keystone/identity/controllers.py
 M keystone/identity/core.py
 M keystone/test.py
 M keystone/token/backends/memcache.py
 M keystone/token/core.py
 A tests/backend_multi_ldap_sql.conf
 A tests/keystone.Default.conf
 A tests/keystone.domain1.conf
 A tests/keystone.domain2.conf
 M tests/test_backend.py
 M tests/test_backend_ldap.py
 24 files changed, 1,028 insertions(+), 372 deletions(-)
 
 
 git pull ssh://review.openstack.org:29418/openstack/keystone 
 refs/changes/30/39530/12
 --
 To view, visit https://review.openstack.org/39530
 To unsubscribe, visit https://review.openstack.org/settings
 
 Gerrit-MessageType: newchange
 Gerrit-Change-Id: I489e8e50035f88eca4235908ae8b1a532645daab
 Gerrit-PatchSet: 12
 Gerrit-Project: openstack/keystone
 Gerrit-Branch: master
 Gerrit-Owner: henry-nash hen...@linux.vnet.ibm.com
 Gerrit-Reviewer: Brant Knudson bknud...@us.ibm.com
 Gerrit-Reviewer: Dolph Mathews dolph.math...@gmail.com
 Gerrit-Reviewer: Jenkins
 Gerrit-Reviewer: Mark McLoughlin mar...@redhat.com
 Gerrit-Reviewer: Sahdev Zala spz...@us.ibm.com
 Gerrit-Reviewer: SmokeStack
 Gerrit-Reviewer: ayoung ayo...@redhat.com
 Gerrit-Reviewer: henry-nash hen...@linux.vnet.ibm.com
 





[openstack-dev] Separate Migration Repos

2013-07-31 Thread Henry Nash
Hi Adam,

Wanted to just give you more detail on the issue I keep pressing on for your 
change (https://review.openstack.org/#/c/36731/).

For extensions which create their own private tables, I totally get it.  I'd 
like, however, to understand what happens for a more complex extension.  Let's 
imagine an (only-partially) hypothetical example of an extension that does (all 
of) the following:

1) It adds or changes the use of some columns in existing core tables, and has 
migrations and code that goes along with that.
2) It adds a new private table, and has all the code to handle that
3) New APIs etc. to create new REST calls to drive the extension

It is part 1) in the above that I am trying to understand how we would 
implement in this new model.  What I am imagining is that the best way to do 1) 
is that you would break (at least part of it) out of the extension and it would 
be a core patch.  This would cover modifications to core columns and changing 
any core code to make sure that such changes were benign to the rest of core 
(and indeed any other extensions).  Migrations for this part of the schema 
change would be in the core repo.  Our new extension would then build on this, 
have its private new table in its own repo and any unique code in contrib. Is 
that how you imagined this working?

This hypothetical example is, of course, not too far from reality - the recent 
change I did for  inherited roles (https://review.openstack.org/#/c/35986/) is 
an example that comes close to the above - and it would seem to me that it 
would be much safer (from a code dependency point of view) to have the DB 
changes done separately and integrated into core - and the extensions just, in 
this case, use the advantages of the new schema to provide its functionality.

Henry




Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-24 Thread Henry Nash
I think we should transfer this discussion to the etherpad for this blueprint: 
https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's make any 
further comments there, rather than here.

Henry
On 24 Jul 2013, at 00:29, Simo Sorce wrote:

 On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:
 ...the problem is that if the object does not exist we might not be able to 
 tell whether the user is authorized or not (since authorization might depend 
 on attributes of the object itself)...so how do we know whether to lie or 
 not?
 
 If the error you return is always 'Not Found', why do you care ?
 
 Simo.
 
 Henry
 On 23 Jul 2013, at 21:23, David Chadwick wrote:
 
 
 
 On 23/07/2013 19:02, Henry Nash wrote:
 One thing we could do is:
 
 - Return Forbidden or NotFound if we can determine the correct answer
 - When we can't (i.e. the object doesn't exist), then return NotFound
 unless a new config value 'policy_harden' (?) is set to true (default
 false) in which case we translate NotFound into Forbidden.
 
 I am not sure that this achieves your objective of no data leakage through 
 error codes, does it?
 
  It's not a question of determining the correct answer or not, it's a question 
  of whether the user is authorised to see the correct answer or not
 
 regards
 
 David
 
 Henry
 On 23 Jul 2013, at 18:31, Adam Young wrote:
 
 On 07/23/2013 12:54 PM, David Chadwick wrote:
 When writing a previous ISO standard the approach we took was as follows
 
 Lie to people who are not authorised.
 
 Is that your verbage?  I am going to reuse that quote, and I would
 like to get the attribution correct.
 
 
 So applying this approach to your situation, you could reply Not
 Found to people who are authorised to see the object if it had
 existed but does not, and Not Found to those not authorised to see
 it, regardless of whether it exists or not. In this case, only those
 who are authorised to see the object will get it if it exists. Those
  not authorised cannot tell the difference between objects that don't
  exist and those that do exist
 
 So, to try and apply this to a semi-real example:  There are two types
 of URLs.  Ones that are like this:
 
 users/55FEEDBABECAFE
 
 and ones like this:
 
 domain/66DEADBEEF/users/55FEEDBABECAFE
 
 
 In the first case, you are selecting against a global collection, and
 in the second, against a scoped collection.
 
 For unscoped, you have to treat all users as equal, and thus a 404
 probably makes sense.
 
 For a scoped collection we could return a 404 or a 403 Forbidden
 https://en.wikipedia.org/wiki/HTTP_403 based on the user's
 credentials:  all resources under domain/66DEADBEEF  would show up
 as 403s, regardless of whether they exist, if the user had no roles in
 the domain 66DEADBEEF.  A user that would be allowed access to
 resources in 66DEADBEEF  would get a 403 only for an object that
 existed but that they had no permission to read, and a 404 for a
 resource that doesn't exist.
 
 
 
 
 
 regards
 
 David
 
 
 On 23/07/2013 16:40, Henry Nash wrote:
 Hi
 
 As part of bp
 https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
 I have uploaded some example WIP code showing a proposed approach
 for just a few API calls (one easy, one more complex). I'd
 appreciate early feedback on this before I take it any further.
 
 https://review.openstack.org/#/c/38308/
 
 A couple of points:
 
 - One question is on how to handle errors when you are going to get
 a target object before doing your policy check.  What do you do if
 the object does not exist?  If you return NotFound, then someone
 who was not authorized could trawl for the existence of entities by
 seeing whether they got NotFound or Forbidden. If, however, you
 return Forbidden, then users who are authorized to, say, manage
 users in a domain would always get Forbidden for objects that didn't
 exist (since we can't tell where the non-existent object would have
 been!).  So this would modify the expected return codes.
 
 - I really think we need some good documentation on how to build
 keystone policy files.  I'm happy to take a first cut at such a
 thing - what do you think the right place is for such documentation?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

[openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Henry Nash
Hi

As part of bp 
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I have 
uploaded some example WIP code showing a proposed approach for just a few API 
calls (one easy, one more complex).  I'd appreciate early feedback on this 
before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get a target 
object before doing your policy check.  What do you do if the object does not 
exist?  If you return NotFound, then someone who was not authorized could 
trawl for the existence of entities by seeing whether they got NotFound or 
Forbidden.  If, however, you return Forbidden, then users who are authorized to, 
say, manage users in a domain would always get Forbidden for objects that didn't 
exist (since we can't tell where the non-existent object would have been!).  So 
this would modify the expected return codes.
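To make the fetch-then-check ordering concrete, here is a minimal sketch of the pattern in question. It is illustrative only -- `store` and `is_authorized` are stand-ins, not the actual WIP code in the review:

```python
class NotFound(Exception):
    pass

class Forbidden(Exception):
    pass

def protected_get(context, obj_id, store, is_authorized):
    """Fetch the target object first, then run the policy check
    against its attributes."""
    if obj_id not in store:
        # The target is missing, so target-based policy rules cannot
        # be evaluated: an error must be chosen before authorization
        # is known -- exactly the leak discussed above.
        raise NotFound(obj_id)
    target = store[obj_id]
    if not is_authorized(context, target):
        raise Forbidden(obj_id)
    return target
```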

- I really think we need some good documentation on how to build keystone policy 
files.  I'm happy to take a first cut at such a thing - what do you think the 
right place is for such documentation?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Henry Nash
One thing we could do is:

- Return Forbidden or NotFound if we can determine the correct answer
- When we can't (i.e. the object doesn't exist), then return NotFound unless a 
new config value 'policy_harden' (?) is set to true (default false) in which 
case we translate NotFound into Forbidden.
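That translation step could be as small as this. It is a sketch of the proposal only, assuming a boolean `policy_harden` config option; the exception classes are stand-ins for keystone's own:

```python
class NotFound(Exception):
    pass

class Forbidden(Exception):
    pass

def translate_lookup_error(exc, policy_harden=False):
    """When the target lookup fails and policy_harden is set, hide
    whether the object exists by reporting Forbidden instead."""
    if policy_harden and isinstance(exc, NotFound):
        return Forbidden(*exc.args)
    return exc
```

With the default `policy_harden=False`, behaviour is unchanged, so existing clients that rely on NotFound keep working.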

Henry
On 23 Jul 2013, at 18:31, Adam Young wrote:

 On 07/23/2013 12:54 PM, David Chadwick wrote:
 When writing a previous ISO standard the approach we took was as follows 
 
 Lie to people who are not authorised. 
 
 Is that your verbiage?  I am going to reuse that quote, and I would like to 
 get the attribution correct.
 
 
 So applying this approach to your situation, you could reply Not Found to 
 people who are authorised to see the object if it had existed but does not, 
 and Not Found to those not authorised to see it, regardless of whether it 
 exists or not. In this case, only those who are authorised to see the object 
 will get it if it exists. Those not authorised cannot tell the difference 
 between objects that don't exist and those that do exist.
 
 So, to try and apply this to a semi-real example:  There are two types of 
 URLs.  Ones that are like this:
 
 users/55FEEDBABECAFE
 
 and ones like this:
 
 domain/66DEADBEEF/users/55FEEDBABECAFE
 
 
 In the first case, you are selecting against a global collection, and in the 
 second, against a scoped collection.
 
 For unscoped, you have to treat all users as equal, and thus a 404 probably 
 makes sense.
 
 For a scoped collection we could return a 404 or a 403 Forbidden based on the 
 user's credentials:  all resources under domain/66DEADBEEF  would show up 
 as 403s, regardless of whether they exist, if the user had no roles in the 
 domain 66DEADBEEF.  A user that would be allowed access to resources in 
 66DEADBEEF  would get a 403 only for an object that existed but that they 
 had no permission to read, and a 404 for a resource that doesn't exist.
 
 
 
 
 
 regards 
 
 David 
 
 
 On 23/07/2013 16:40, Henry Nash wrote: 
 Hi 
 
 As part of bp 
 https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I have 
 uploaded some example WIP code showing a proposed approach for just a few 
 API calls (one easy, one more complex).  I'd appreciate early feedback on 
 this before I take it any further. 
 
 https://review.openstack.org/#/c/38308/ 
 
 A couple of points: 
 
 - One question is on how to handle errors when you are going to get a 
 target object before doing your policy check.  What do you do if the object 
 does not exist?  If you return NotFound, then someone who was not 
 authorized could trawl for the existence of entities by seeing whether 
 they got NotFound or Forbidden.  If, however, you return Forbidden, then 
 users who are authorized to, say, manage users in a domain would always get 
 Forbidden for objects that didn't exist (since we can't tell where the 
 non-existent object would have been!).  So this would modify the expected return codes. 
 
 - I really think we need some good documentation on how to build keystone 
 policy files.  I'm happy to take a first cut at such a thing - what do you 
 think the right place is for such documentation? 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] New blueprint and implementation to standardize getting role assignments at authentication time

2013-07-07 Thread Henry Nash
Hi

In thinking about how to implement the OS-INHERIT extension as well as planning 
for simplification in Icehouse of all our backend grants tables, I realized we 
needed to rationalise the various methodologies for getting the list 
of roles in the token/auth controllers (v2 local is different to v2 
remote/token, which again is different to v3).  This makes all this code hard to 
maintain - and in at least one case wrong (e.g. if your only role on a project 
is via group membership, authenticating using v2 will fail).
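The group-membership case that v2 gets wrong comes down to merging two sources of grants. A hypothetical sketch of the rationalised lookup follows; the dict-shaped inputs are illustrative, not keystone's backend interfaces:

```python
def roles_on_project(user_id, project_id, user_grants, group_grants, groups_of):
    """Merge roles granted to the user directly with roles granted to
    any group the user belongs to, so group-only assignments are not
    lost (the v2 authentication bug noted above).

    user_grants / group_grants map (actor_id, project_id) -> set of
    role names; groups_of maps user_id -> set of group ids.
    """
    roles = set(user_grants.get((user_id, project_id), set()))
    for group_id in groups_of.get(user_id, set()):
        roles |= group_grants.get((group_id, project_id), set())
    return roles
```

Having every token path call one function like this is what lets the change delete code rather than add it.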

The small bp 
(https://blueprints.launchpad.net/keystone/+spec/authenticate-role-rationalization)
 and a full implementation of this is now ready for review at: 
https://review.openstack.org/#/c/35897/.  A nice feature is that this has a 
negative impact on keystone code size - i.e. it removes a net of 240-odd lines 
of code :-)

As an aside, it was doing this work that I found the rather nasty bug of: 
https://bugs.launchpad.net/keystone/+bug/1197874.  A fix is also posted for 
review at https://review.openstack.org/#/c/35739/.

I think both of these should get into H2.

As a further aside, a WIP version for the OS-INHERIT extension is also posted, 
for anyone who wants to comment on the approach I am taking.

Henry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Inherited domain roles: Options for Havana and beyond

2013-06-17 Thread Henry Nash
Hi

So I am also convinced by option a)...it is just the time frame.  I would be 
happy to commit to completing this for Havana m3 - but we all agreed that any 
significant api-changing modifications would be in and stable by m2...and that 
feels like a much taller order. For example, we need to:

- Create the new entity backend storage (and decide how we map it in the 
various backend technologies, e.g. LDAP)
- Re-implement the existing v3 apis on top of it
- Implement the new role-assignment apis on top of it
- Provide DB migration up and down
- Test the hell out of it, especially the intermixing of new and old apis.

My gut feel is we would have to get agreement that we would bring this support 
in fully at m3 (maybe partial at m2) for us to commit to option a) for Havana.

Henry

On 17 Jun 2013, at 18:33, Tiwari, Arvind wrote:

 Thanks Henry for putting it all together.
 
 In my opinion we should go with option a (role-assignment as a first-class 
 citizen), which seems correct to me; looking into the time constraint, option 'b' 
 is OK as an EXTENSION but SHOULD NOT BE implemented as part of the core API.
 
 Thoughts???
 
 
 Arvind
 
 -Original Message-
 From: Henry Nash [mailto:hen...@linux.vnet.ibm.com] 
 Sent: Monday, June 17, 2013 6:52 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [keystone] Inherited domain roles: Options for 
 Havana and beyond
 
 Hi
 
 I have amended the etherpad to summarize the two proposed paths:
 
 a) Full role-assignment as a first class entity for Havana
 b) Stepping stone approach, for Havana and Icehouse
 
 So you can see what is involved in b), I have amended the two in-flight bp 
 identity proposals to match what this would look like:
 
 https://review.openstack.org/#/c/29781/14
 https://review.openstack.org/#/c/32394/7
 
 [Ok, technically I should have done this as 3 bps: one for changing the 
 current apis, one for the new GET /role-assignment api (that supports groups 
 but not inherited roles) and then an updated GET /role-assignment api that now 
 supports the inherited roles as well, and would be dependent on the other two 
 bps...but in the interests of time I kept it simple.] 
 
 We need to make a call on whether we take a) or b) at tomorrow's keystone IRC 
 meeting, if not before.  We are running out of time. 
 
 Henry
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev