Re: [openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install

2018-08-09 Thread Neil Jerram
It appears this is to do with Keystone v3-created users not having any role
assignment by default.  Big thanks to lbragstad for helping me to
understand this on IRC; he also provided this link as historical context
for this situation: https://bugs.launchpad.net/keystone/+bug/1662911.

In detail, I was creating a non-admin project and user like this:

tenant = self.keystone3.projects.create(username,
                                        "default",
                                        description=description,
                                        enabled=True)
user = self.keystone3.users.create(username,
                                   domain="default",
                                   project=tenant.id,
                                   password=password)

With just that, that user won't be able to do anything; you need to give it
a role assignment as well, for example:

admin_role = None
for role in self.keystone3.roles.list():
    _log.info("role: %r", role)
    if role.name == 'admin':
        admin_role = role
        break
assert admin_role is not None, "Couldn't find 'admin' role"
self.keystone3.roles.grant(admin_role, user=user,
                           project=tenant)
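
(A shorter equivalent of that lookup, assuming the python-keystoneclient
release in use exposes the generic find() helper on its managers, would be:)

admin_role = self.keystone3.roles.find(name='admin')
self.keystone3.roles.grant(admin_role, user=user, project=tenant)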

I still don't have a good understanding of what 'admin' within that project
really means, or why it means that that user can then do, e.g.
nova.images.list(); but at least I have a working system again.
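
For reference, here is a minimal sketch of verifying the fix from the client
side, reusing the illustrative names and credentials from the quoted message
below and assuming the role grant above is in place:

from keystoneauth1 import identity
from keystoneauth1 import session
from novaclient.client import Client as NovaClient

# Project-scoped authentication for the newly created non-admin user.
# With a role assignment on the project, this should now return a
# populated service catalog rather than a 401 or an empty catalog.
auth = identity.Password(auth_url="http://controller:5000/v3",
                         username="tenant2",
                         password="password",
                         project_name="tenant2",
                         project_domain_id="default",
                         user_domain_id="default")
sess = session.Session(auth=auth)
nova = NovaClient(2, session=sess)
print(nova.images.list())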

Regards,
 Neil


On Thu, Aug 9, 2018 at 4:42 PM Neil Jerram  wrote:

> I'd like to create a non-admin project and user that are able to do
> nova.images.list(), in a Queens install.  IIUC, all users should be able to
> do that.  I'm afraid I'm pretty lost and would appreciate any help.
>
> Define a function to test whether a particular set of credentials can do
> nova.images.list():
>
> from keystoneauth1 import identity
> from keystoneauth1 import session
> from novaclient.client import Client as NovaClient
>
> def attemp(auth):
>     sess = session.Session(auth=auth)
>     nova = NovaClient(2, session=sess)
>     for i in nova.images.list():
>         print i
>
> With an admin user, things work:
>
> >>> auth_url = "http://controller:5000/v3"
> >>> auth = identity.Password(auth_url=auth_url,
> >>>   username="admin",
> >>>   password="abcdef",
> >>>   project_name="admin",
> >>>   project_domain_id="default",
> >>>   user_domain_id="default")
> >>> attemp(auth)
> 
> 
>
> With a non-admin user with project_id specified, 401:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...   username="tenant2",
> ...   password="password",
> ...   project_id="tenant2",
> ...   user_domain_id="default")
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.http.Unauthorized: The request you have made
> requires authentication. (HTTP 401) (Request-ID:
> req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)
>
> With the same but without project_id, I get an empty service catalog
> instead:
>
> >>> tauth = identity.Password(auth_url=auth_url,
> ...   username="tenant2",
> ...   password="password",
> ...   #project_name="tenant2",
> ...   #project_domain_id="default",
> ...   user_domain_id="default")
> >>>
> >>> attemp(tauth)
> ...
> keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is
> empty.
>
> Can anyone help?
>
> Regards,
>  Neil
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova] Struggling with non-admin user on Queens install

2018-08-09 Thread Neil Jerram
I'd like to create a non-admin project and user that are able to do
nova.images.list(), in a Queens install.  IIUC, all users should be able to
do that.  I'm afraid I'm pretty lost and would appreciate any help.

Define a function to test whether a particular set of credentials can do
nova.images.list():

from keystoneauth1 import identity
from keystoneauth1 import session
from novaclient.client import Client as NovaClient

def attemp(auth):
    sess = session.Session(auth=auth)
    nova = NovaClient(2, session=sess)
    for i in nova.images.list():
        print i

With an admin user, things work:

>>> auth_url = "http://controller:5000/v3"
>>> auth = identity.Password(auth_url=auth_url,
>>>   username="admin",
>>>   password="abcdef",
>>>   project_name="admin",
>>>   project_domain_id="default",
>>>   user_domain_id="default")
>>> attemp(auth)



With a non-admin user with project_id specified, 401:

>>> tauth = identity.Password(auth_url=auth_url,
...   username="tenant2",
...   password="password",
...   project_id="tenant2",
...   user_domain_id="default")
>>> attemp(tauth)
...
keystoneauth1.exceptions.http.Unauthorized: The request you have made
requires authentication. (HTTP 401) (Request-ID:
req-ed0630a4-7df0-4ba8-a4c4-de3ecb7b4d7d)

With the same but without project_id, I get an empty service catalog
instead:

>>> tauth = identity.Password(auth_url=auth_url,
...   username="tenant2",
...   password="password",
...   #project_name="tenant2",
...   #project_domain_id="default",
...   user_domain_id="default")
>>>
>>> attemp(tauth)
...
keystoneauth1.exceptions.catalog.EmptyCatalog: The service catalog is empty.

Can anyone help?

Regards,
 Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-05 Thread Lance Bragstad


On 02/02/2018 11:56 AM, Lance Bragstad wrote:
> I apologize for using the "baremetal/VM" name, but I wanted to get an
> etherpad rolling sooner rather than later [0], since we're likely going
> to have to decide on a new name in person. I ported the initial ideas
> Colleen mentioned when she started this thread, added links to previous
> etherpads from Boston and Denver, and ported some topics from the Boston
> etherpads.
>
> Please feel free to add ideas to the list or elaborate on existing ones.
> Next week we'll start working through them and figure out what we want
> to accomplish for the session. Once we have an official room for the
> discussion, I'll add the etherpad to the list in the wiki.
Based on some discussions in #openstack-dev this morning [0], I took a
stab at working out a rough schedule for Monday and Tuesday [1]. Let me
know if you notice conflicts or want to re-propose a session/topic.

[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-05.log.html#t2018-02-05T15:45:57
[1] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg
>
> [0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg
>
>
> On 02/02/2018 11:10 AM, Zane Bitter wrote:
>> On 30/01/18 10:33, Colleen Murphy wrote:
>>> At the last PTG we had some time on Monday and Tuesday for
>>> cross-project discussions related to baremetal and VM management. We
>>> don't currently have that on the schedule for this PTG. There is still
>>> some free time available that we can ask for[1]. Should we try to
>>> schedule some time for this?
>> +1, I would definitely attend this too.
>>
>> - ZB
>>
>>>  From a keystone perspective, some things we'd like to talk about with
>>> the BM/VM teams are:
>>>
>>> - Unified limits[2]: we now have a basic REST API for registering
>>> limits in keystone. Next steps are building out libraries that can
>>> consume this API and calculate quota usage and limit allocation, and
>>> developing models for quotas in project hierarchies. Input from other
>>> projects is essential here.
>>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>>> problem, and we'd like to guide other projects through the migration.
>>> - Application credentials[4]: this main part of this work is largely
>>> done, next steps are implementing better access control for it, which
>>> is largely just a keystone team problem but we could also use this
>>> time for feedback on the implementation so far
>>>
>>> There's likely some non-keystone-related things that might be at home
>>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>>> two for these projects? Or perhaps not dedicated days, but
>>> planned-in-advance meeting time? Or should we wait and schedule it
>>> ad-hoc if we feel like we need it?
>>>
>>> Colleen
>>>
>>> [1]
>>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
>>> [2]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>>> [3]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>>> [4]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-02 Thread Lance Bragstad
I apologize for using the "baremetal/VM" name, but I wanted to get an
etherpad rolling sooner rather than later [0], since we're likely going
to have to decide on a new name in person. I ported the initial ideas
Colleen mentioned when she started this thread, added links to previous
etherpads from Boston and Denver, and ported some topics from the Boston
etherpads.

Please feel free to add ideas to the list or elaborate on existing ones.
Next week we'll start working through them and figure out what we want
to accomplish for the session. Once we have an official room for the
discussion, I'll add the etherpad to the list in the wiki.

[0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg


On 02/02/2018 11:10 AM, Zane Bitter wrote:
> On 30/01/18 10:33, Colleen Murphy wrote:
>> At the last PTG we had some time on Monday and Tuesday for
>> cross-project discussions related to baremetal and VM management. We
>> don't currently have that on the schedule for this PTG. There is still
>> some free time available that we can ask for[1]. Should we try to
>> schedule some time for this?
>
> +1, I would definitely attend this too.
>
> - ZB
>
>>  From a keystone perspective, some things we'd like to talk about with
>> the BM/VM teams are:
>>
>> - Unified limits[2]: we now have a basic REST API for registering
>> limits in keystone. Next steps are building out libraries that can
>> consume this API and calculate quota usage and limit allocation, and
>> developing models for quotas in project hierarchies. Input from other
>> projects is essential here.
>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>> problem, and we'd like to guide other projects through the migration.
>> - Application credentials[4]: this main part of this work is largely
>> done, next steps are implementing better access control for it, which
>> is largely just a keystone team problem but we could also use this
>> time for feedback on the implementation so far
>>
>> There's likely some non-keystone-related things that might be at home
>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>> two for these projects? Or perhaps not dedicated days, but
>> planned-in-advance meeting time? Or should we wait and schedule it
>> ad-hoc if we feel like we need it?
>>
>> Colleen
>>
>> [1]
>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
>> [2]
>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>> [3]
>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>> [4]
>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-02 Thread Zane Bitter

On 30/01/18 10:33, Colleen Murphy wrote:

At the last PTG we had some time on Monday and Tuesday for
cross-project discussions related to baremetal and VM management. We
don't currently have that on the schedule for this PTG. There is still
some free time available that we can ask for[1]. Should we try to
schedule some time for this?


+1, I would definitely attend this too.

- ZB


 From a keystone perspective, some things we'd like to talk about with
the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering
limits in keystone. Next steps are building out libraries that can
consume this API and calculate quota usage and limit allocation, and
developing models for quotas in project hierarchies. Input from other
projects is essential here.
- RBAC: we've introduced "system scope"[3] to fix the admin-ness
problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: this main part of this work is largely
done, next steps are implementing better access control for it, which
is largely just a keystone team problem but we could also use this
time for feedback on the implementation so far

There's likely some non-keystone-related things that might be at home
in a dedicated BM/VM room too. Do we want to have a dedicated day or
two for these projects? Or perhaps not dedicated days, but
planned-in-advance meeting time? Or should we wait and schedule it
ad-hoc if we feel like we need it?

Colleen

[1] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
[3] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[4] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-01 Thread Rico Lin
>
> Fair point. When the "VM/baremetal workgroup" was originally formed,
> the goal was more about building clouds with both types of resources,
> making them behave similarly from a user perspective, etc. Somehow
> we got into talking applications and these other topics came up, which
> seemed more interesting/pressing to fix. :)
>
> Maybe "cross-project identity integration" or something is a better name?

Cloud-native applications are, IMO, one way to see the flow for both
VM and baremetal.
That said, it would be even better if we could agree on a more specific
cross-project goal to make sure we're marching toward it (which is what the
`VM/baremetal workgroup` was formed for).
Rather than renaming the group, I'd prefer that we spend some time tracing
the current flow and come up with specific targets for teams to work on in
Rocky, so that building both types of resources feels like the same flow to
the user, which of course includes what keystone has already started. So
beyond the topics Colleen mentioned above (and I think they're all great),
we should focus on working out what topics we can come up with here (I
think that's why Colleen started this thread). Ideas?




-- 
May The Force of OpenStack Be With You,

*Rico Lin*  irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Jim Rollenhagen
On Wed, Jan 31, 2018 at 12:22 PM, Dmitry Tantsur 
wrote:

> On 01/31/2018 06:15 PM, Matt Riedemann wrote:
>
>> On 1/30/2018 9:33 AM, Colleen Murphy wrote:
>>
>>> At the last PTG we had some time on Monday and Tuesday for
>>> cross-project discussions related to baremetal and VM management. We
>>> don't currently have that on the schedule for this PTG. There is still
>>> some free time available that we can ask for[1]. Should we try to
>>> schedule some time for this?
>>>
>>>  From a keystone perspective, some things we'd like to talk about with
>>> the BM/VM teams are:
>>>
>>> - Unified limits[2]: we now have a basic REST API for registering
>>> limits in keystone. Next steps are building out libraries that can
>>> consume this API and calculate quota usage and limit allocation, and
>>> developing models for quotas in project hierarchies. Input from other
>>> projects is essential here.
>>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>>> problem, and we'd like to guide other projects through the migration.
>>> - Application credentials[4]: this main part of this work is largely
>>> done, next steps are implementing better access control for it, which
>>> is largely just a keystone team problem but we could also use this
>>> time for feedback on the implementation so far
>>>
>>> There's likely some non-keystone-related things that might be at home
>>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>>> two for these projects? Or perhaps not dedicated days, but
>>> planned-in-advance meeting time? Or should we wait and schedule it
>>> ad-hoc if we feel like we need it?
>>>
>>> Colleen
>>>
>>> [1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rI
>>> zlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25Kji
>>> uRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
>>> [2] http://specs.openstack.org/openstack/keystone-specs/specs/
>>> keystone/queens/limits-api.html
>>> [3] http://specs.openstack.org/openstack/keystone-specs/specs/
>>> keystone/queens/system-scope.html
>>> [4] http://specs.openstack.org/openstack/keystone-specs/specs/
>>> keystone/queens/application-credentials.html
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> These all seem like good topics for big cross-project issues.
>>
>> I've never liked the "BM/VM" platform naming thing, it seems to imply
>> that the only things one needs to care about for these discussions is if
>> they work on or use nova and ironic, and that's generally not the case.
>>
>
> ++ can we please rename it? I think people (myself included) will expect
> specifically something related to bare metal instances co-existing with
> virtual ones (e.g. scheduling or networking concerns). Which is also a
> great topic, but it does not seem to be present on the list.


Fair point. When the "VM/baremetal workgroup" was originally formed,
the goal was more about building clouds with both types of resources,
making them behave similarly from a user perspective, etc. Somehow
we got into talking applications and these other topics came up, which
seemed more interesting/pressing to fix. :)

Maybe "cross-project identity integration" or something is a better name?

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Colleen Murphy
On Wed, Jan 31, 2018 at 6:46 PM, Graham Hayes  wrote:
> On 31/01/18 17:22, Dmitry Tantsur wrote:
>> On 01/31/2018 06:15 PM, Matt Riedemann wrote:
>>> On 1/30/2018 9:33 AM, Colleen Murphy wrote:
[snip]
>>>
>>> These all seem like good topics for big cross-project issues.
>>>
>>> I've never liked the "BM/VM" platform naming thing, it seems to imply
>>> that the only things one needs to care about for these discussions is
>>> if they work on or use nova and ironic, and that's generally not the
>>> case.
>>
>> ++ can we please rename it? I think people (myself included) will expect
>> specifically something related to bare metal instances co-existing with
>> virtual ones (e.g. scheduling or networking concerns). Which is also a
>> great topic, but it does not seem to be present on the list.
>>
>
> Yeah - both of these topics apply to all projects. If we could get
> scheduled time for both of these, and then separate time for Ironic /
> Nova issues, it would be good.
>
>>>
>>> So if you do have a session about this really cross-project
>>> platform-specific stuff, can we at least not call it "BM/VM"? Plus,
>>> "BM" always makes me think of something I'd rather not see in a room
>>> with other people.
>>>

++

Sorry, I didn't mean to be exclusive. These topics do apply to most
projects, and it did feel awkward writing that email with keystone
goals in mind when keystone is in neither category.

Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Graham Hayes
On 31/01/18 17:22, Dmitry Tantsur wrote:
> On 01/31/2018 06:15 PM, Matt Riedemann wrote:
>> On 1/30/2018 9:33 AM, Colleen Murphy wrote:
>>> At the last PTG we had some time on Monday and Tuesday for
>>> cross-project discussions related to baremetal and VM management. We
>>> don't currently have that on the schedule for this PTG. There is still
>>> some free time available that we can ask for[1]. Should we try to
>>> schedule some time for this?
>>>
>>>  From a keystone perspective, some things we'd like to talk about with
>>> the BM/VM teams are:
>>>
>>> - Unified limits[2]: we now have a basic REST API for registering
>>> limits in keystone. Next steps are building out libraries that can
>>> consume this API and calculate quota usage and limit allocation, and
>>> developing models for quotas in project hierarchies. Input from other
>>> projects is essential here.
>>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>>> problem, and we'd like to guide other projects through the migration.
>>> - Application credentials[4]: this main part of this work is largely
>>> done, next steps are implementing better access control for it, which
>>> is largely just a keystone team problem but we could also use this
>>> time for feedback on the implementation so far
>>>
>>> There's likely some non-keystone-related things that might be at home
>>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>>> two for these projects? Or perhaps not dedicated days, but
>>> planned-in-advance meeting time? Or should we wait and schedule it
>>> ad-hoc if we feel like we need it?
>>>
>>> Colleen
>>>
>>> [1]
>>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
>>>
>>> [2]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>>>
>>> [3]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>>>
>>> [4]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> These all seem like good topics for big cross-project issues.
>>
>> I've never liked the "BM/VM" platform naming thing, it seems to imply
>> that the only things one needs to care about for these discussions is
>> if they work on or use nova and ironic, and that's generally not the
>> case.
> 
> ++ can we please rename it? I think people (myself included) will expect
> specifically something related to bare metal instances co-existing with
> virtual ones (e.g. scheduling or networking concerns). Which is also a
> great topic, but it does not seem to be present on the list.
> 

Yeah - both of these topics apply to all projects. If we could get
scheduled time for both of these, and then separate time for Ironic /
Nova issues, it would be good.

>>
>> So if you do have a session about this really cross-project
>> platform-specific stuff, can we at least not call it "BM/VM"? Plus,
>> "BM" always makes me think of something I'd rather not see in a room
>> with other people.
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Dmitry Tantsur

On 01/31/2018 06:15 PM, Matt Riedemann wrote:

On 1/30/2018 9:33 AM, Colleen Murphy wrote:

At the last PTG we had some time on Monday and Tuesday for
cross-project discussions related to baremetal and VM management. We
don't currently have that on the schedule for this PTG. There is still
some free time available that we can ask for[1]. Should we try to
schedule some time for this?

 From a keystone perspective, some things we'd like to talk about with
the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering
limits in keystone. Next steps are building out libraries that can
consume this API and calculate quota usage and limit allocation, and
developing models for quotas in project hierarchies. Input from other
projects is essential here.
- RBAC: we've introduced "system scope"[3] to fix the admin-ness
problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: this main part of this work is largely
done, next steps are implementing better access control for it, which
is largely just a keystone team problem but we could also use this
time for feedback on the implementation so far

There's likely some non-keystone-related things that might be at home
in a dedicated BM/VM room too. Do we want to have a dedicated day or
two for these projects? Or perhaps not dedicated days, but
planned-in-advance meeting time? Or should we wait and schedule it
ad-hoc if we feel like we need it?

Colleen

[1] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true 

[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html 

[3] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html 

[4] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



These all seem like good topics for big cross-project issues.

I've never liked the "BM/VM" platform naming thing, it seems to imply that the 
only things one needs to care about for these discussions is if they work on or 
use nova and ironic, and that's generally not the case.


++ can we please rename it? I think people (myself included) will expect 
specifically something related to bare metal instances co-existing with virtual 
ones (e.g. scheduling or networking concerns). Which is also a great topic, but 
it does not seem to be present on the list.




So if you do have a session about this really cross-project platform-specific 
stuff, can we at least not call it "BM/VM"? Plus, "BM" always makes me think of 
something I'd rather not see in a room with other people.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Matt Riedemann

On 1/30/2018 9:33 AM, Colleen Murphy wrote:

At the last PTG we had some time on Monday and Tuesday for
cross-project discussions related to baremetal and VM management. We
don't currently have that on the schedule for this PTG. There is still
some free time available that we can ask for[1]. Should we try to
schedule some time for this?

 From a keystone perspective, some things we'd like to talk about with
the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering
limits in keystone. Next steps are building out libraries that can
consume this API and calculate quota usage and limit allocation, and
developing models for quotas in project hierarchies. Input from other
projects is essential here.
- RBAC: we've introduced "system scope"[3] to fix the admin-ness
problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: this main part of this work is largely
done, next steps are implementing better access control for it, which
is largely just a keystone team problem but we could also use this
time for feedback on the implementation so far

There's likely some non-keystone-related things that might be at home
in a dedicated BM/VM room too. Do we want to have a dedicated day or
two for these projects? Or perhaps not dedicated days, but
planned-in-advance meeting time? Or should we wait and schedule it
ad-hoc if we feel like we need it?

Colleen

[1] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
[3] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[4] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



These all seem like good topics for big cross-project issues.

I've never liked the "BM/VM" platform naming thing, it seems to imply 
that the only things one needs to care about for these discussions is if 
they work on or use nova and ironic, and that's generally not the case.


So if you do have a session about this really cross-project 
platform-specific stuff, can we at least not call it "BM/VM"? Plus, "BM" 
always makes me think of something I'd rather not see in a room with 
other people.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Lance Bragstad


On 01/30/2018 09:33 AM, Colleen Murphy wrote:
> At the last PTG we had some time on Monday and Tuesday for
> cross-project discussions related to baremetal and VM management. We
> don't currently have that on the schedule for this PTG. There is still
> some free time available that we can ask for[1]. Should we try to
> schedule some time for this?
>
> From a keystone perspective, some things we'd like to talk about with
> the BM/VM teams are:
>
> - Unified limits[2]: we now have a basic REST API for registering
> limits in keystone. Next steps are building out libraries that can
> consume this API and calculate quota usage and limit allocation, and
> developing models for quotas in project hierarchies. Input from other
> projects is essential here.
> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
> problem, and we'd like to guide other projects through the migration.
> - Application credentials[4]: this main part of this work is largely
> done, next steps are implementing better access control for it, which
> is largely just a keystone team problem but we could also use this
> time for feedback on the implementation so far
So, I'm probably biased, but a huge +1 for me. I think the last
baremetal/vm session in Denver was really productive and led to most of
what we accomplished this release. Who else do we need to get involved
in order to get this scheduled? Do we need some more projects to show up
(e.g. cinder, nova, neutron)?

Tacking on the RBAC stuff, it would be cool to sit down with others and
talk about basic roles [0], since we have everything to make that
possible. I suppose we could start collecting topics in an etherpad and
elaborating on them there.

[0] https://review.openstack.org/#/c/523973/
> There's likely some non-keystone-related things that might be at home
> in a dedicated BM/VM room too. Do we want to have a dedicated day or
> two for these projects? Or perhaps not dedicated days, but
> planned-in-advance meeting time? Or should we wait and schedule it
> ad-hoc if we feel like we need it?
>
> Colleen
>
> [1] 
> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
> [2] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
> [3] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
> [4] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-30 Thread Pavlo Shchelokovskyy
+1 to Jim,

I'm specifically interested in app creds and RBAC, as I'd like to find a way
to pass some of the API access creds to the ironic deploy ramdisk, and it
seems one of those could help. Let's discuss :)

Cheers,

On Tue, Jan 30, 2018 at 6:03 PM, Jim Rollenhagen 
wrote:

> On Tue, Jan 30, 2018 at 10:33 AM, Colleen Murphy 
> wrote:
>
>> At the last PTG we had some time on Monday and Tuesday for
>> cross-project discussions related to baremetal and VM management. We
>> don't currently have that on the schedule for this PTG. There is still
>> some free time available that we can ask for[1]. Should we try to
>> schedule some time for this?
>>
>
> I'd attend for the topics you list below, FWIW.
>
>
>>
>> From a keystone perspective, some things we'd like to talk about with
>> the BM/VM teams are:
>>
>> - Unified limits[2]: we now have a basic REST API for registering
>> limits in keystone. Next steps are building out libraries that can
>> consume this API and calculate quota usage and limit allocation, and
>> developing models for quotas in project hierarchies. Input from other
>> projects is essential here.
>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>> problem, and we'd like to guide other projects through the migration.
>> - Application credentials[4]: this main part of this work is largely
>> done, next steps are implementing better access control for it, which
>> is largely just a keystone team problem but we could also use this
>> time for feedback on the implementation so far
>>
>> There's likely some non-keystone-related things that might be at home
>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>> two for these projects? Or perhaps not dedicated days, but
>> planned-in-advance meeting time? Or should we wait and schedule it
>> ad-hoc if we feel like we need it?
>>
>
> There's always plenty to discuss between nova and ironic, but we usually
> just schedule those topics somewhat ad-hoc. Never opposed to some
> dedicated time if folks will show up, though. :)
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-30 Thread Jim Rollenhagen
On Tue, Jan 30, 2018 at 10:33 AM, Colleen Murphy 
wrote:

> At the last PTG we had some time on Monday and Tuesday for
> cross-project discussions related to baremetal and VM management. We
> don't currently have that on the schedule for this PTG. There is still
> some free time available that we can ask for[1]. Should we try to
> schedule some time for this?
>

I'd attend for the topics you list below, FWIW.


>
> From a keystone perspective, some things we'd like to talk about with
> the BM/VM teams are:
>
> - Unified limits[2]: we now have a basic REST API for registering
> limits in keystone. Next steps are building out libraries that can
> consume this API and calculate quota usage and limit allocation, and
> developing models for quotas in project hierarchies. Input from other
> projects is essential here.
> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
> problem, and we'd like to guide other projects through the migration.
> - Application credentials[4]: this main part of this work is largely
> done, next steps are implementing better access control for it, which
> is largely just a keystone team problem but we could also use this
> time for feedback on the implementation so far
>
> There's likely some non-keystone-related things that might be at home
> in a dedicated BM/VM room too. Do we want to have a dedicated day or
> two for these projects? Or perhaps not dedicated days, but
> planned-in-advance meeting time? Or should we wait and schedule it
> ad-hoc if we feel like we need it?
>

There's always plenty to discuss between nova and ironic, but we usually
just schedule those topics somewhat ad-hoc. Never opposed to some
dedicated time if folks will show up, though. :)

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-30 Thread Colleen Murphy
At the last PTG we had some time on Monday and Tuesday for
cross-project discussions related to baremetal and VM management. We
don't currently have that on the schedule for this PTG. There is still
some free time available that we can ask for[1]. Should we try to
schedule some time for this?

From a keystone perspective, some things we'd like to talk about with
the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering
limits in keystone. Next steps are building out libraries that can
consume this API and calculate quota usage and limit allocation, and
developing models for quotas in project hierarchies. Input from other
projects is essential here.
- RBAC: we've introduced "system scope"[3] to fix the admin-ness
problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: this main part of this work is largely
done, next steps are implementing better access control for it, which
is largely just a keystone team problem but we could also use this
time for feedback on the implementation so far
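
As a rough illustration of the system-scope item above, the client-side
request for a system-scoped token might look like the following sketch; it
assumes a keystoneauth1 release new enough to accept the system_scope
argument, and the user name, password, and auth URL are illustrative
placeholders:

from keystoneauth1 import identity
from keystoneauth1 import session

# Ask for a system-scoped token instead of a project-scoped one, so that
# policy can distinguish system-level administration from project-level
# administration.
auth = identity.Password(auth_url="http://controller:5000/v3",
                         username="operator",
                         password="secret",
                         user_domain_id="default",
                         system_scope="all")
sess = session.Session(auth=auth)
print(sess.get_token())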

There's likely some non-keystone-related things that might be at home
in a dedicated BM/VM room too. Do we want to have a dedicated day or
two for these projects? Or perhaps not dedicated days, but
planned-in-advance meeting time? Or should we wait and schedule it
ad-hoc if we feel like we need it?

Colleen

[1] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true
[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
[3] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[4] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-30 Thread James Penick
Big +1 to re-evaluating this. In my environment we have many users
deploying and managing a number of different apps in different tenants.
Some of our users, such as Yahoo Mail service engineers, could be in up to
40 different tenants. Those service engineers may change products as their
careers develop. Having to re-deploy part of an application stack because
Sally SE changed products would be unnecessarily disruptive.

 I regret that I missed the bus on this back in June. But at Oath we've
built a system (called Copper Argos) on top of Athenz (it's open source:
www.athenz.io) to provide instance identity in a way that is unique but
doesn't have all of the problems of a static persistent identity.

 The really really really* high level overview is:
1. Users pass application identity data to Nova as metadata during the boot
process.
2. Our vendor-data driver works with a service called HostSignd to validate
that data and create a one time use attestation document which is injected
into the instance's config drive.
3. On boot an agent within the instance will use that time-limited host
attestation document to identify itself to the Athenz identity service,
which will then exchange the document for a unique certificate containing
the application data passed in the boot call.
4. From then on the instance identity (TLS certificate) is periodically
exchanged by the agent for a new certificate.
5. The host attestation document and the instance TLS certificate can each
only be used a single time to exchange for another certificate. The
attestation document has a very short ttl, and the instance identity is set
to live slightly longer than the planned rotation frequency. So if you
rotate your certificates once an hour, the ttl on the cert should be 2
hours. This gives some wiggle room in the event the identity service is
down for any reason.

The agent is also capable of supporting SSH CA by passing the SSH host key
up to be re-signed whenever it exchanges the TLS certificate. All instances
leveraging Athenz identity can communicate to one another using TLS mutual
auth.
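
To make the timing in step 5 concrete, here is a generic sketch of the
agent's rotation loop; the exchange_cert callable is a hypothetical
placeholder rather than the actual Athenz/Copper Argos API, and the point is
only the relationship between rotation frequency, certificate TTL, and the
single-use exchange:

import time

ROTATION_INTERVAL = 3600          # rotate once an hour, as in the example above
CERT_TTL = 2 * ROTATION_INTERVAL  # cert lives twice as long as the rotation
                                  # interval, giving wiggle room if the
                                  # identity service is briefly down

def rotation_loop(exchange_cert, attestation_doc):
    # First exchange: trade the one-time host attestation document for an
    # initial certificate; afterwards, each certificate is itself used
    # (once) to obtain its replacement.
    current_cert = exchange_cert(attestation_doc)
    while True:
        time.sleep(ROTATION_INTERVAL)
        current_cert = exchange_cert(current_cert)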

If there's any interest, I'd be happy to go into more detail here on the ML
and/or at the summit in Sydney.

-James
* With several more zoolander-style Really's thrown in for good measure.


On Tue, Oct 10, 2017 at 12:34 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:

> Big +1 for reevaluating the bigger picture. We have a pile of APIs that
> together don't always form the most useful set of APIs, due to a lack of
> big-picture analysis.
>
> +1 to thinking through the dev/devops use case.
>
> Another case to really think over is the single user that != application
> developer: i.e., a pure user-type person deploying, in their own tenant, a
> cloud app written by a dev who is not employed by the user's company. The
> user shouldn't have to go to an operator to provision service accounts and
> other things. The app dev should be able to supply everything needed to
> let OpenStack launch, say, a heat template that provisions the service
> accounts for the user, without making the user twiddle the API themselves.
> It should be a "here, launch this" kind of thing: they fill out the heat
> form, and out pops a working app. If they have to go provision a bunch of
> stuff themselves before passing stuff to the form, game over. Likewise, if
> they have to look at YAML, game over. How do app credentials fit into this?
>
> Thanks,
> Kevin
>
> 
> From: Zane Bitter [zbit...@redhat.com]
> Sent: Monday, October 09, 2017 9:39 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone][nova] Persistent application
> credentials
>
> On 12/09/17 18:58, Colleen Murphy wrote:
> > While it's fresh in our minds, I wanted to write up a short recap of
> > where we landed in the Application Credentials discussion in the BM/VM
> > room today. For convenience the (as of yet unrevised) spec is here:
>
> Thanks so much for staying on this Colleen, it's tremendously helpful to
> have someone from the core team keeping an eye on it :)
>
> > http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/backlog/application-credentials.html
> >
> > Attached are images of the whiteboarded notes.
> >
> > On the contentious question of the lifecycle of an application
> > credential, we re-landed in the same place we found ourselves in when
> > the spec originally landed, which is that the credential becomes invalid
> > when its creating user is disabled or deleted. The risk involved in
> > allowing a credential to continue to be valid after its creating user
> > has been disabled is not really surmountable, and we are basically
> > giving up on this feature. The benefits we still get from not having to
> > embed user passwords in config files, especial

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-10 Thread Fox, Kevin M
Big +1 for reevaluating the bigger picture. We have a pile of APIs that
together don't always form the most useful set of APIs, due to a lack of
big-picture analysis.

+1 to thinking through the dev/devops use case.

Another case to really think over is the single user that != application
developer: i.e., a pure user-type person deploying, in their own tenant, a
cloud app written by a dev who is not employed by the user's company. The
user shouldn't have to go to an operator to provision service accounts and
other things. The app dev should be able to supply everything needed to let
OpenStack launch, say, a heat template that provisions the service accounts
for the user, without making the user twiddle the API themselves. It should
be a "here, launch this" kind of thing: they fill out the heat form, and out
pops a working app. If they have to go provision a bunch of stuff themselves
before passing stuff to the form, game over. Likewise, if they have to look
at YAML, game over. How do app credentials fit into this?

Thanks,
Kevin


From: Zane Bitter [zbit...@redhat.com]
Sent: Monday, October 09, 2017 9:39 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][nova] Persistent application credentials

On 12/09/17 18:58, Colleen Murphy wrote:
> While it's fresh in our minds, I wanted to write up a short recap of
> where we landed in the Application Credentials discussion in the BM/VM
> room today. For convenience the (as of yet unrevised) spec is here:

Thanks so much for staying on this Colleen, it's tremendously helpful to
have someone from the core team keeping an eye on it :)

> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/backlog/application-credentials.html
>
> Attached are images of the whiteboarded notes.
>
> On the contentious question of the lifecycle of an application
> credential, we re-landed in the same place we found ourselves in when
> the spec originally landed, which is that the credential becomes invalid
> when its creating user is disabled or deleted. The risk involved in
> allowing a credential to continue to be valid after its creating user
> has been disabled is not really surmountable, and we are basically
> giving up on this feature. The benefits we still get from not having to
> embed user passwords in config files, especially for LDAP or federated
> users, is still a vast improvement over the situation today, as is the
> ability to rotate credentials.

OK, there were lots of smart people in the room so I trust that y'all
made the right decision.

I'd just like to step back for a moment though and ask: how exactly do
we expect users to make use of Keystone?

When I think about a typical OpenStack user of the near future, they look
something like this: there's a team with a handful of developers,
with maybe one or two devops engineers. This team is responsible for a
bunch of applications, at various stages in their lifecycles. They work
in a department with several such teams, in an organisation with several
such departments. People regularly join or leave the team - whether
because they join or leave the organisation or just transfer between
different teams. The applications are deployed with Heat and are at
least partly self-managing (e.g. they use auto-scaling, or auto-healing,
or have automated backups, or all of the above), but also require
occasional manual intervention (beyond just a Heat stack-update). The
applications may be deployed to a private OpenStack cloud, a public
OpenStack cloud, or both, with minimal differences in how they work when
moving back and forth.

(Obviously the beauty of Open Source is that we don't think about only
one set of users. But I think if we can serve this set of users as a
baseline then we have built something pretty generically useful.)

So my question is: how do we recommend these users use Keystone? We
definitely _can_ support them. But the most workable way I can think of
would be to create a long-lived application user account for each
project in LDAP/ActiveDirectory/whatever and have that account manage
the application. Then things will work basically the same way in the
public cloud, where you also get a user per project. Hopefully some
auditability is maintained by having Jenkins/Zuul/Solum/whatever do the
pushing of changes to Heat, although realistically many users will not
be that sophisticated. Once we have application credentials, the folks
doing manual intervention would be able to do so in the same way on
public clouds as on private clouds, without being given the account
credentials.

Some observations about this scenario:
* The whole user/role infrastructure is completely unused - 'Users' are
1:1 with projects. We might as well not have built it.
* Having Keystone backed by LDAP/ActiveDirectory is arguably worse than
useless - it just means there are two different places to set things up
when creating a project and

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-10 Thread Colleen Murphy
On Mon, Oct 9, 2017 at 6:39 PM, Zane Bitter  wrote:

> On 12/09/17 18:58, Colleen Murphy wrote:
>
>> While it's fresh in our minds, I wanted to write up a short recap of
>> where we landed in the Application Credentials discussion in the BM/VM room
>> today. For convenience the (as of yet unrevised) spec is here:
>>
>
> Thanks so much for staying on this Colleen, it's tremendously helpful to
> have someone from the core team keeping an eye on it :)

No problem :)

>
>
> http://specs.openstack.org/openstack/keystone-specs/specs/ke
>> ystone/backlog/application-credentials.html
>>
>> Attached are images of the whiteboarded notes.
>>
>> On the contentious question of the lifecycle of an application
>> credential, we re-landed in the same place we found ourselves in when the
>> spec originally landed, which is that the credential becomes invalid when
>> its creating user is disabled or deleted. The risk involved in allowing a
>> credential to continue to be valid after its creating user has been
>> disabled is not really surmountable, and we are basically giving up on this
>> feature. The benefits we still get from not having to embed user passwords
>> in config files, especially for LDAP or federated users, is still a vast
>> improvement over the situation today, as is the ability to rotate
>> credentials.
>>
>
> OK, there were lots of smart people in the room so I trust that y'all made
> the right decision.
>
> I'd just like to step back for a moment though and ask: how exactly do we
> expect users to make use of Keystone?
>
> When I think about a typical OpenStack user of the near future, they look
> something like this: there's a team with a handful of developers, with
> maybe one or two devops engineers. This team is responsible for a bunch of
> applications, at various stages in their lifecycles. They work in a
> department with several such teams, in an organisation with several such
> departments. People regularly join or leave the team - whether because they
> join or leave the organisation or just transfer between different teams.
> The applications are deployed with Heat and are at least partly
> self-managing (e.g. they use auto-scaling, or auto-healing, or have
> automated backups, or all of the above), but also require occasional manual
> intervention (beyond just a Heat stack-update). The applications may be
> deployed to a private OpenStack cloud, a public OpenStack cloud, or both,
> with minimal differences in how they work when moving back and forth.
>
> (Obviously the beauty of Open Source is that we don't think about only one
> set of users. But I think if we can serve this set of users as a baseline
> then we have built something pretty generically useful.)
>
> So my question is: how do we recommend these users use Keystone? We
> definitely _can_ support them. But the most workable way I can think of
> would be to create a long-lived application user account for each project
> in LDAP/ActiveDirectory/whatever and have that account manage the
> application. Then things will work basically the same way in the public
> cloud, where you also get a user per project. Hopefully some auditability
> is maintained by having Jenkins/Zuul/Solum/whatever do the pushing of
> changes to Heat, although realistically many users will not be that
> sophisticated. Once we have application credentials, the folks doing manual
> intervention would be able to do so in the same way on public clouds as on
> private clouds, without being given the account credentials.
>
> Some observations about this scenario:
> * The whole user/role infrastructure is completely unused - 'Users' are
> 1:1 with projects. We might as well not have built it.
> * Having Keystone backed by LDAP/ActiveDirectory is arguably worse than
> useless - it just means there are two different places to set things up
> when creating a project and an extra layer of indirection. (I won't say we
> might as well not have built it, because many organisations have compliance
> rules that, ahem, well let's just say they were developed in a different
> context :)
> * We're missing an essential feature of cloud (or even of VPSs): you
> shouldn't need to raise a ticket with IT to be able to deploy a new
> application. Any involvement from them should be asynchronous (e.g. setting
> quotas - although even that is an OpenStack-specific thing: in public
> clouds excessive use is discouraged by _billing_ and in non-OpenStack
> clouds users set their _own_ quotas); we don't want humans in the loop.
> * AFAIK it's not documented anywhere that this is the way we expect you to
> use OpenStack. Anybody would think it's all about the Users and Roles.
>
Another observation - if all the members of the team know the application
user's username and password (which they must, in order to use it to create
application credentials), then the team members who leave the team would
continue to have access to everything the application user has access to,
and 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-09 Thread Zane Bitter

On 12/09/17 18:58, Colleen Murphy wrote:
While it's fresh in our minds, I wanted to write up a short recap of 
where we landed in the Application Credentials discussion in the BM/VM 
room today. For convenience, the (as yet unrevised) spec is here:


Thanks so much for staying on this Colleen, it's tremendously helpful to 
have someone from the core team keeping an eye on it :)



http://specs.openstack.org/openstack/keystone-specs/specs/keystone/backlog/application-credentials.html

Attached are images of the whiteboarded notes.

On the contentious question of the lifecycle of an application 
credential, we re-landed in the same place we found ourselves in when 
the spec originally landed, which is that the credential becomes invalid 
when its creating user is disabled or deleted. The risk involved in 
allowing a credential to continue to be valid after its creating user 
has been disabled is not really surmountable, and we are basically 
giving up on this feature. The benefits we still get from not having to 
embed user passwords in config files, especially for LDAP or federated 
users, are still a vast improvement over the situation today, as is the 
ability to rotate credentials.
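
(As a concrete illustration, and purely as a sketch: consuming such a
credential from an application could look roughly like the following,
assuming a keystoneauth plugin along the lines of
keystoneauth1.identity.v3.ApplicationCredential; the URL, ID and secret
are illustrative.)

    # Sketch only: authenticate with an application credential instead of
    # embedding a user's LDAP/federated password in a config file.
    from keystoneauth1 import session
    from keystoneauth1.identity.v3 import ApplicationCredential

    auth = ApplicationCredential(
        auth_url="https://keystone.example.com:5000/v3",
        application_credential_id="21dced0fd20347869b93710d2b98aae0",  # illustrative
        application_credential_secret="rotate-me-regularly",           # illustrative
    )
    sess = session.Session(auth=auth)
    # sess can now be handed to any service client (nova, glance, ...);
    # rotating the credential never requires touching the user's password.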


OK, there were lots of smart people in the room so I trust that y'all 
made the right decision.


I'd just like to step back for a moment though and ask: how exactly do 
we expect users to make use of Keystone?


When I think about a typical OpenStack user of the near future, they 
look something like this: there's a team with a handful of developers, 
with maybe one or two devops engineers. This team is responsible for a 
bunch of applications, at various stages in their lifecycles. They work 
in a department with several such teams, in an organisation with several 
such departments. People regularly join or leave the team - whether 
because they join or leave the organisation or just transfer between 
different teams. The applications are deployed with Heat and are at 
least partly self-managing (e.g. they use auto-scaling, or auto-healing, 
or have automated backups, or all of the above), but also require 
occasional manual intervention (beyond just a Heat stack-update). The 
applications may be deployed to a private OpenStack cloud, a public 
OpenStack cloud, or both, with minimal differences in how they work when 
moving back and forth.


(Obviously the beauty of Open Source is that we don't think about only 
one set of users. But I think if we can serve this set of users as a 
baseline then we have built something pretty generically useful.)


So my question is: how do we recommend these users use Keystone? We 
definitely _can_ support them. But the most workable way I can think of 
would be to create a long-lived application user account for each 
project in LDAP/ActiveDirectory/whatever and have that account manage 
the application. Then things will work basically the same way in the 
public cloud, where you also get a user per project. Hopefully some 
auditability is maintained by having Jenkins/Zuul/Solum/whatever do the 
pushing of changes to Heat, although realistically many users will not 
be that sophisticated. Once we have application credentials, the folks 
doing manual intervention would be able to do so in the same way on 
public clouds as on private clouds, without being given the account 
credentials.


Some observations about this scenario:
* The whole user/role infrastructure is completely unused - 'Users' are 
1:1 with projects. We might as well not have built it.
* Having Keystone backed by LDAP/ActiveDirectory is arguably worse than 
useless - it just means there are two different places to set things up 
when creating a project and an extra layer of indirection. (I won't say 
we might as well not have built it, because many organisations have 
compliance rules that, ahem, well let's just say they were developed in 
a different context :)
* We're missing an essential feature of cloud (or even of VPSs): you 
shouldn't need to raise a ticket with IT to be able to deploy a new 
application. Any involvement from them should be asynchronous (e.g. 
setting quotas - although even that is an OpenStack-specific thing: in 
public clouds excessive use is discouraged by _billing_ and in 
non-OpenStack clouds users set their _own_ quotas); we don't want humans 
in the loop.
* AFAIK it's not documented anywhere that this is the way we expect you 
to use OpenStack. Anybody would think it's all about the Users and Roles.


Perhaps someone can suggest a better scenario for this group of users? I 
can't think of one that doesn't involve radical differences between 
public & private clouds (something we're explicitly trying to prevent, 
according to our mission statement), and/or risk total application 
breakage when personnel change.


My worry is that in this and other areas, there's a disconnect between 
the needs of the people whom we say we're building OpenStack for and 
what we're actually building. 

Re: [openstack-dev] [keystone] [nova] [neutron] [cinder] [ironic] [glance] [swift] Baremetal/VM SIG PTG Schedule/Etherpad

2017-09-10 Thread Lance Bragstad
Looks like the Baremetal/VM SIG (#compute) will meet in Ballroom B,
Banquet Level. I've updated the etherpad with the room information [0].

[0] https://etherpad.openstack.org/p/queens-PTG-vmbm


On 09/07/2017 10:01 AM, Lance Bragstad wrote:
> I spoke with John a bit today in IRC and we have a rough schedule worked
> out for the Baremetal/VM SIG. All the sessions/ideas/carry-over topics
> from Boston have been filtered into a schedule, which is available in the
> etherpad [0].
>
> Each entry should have a "lead" to drive the discussion and a "goal" to
> work towards. I took a stab at listing leads and goals accordingly, but
> if you're more familiar with the topics, please adjust as necessary. If
> you noticed a conflict with another session, feel free to respond here,
> leave a comment in the etherpad, or ping me on IRC. I know it's a bit
> late, but I'd like to have the schedule pretty well set by the weekend.
>
> Thanks!
>
>
> [0] https://etherpad.openstack.org/p/queens-PTG-vmbm
>
> On 08/24/2017 03:34 PM, Lance Bragstad wrote:
>> Hi all,
>>
>> Keystone has a few cross-project topics we'd like to share with a wider
>> group, like the Baremetal/VM SIG. As a result, I attempted to dust off
>> some of the Baremetal/VM sessions [0][1] from Boston and port the
>> popular topics over to the etherpad for the PTG [2]. Maybe it will kick
>> start some discussions before we get there?
>>
>> John has more insight into this than I do, but I'm curious if we plan to
>> have a rough schedule for Monday and Tuesday? I'm happy to help
>> coordinate or shuffle bits for the baremetal/VM group if ideas come up here.
>>
>>
>> [0] https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal
>> [1] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
>> [2] https://etherpad.openstack.org/p/queens-PTG-vmbm
>>
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [neutron] [cinder] [ironic] [glance] [swift] Baremetal/VM SIG PTG Schedule/Etherpad

2017-09-07 Thread Lance Bragstad
I spoke with John a bit today in IRC and we have a rough schedule worked
out for the Baremetal/VM SIG. All the sessions/ideas/carry-over topics
from Boston have been filtered into a schedule, which is available in the
etherpad [0].

Each entry should have a "lead" to drive the discussion and a "goal" to
work towards. I took a stab at listing leads and goals accordingly, but
if you're more familiar with the topics, please adjust as necessary. If
you noticed a conflict with another session, feel free to respond here,
leave a comment in the etherpad, or ping me on IRC. I know it's a bit
late, but I'd like to have the schedule pretty well set by the weekend.

Thanks!


[0] https://etherpad.openstack.org/p/queens-PTG-vmbm

On 08/24/2017 03:34 PM, Lance Bragstad wrote:
> Hi all,
>
> Keystone has a few cross-project topics we'd like to share with a wider
> group, like the Baremetal/VM SIG. As a result, I attempted to dust off
> some of the Baremetal/VM sessions [0][1] from Boston and port the
> popular topics over to the etherpad for the PTG [2]. Maybe it will kick
> start some discussions before we get there?
>
> John has more insight into this than I do, but I'm curious if we plan to
> have a rough schedule for Monday and Tuesday? I'm happy to help
> coordinate or shuffle bits for the baremetal/VM group if ideas come up here.
>
>
> [0] https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal
> [1] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
> [2] https://etherpad.openstack.org/p/queens-PTG-vmbm
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [nova] [neutron] [cinder] [ironic] [glance] [swift] Baremetal/VM SIG PTG Schedule/Etherpad

2017-08-24 Thread Lance Bragstad
Hi all,

Keystone has a few cross-project topics we'd like to share with a wider
group, like the Baremetal/VM SIG. As a result, I attempted to dust off
some of the Baremetal/VM sessions [0][1] from Boston and port the
popular topics over to the etherpad for the PTG [2]. Maybe it will kick
start some discussions before we get there?

John has more insight into this than I do, but I'm curious if we plan to
have a rough schedule for Monday and Tuesday? I'm happy to help
coordinate or shuffle bits for the baremetal/VM group if ideas come up here.


[0] https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal
[1] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
[2] https://etherpad.openstack.org/p/queens-PTG-vmbm



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-21 Thread Brant Knudson
On Thu, Jul 20, 2017 at 8:02 PM, Zane Bitter  wrote:

>
> * If Keystone supported either a public-key or a Kerberos-style
> authentication mechanism to get a token


Keystone (via support for accepting authentication from the web server
hosting it) can be configured to accept X.509 and kerberos, see
http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/
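
(For reference, a rough client-side sketch, assuming keystoneauth1 is
installed with its kerberos extra and `kinit` has already obtained a ticket;
the endpoint URL is illustrative.)

    # Sketch: getting a token via Kerberos, so the application never
    # handles a password at all.
    from keystoneauth1 import session
    from keystoneauth1.extras.kerberos import Kerberos

    auth = Kerberos(auth_url="https://keystone.example.com:5000/krb/v3")
    sess = session.Session(auth=auth)
    print(sess.get_token())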

-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Zane Bitter

On 19/07/17 23:19, Monty Taylor wrote:


Instance users do not solve this. Instance users can be built with this- 
but instance users are themselves not sufficient. Instance users are 
only sufficient in single-cloud ecosystems where it is possible to grant 
permissions on all the resources in the single-cloud ecosystem to an 
instance. We are not a single-cloud ecosystem.


Good point. Actually, nobody lives in a single-cloud ecosystem any more. 
So the 'public' side of any hybrid-cloud arrangement (including the big 
3 public clouds, not just OpenStack) will always need a way to deal with 
this.


Nodepool runs in Rackspace's DFW region. It has accounts across nine 
different clouds. If this were only solved with Instance users we'd have 
to boot a VM in each cloud so that we could call the publicly-accessible 
REST APIs of the clouds to boot VMs in each cloud.


I'm glad you're here, because I don't spend a lot of time thinking about 
such use cases (if we can get cloud applications to work on even one 
cloud then I can retire to my goat farm happy) and this one would have 
escaped me :)


So let's boil this down to 4 types of 'users' who need to authenticate 
to a given cloud:


1) Actual, corporeal humans
2) Services that are part of the cloud itself (e.g. autoscaling)
3) Hybrid-cloud applications running elsewhere (e.g. nodepool)
4) Applications running in the cloud

Looking at how AWS handles these cases AIUI:

1) For each tenant there is a 'root' account with access to billing. 
Best practice is not to create API credentials for this account at all. 
Instead, you create IAM Users for all of the humans who need to access 
the tenant and give permissions to them (bootstrapped by the root 
account) using IAM Policies. To make management easier, you can 
aggregate Users into Groups. If a user leaves the organisation, you 
delete their IAM User. If the owner leaves the organisation, somebody 
else becomes the owner and you rotate the root password.
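
(A minimal boto3 sketch of that workflow, for comparison; it assumes admin
credentials are already configured, and the names and managed policy are
illustrative.)

    import boto3

    iam = boto3.client("iam")
    iam.create_group(GroupName="app-team")
    iam.attach_group_policy(GroupName="app-team",
                            PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess")
    iam.create_user(UserName="alice")
    iam.add_user_to_group(GroupName="app-team", UserName="alice")
    # When alice leaves, deleting her IAM user revokes her access without
    # touching the root account or anyone else's credentials:
    # iam.remove_user_from_group(GroupName="app-team", UserName="alice")
    # iam.delete_user(UserName="alice")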


2) Cloud services can be named as principals in IAM policies, so 
permissions can be given to them in the same way that they are to human 
users.


3) You create an IAM User for the application and give it the 
appropriate permissions. The credential they get is actually a private 
key, not a password, so in theory you could store it in an HSM that just 
signs stuff with it and not provide it directly to the application. 
Otherwise, the credentials are necessarily disclosed to the team 
maintaining the application. If somebody who has/had access to private 
key leaves, you need to rotate the credentials. It's possible to 
automate the mechanics of this, but ultimately it has to be triggered by 
a human using their own credentials otherwise it's turtles all the way 
down. The AWS cloud has no way of ensuring that you rotate the 
credentials at appropriate times, or even knowing when those times are.


4) Instance users. You can give permissions to a VM that you have 
created in the cloud. It automatically receives credentials in its 
metadata. The credentials expire quite rapidly and are automatically 
replaced with new ones, also accessible through the metadata server. The 
application just reads the latest credentials from metadata and uses 
them. If someone leaves the organisation, you don't care. If an attacker 
breaches your server, the damage is limited to a relatively short window 
once you've evicted them again. There's no way to do the Wrong Thing 
even if you're trying.
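
(An illustrative sketch of what the application side of that looks like; the
metadata endpoint and JSON fields follow AWS's documented instance-profile
mechanism, and the role name is made up.)

    import requests

    base = "http://169.254.169.254/latest/meta-data/iam/security-credentials"
    role = requests.get(base).text.splitlines()[0]        # e.g. "app-server"
    creds = requests.get("%s/%s" % (base, role)).json()
    access_key = creds["AccessKeyId"]
    secret_key = creds["SecretAccessKey"]
    token = creds["Token"]
    # The application simply re-reads this endpoint before creds["Expiration"];
    # no human ever sees, stores or rotates the keys.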


And in OpenStack:

1) Works great provided you only have one user per project. Your 
password may, and probably will, be shared with your billing account 
(public cloud), or will be shared with pretty much your whole life 
(private cloud). If multiple humans need to work on the project, you'll 
generally need to share passwords or do something out-of-band to set it 
up (e.g. open a ticket with IT). If somebody leaves the organisation, 
same deal.


Application credentials could greatly improve this in the public cloud 
scenario.


2) Cloud services can create trusts that allow them to act on behalf of 
a particular user. If that user leaves the organisation, your 
application is hosed until someone else redeploys it to get a new trust.
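
(For readers unfamiliar with trusts, a rough python-keystoneclient sketch;
the trustee ID and role name are illustrative.)

    from keystoneauth1 import identity, session
    from keystoneclient.v3 import client

    sess = session.Session(auth=identity.Password(
        auth_url="https://keystone.example.com:5000/v3",
        username="alice", password="...",
        project_name="demo", user_domain_id="default",
        project_domain_id="default"))
    keystone = client.Client(session=sess)

    # Alice (trustor) delegates one of her roles on the project to the Heat
    # service user (trustee). If Alice's user is disabled or deleted, the
    # trust - and everything relying on it - stops working.
    keystone.trusts.create(trustor_user=sess.get_user_id(),
                           trustee_user="HEAT_SERVICE_USER_ID",  # illustrative
                           project=sess.get_project_id(),
                           role_names=["heat_stack_owner"],      # illustrative
                           impersonation=True)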


Persistent application credentials could potentially replace trusts and 
solve this problem, although they'd need to be stored somewhere more 
secure (i.e. Barbican) than trust IDs are currently stored. A better 
solution might be to allow the service user to be granted permissions by 
the forthcoming fine-grained authorisation mechanism (independently of 
an application credential) - but this would require changes to the 
Keystone policies, because it would currently be blocked by the 
Scoped-RBAC system.


3) The credentials are necessarily disclosed to the team maintaining the 
application. Your password may, and probably will, be shared with your 
billing account. If somebody leaves the organisation, you have to rotate 
the password. This 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Zane Bitter

On 19/07/17 22:27, Monty Taylor wrote:
I propose we set aside time at the PTG to dig in to this. Between Zane 
and I and the Keystone core team I have confidence we can find a way out.


This may be a bad time to mention that regrettably I won't be attending 
the PTG, due to (happy!) family reasons.


It sounds like you and I are on the same page already in terms of the 
requirements though. I'm fairly relaxed about what the solution looks 
like, as long as we actually address those requirements.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Lance Bragstad


On 07/19/2017 09:27 PM, Monty Taylor wrote:
> On 07/19/2017 12:18 AM, Zane Bitter wrote:
>> On 18/07/17 10:55, Lance Bragstad wrote:

 Would Keystone folks be happy to allow persistent credentials once
 we have a way to hand out only the minimum required privileges?


 If I'm understanding correctly, this would make application
 credentials dependent on several cycles of policy work. Right?
>>>
>>> I think having the ability to communicate deprecations though
>>> oslo.policy would help here. We could use it to move towards better
>>> default roles, which requires being able to set minimum privileges.
>>>
>>> Using the current workflow requires operators to define the minimum
>>> privileges for whatever is using the application credential, and
>>> work that into their policy. Is that the intended workflow that we
>>> want to put on the users and operators of application credentials?
>>
>> The plan is to add an authorisation mechanism that is user-controlled
>> and independent of the (operator-controlled) policy. The beginnings
>> of this were included in earlier drafts of the spec, but were removed
>> in patch set 19 in favour of leaving them for a future spec:
>>
>> https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst
>
>
> Yes - that's right - and I expect to start work on that again as soon
> as this next keystoneauth release with version discovery is out the door.
>
> It turns out there are different POVs on this topic, and it's VERY
> important to be clear which one we're talking about at any given point
> in time. A bunch of the confusion just in getting as far as we've
> gotten so far came from folks saying words like "policy" or "trusts"
> or "ACLs" or "RBAC" - but not clarifying which group of cloud users
> they were discussing and from what context.
>
> The problem that Zane and I are discussing and advocating for is the one
> facing UNPRIVILEGED users who neither deploy nor operate the cloud but
> who use the cloud to run applications.
>
> Unfortunately, neither the current policy system nor trusts are useful
> in any way shape or form for those humans. Policy and trusts are tools
> for cloud operators to take a certain set of actions.
>
> Similarly, the concern from the folks who are not in favor of
> project-lifecycled application credentials is the one that Zane
> outlined - that there will be $someone with access to those
> credentials after a User change event, and thus $security will be
> compromised.
>
> There is a balance that can and must be found. The use case Zane and I
> are talking about is ESSENTIAL, and literally every single human who is
> actually using OpenStack to run applications needs it. They needed it
> last year, in fact, and they are already doing things like writing
> ssh-agent-like daemons in which they can store their corporate LDAP
> credentials so that their automation will work because we're not
> giving them a workable option.
>
> That said, the concerns about not letting a thing out the door that is
> insecure by design like PHP4's globally scoped URL variables is also
> super important.
>
> So we need to find a design that meets both goals.
>
> I have thoughts on the topic, but have been holding off until
> version-discovery is out the door. My hunch is that, like application
> credentials, we're not going to make significant headway without
> getting humans in the room - because the topic is WAY too fraught with
> peril.
>
> I propose we set aside time at the PTG to dig in to this. Between Zane
> and I and the Keystone core team I have confidence we can find a way out.

Done. I've added this thread to keystone's planning etherpad under
cross-project things we need to talk about [0]. Feel free to elaborate
and fill in context as you see fit. I'll make sure the content makes
it's way into a dedicated etherpad before we have that discussion
(usually as I go through each topic and plan the schedule).


[0] https://etherpad.openstack.org/p/keystone-queens-ptg

>
> Monty
>
> PS. It will not help to solve limited-scope before we solve this.
> Limited scope is an end-user opt-in action and having it does not
> remove the concerns that have been expressed.
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-20 Thread Sean Dague
On 07/19/2017 10:00 PM, Adrian Turjak wrote:
> The problem is then entirely procedural within a team. Do they rotate
> all keys when one person leaves? Anything less is the same problem. All
> we can do is make rotation less of a pain, but it will still be painful
> no matter what, and depending on the situation the team makes the choice
> of how to handle rotation if at all.
> 
> The sole reason for project level ownership of these application
> credentials is so that a user leaving/being deleted isn't a scramble to
> replace keys, and a team has the option/time to do it if they care about
> the possibility of that person having known the keys (again, not our
> problem, not a security flaw in code). Anything else, pretty much makes
> this feature useless for teams. :(
> 
> Having both options (owned by project vs user) is useful, but the
> 'security issues' are kind of implied by using project owned app creds.
> It's a very useful feature with some 'use at your own risk' attached.

I think this is a pretty good summary.

In many situations, removing people from projects 
(termination) will also physically remove their path to said clouds (as
they are beyond the firewall). It's not true with public clouds, but
it's not making the situation any worse, because right now it's shared
passwords to accounts.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-19 Thread Monty Taylor

On 07/19/2017 12:11 AM, Zane Bitter wrote:

On 17/07/17 23:12, Lance Bragstad wrote:

Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


My thought here was that if this were the case (i.e. persistent 
credentials are OK provided the user can lock down the privileges) then 
you could make a case that the current spec is on the right track. For 
now we implement the application credentials as non-persistent, people 
who know about it use at their own risk, and for people who don't 
there's no exposure. Later on we add the authorisation stuff and relax 
the non-persistence requirement.


On further reflection, I'm not convinced by this - if we care about 
protecting people who don't intentionally use/know about the feature 
now, then we should probably still care once the tools are in place for 
the people who are using it intentionally to lock it down tightly.


So I'm increasingly convinced that we need to do one of two things. Either:

* Agree with Colleen (elsewhere in the thread) that persistent 
application credentials are still better than the status quo and 
reinstate the project-scoped lifecycle in accordance with the original 
intent of the spec; or


* Agree that the concerns raised by Morgan & Adam will always apply, and 
look for a solution that gives us automatic key rotation - which might 
be quite different in shape (I can elaborate if necessary).


(That said, I chatted about this briefly with Monty yesterday and he 
said that his recollection was that there is a long-term solution that 
will keep everyone happy. He'll try to remember what it is once he's 
finished on the version discovery stuff he's currently working on.)


Part of this comes down to the fact that there are actually multiple 
scenarios, and persistent credentials only apply to a scenario 
that typically requires a human with elevated credentials.


SO - I think we can get a long way forward by divvying up some 
responsibilities clearly.


What I mean is:

* The simple consume case ("typical public cloud") is User-per-Project 
with User lifecycle tied to Project lifecycle. In this case the idea of 
a 'persistent' credential is meaningless, because there is no 'other' 
User with access to the Project. If the User in this scenario creates a 
Credential, it doesn't actually matter what the Credential lifecycle is, 
because the act of closing the Account is ultimately about disabling or 
deleting access to the Project in question. We can and should help the 
folks who are running clouds in this model with $something (we need to 
talk details) so that if they are running in this model they don't 
accidentally or by default leave a door open when they think they've 
disabled someone's User as part of shutting off their Account. But in 
this scenario OpenStack adding project-persistent credentials is not a 
big deal - it doesn't provide value. (For a User in that scenario, who 
typically does not have the Role to create a new User, being able to 
manage Application Credentials is a HUGE win.)


* The other scenario is where there is more than one Human who has a 
User that have been granted Roles on a Project. This is the one where 
project-lifecycle credentials are meaningful and valuable, but it's also 
one that involves some Human with elevated admin-style privileges having 
been involved at some point because that is required to assign Users 
Roles in the Project in the first place.


I believe if we divide application credentials into two kinds:

1) Application Credentials with lifecycle tied to User
2) Application Credentials with lifecycle tied to Project

Then I think it's ok for the ability to do (2) to require a specific 
Role in policy. If we do that, then whatever Human it is that is mapping 
multiple Users into a single Project can decide whether any of those 
Users should be granted the ability to make Project-lifecycle 
Application Credentials. Such a Human is already a Human who has a User 
with elevated permissions, as you have to be to assign Roles to Users on 
Projects.
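
(To make that concrete, and purely hypothetically - no such rules exist
today - the gate could be expressed as extra oslo.policy defaults. The rule
names, check strings and the role below are all invented for illustration.)

    from oslo_policy import policy

    rules = [
        # any project member may create a credential tied to their own User
        policy.RuleDefault(
            name="identity:create_application_credential",
            check_str="user_id:%(user_id)s"),
        # only holders of a dedicated role may create project-lifecycle ones
        policy.RuleDefault(
            name="identity:create_project_application_credential",
            check_str="role:app_cred_admin and project_id:%(target.project_id)s"),
    ]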


In any case, as I mentioned in the other mail, I think there are a bunch 
of details here that are going to require us being in the room - and 
everyone realizing that everyones use cases and everyones concerns are 
important. If we dig in, I'm sure we can come out on the other side with 
happiness and joy.




I'm trying to avoid taking a side here because everyone is right. 

++

Currently anybody who wants to do anything remotely 'cloudy' (i.e. have 
the application talk to OpenStack APIs) has to either share their 
personal password with the application (and by extension their whole 


Or - create an account in the Team's name and by storing the password 
for that account realizing that everyone on the team has access to the 
password so 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-19 Thread Monty Taylor

On 07/19/2017 12:18 AM, Zane Bitter wrote:

On 18/07/17 10:55, Lance Bragstad wrote:


Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


I think having the ability to communicate deprecations though 
oslo.policy would help here. We could use it to move towards better 
default roles, which requires being able to set minimum privileges.


Using the current workflow requires operators to define the minimum 
privileges for whatever is using the application credential, and work 
that into their policy. Is that the intended workflow that we want to 
put on the users and operators of application credentials?


The plan is to add an authorisation mechanism that is user-controlled 
and independent of the (operator-controlled) policy. The beginnings of 
this were included in earlier drafts of the spec, but were removed in 
patch set 19 in favour of leaving them for a future spec:


https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst 


Yes - that's right - and I expect to start work on that again as soon as 
this next keystoneauth release with version discovery is out the door.


It turns out there are different POVs on this topic, and it's VERY 
important to be clear which one we're talking about at any given point 
in time. A bunch of the confusion just in getting as far as we've gotten 
so far came from folks saying words like "policy" or "trusts" or "ACLs" 
or "RBAC" - but not clarifying which group of cloud users they were 
discussing and from what context.


The problem that Zane and I are discussing and advocating for is the one 
facing UNPRIVILEGED users who neither deploy nor operate the cloud but who 
use the cloud to run applications.


Unfortunately, neither the current policy system nor trusts are useful 
in any way shape or form for those humans. Policy and trusts are tools 
for cloud operators to take a certain set of actions.


Similarly, the concern from the folks who are not in favor of 
project-lifecycled application credentials is the one that Zane outlined 
- that there will be $someone with access to those credentials after a 
User change event, and thus $security will be compromised.


There is a balance that can and must be found. The use case Zane and I 
are talking about is ESSENTIAL, and literally every single human who is 
actually using OpenStack to run applications needs it. They needed it last 
year, in fact, and they are already doing things like writing 
ssh-agent-like daemons in which they can store their corporate LDAP credentials so 
that their automation will work because we're not giving them a workable 
option.


That said, the concerns about not letting a thing out the door that is 
insecure by design like PHP4's globally scoped URL variables is also 
super important.


So we need to find a design that meets both goals.

I have thoughts on the topic, but have been holding off until 
version-discovery is out the door. My hunch is that, like application 
credentials, we're not going to make significant headway without getting 
humans in the room - because the topic is WAY too fraught with peril.


I propose we set aside time at the PTG to dig in to this. Between Zane 
and I and the Keystone core team I have confidence we can find a way out.


Monty

PS. It will not help to solve limited-scope before we solve this. 
Limited scope is an end-user opt-in action and having it does not remove 
the concerns that have been expressed.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Zane Bitter

On 18/07/17 10:55, Lance Bragstad wrote:


Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application 
credentials dependent on several cycles of policy work. Right?


I think having the ability to communicate deprecations though 
oslo.policy would help here. We could use it to move towards better 
default roles, which requires being able to set minimum privileges.


Using the current workflow requires operators to define the minimum 
privileges for whatever is using the application credential, and work 
that into their policy. Is that the intended workflow that we want to 
put on the users and operators of application credentials?


The plan is to add an authorisation mechanism that is user-controlled 
and independent of the (operator-controlled) policy. The beginnings of 
this were included in earlier drafts of the spec, but were removed in 
patch set 19 in favour of leaving them for a future spec:


https://review.openstack.org/#/c/450415/18..19/specs/keystone/pike/application-credentials.rst

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Zane Bitter

On 17/07/17 23:12, Lance Bragstad wrote:

Would Keystone folks be happy to allow persistent credentials once
we have a way to hand out only the minimum required privileges?


If I'm understanding correctly, this would make application credentials 
dependent on several cycles of policy work. Right?


My thought here was that if this were the case (i.e. persistent 
credentials are OK provided the user can lock down the privileges) then 
you could make a case that the current spec is on the right track. For 
now we implement the application credentials as non-persistent, people 
who know about it use at their own risk, and for people who don't 
there's no exposure. Later on we add the authorisation stuff and relax 
the non-persistence requirement.


On further reflection, I'm not convinced by this - if we care about 
protecting people who don't intentionally use/know about the feature 
now, then we should probably still care once the tools are in place for 
the people who are using it intentionally to lock it down tightly.


So I'm increasingly convinced that we need to do one of two things. Either:

* Agree with Colleen (elsewhere in the thread) that persistent 
application credentials are still better than the status quo and 
reinstate the project-scoped lifecycle in accordance with the original 
intent of the spec; or


* Agree that the concerns raised by Morgan & Adam will always apply, and 
look for a solution that gives us automatic key rotation - which might 
be quite different in shape (I can elaborate if necessary).


(That said, I chatted about this briefly with Monty yesterday and he 
said that his recollection was that there is a long-term solution that 
will keep everyone happy. He'll try to remember what it is once he's 
finished on the version discovery stuff he's currently working on.)



I'm trying to avoid taking a side here because everyone is right. 
Currently anybody who wants to do anything remotely 'cloudy' (i.e. have 
the application talk to OpenStack APIs) has to either share their 
personal password with the application (and by extension their whole 
team) or do the thing that is the polar opposite of cloud: file a 
ticket with IT to get a service user account added, and share that 
password instead. And this really is a disaster for 
OpenStack. On the other hand, allowing the creation of persistent 
application credentials in the absence of regular automatic rotation 
does create risk for those folks who are not aggressively auditing them 
(perhaps because they have no legitimate use for them), and the result is 
likely to be lots of clouds disabling them by policy, keeping their 
users in the dark age of IT-ticket-filing and frustrating 
our interoperability goals.


It is possible in theory to satisfy both via the 'instance users' 
concept, but the Nova team's response to this has consistently been 
"prove to us that this has to be in Nova". Well, here's your answer.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Lance Bragstad


On 07/17/2017 10:12 PM, Lance Bragstad wrote:
>
>
> On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter  > wrote:
>
> So the application credentials spec has merged - huge thanks to
> Monty and the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> 
> 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html
> 
> 
>
> However, it appears that there was a disconnect in how two groups
> of folks were reading the spec that only became apparent towards
> the end of the process. Specifically, at this exact moment:
>
> 
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
> 
> 
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project
> (rather than the user that created them), because a consumer could
> surreptitiously create an application credential and continue to
> use that to access the OpenStack APIs even after their User
> account is deleted. The agreed solution was to delete the
> application credentials when the User that created them is
> deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of
> their applications for credential usage and rotate any credentials
> created by a soon-to-be-former team member *before* removing said
> team member's User account, or risk breakage. Basically we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps over leaving
> them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't
> think this solution is sufficient. Assuming that application
> credentials are stored on VMs in the project for use by the
> applications running on them, then anyone with access to those
> servers can obtain the credentials and continue to use them even
> if their own account is deleted. The solution to this is to rotate
> *all* application keys when a user is deleted. So really we're
> relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and*
> [potentially] leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if
> you revoke a role from a User then any application credentials
> they've created that rely on that role continue to work. It's only
> if you delete the User that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the
> fundamental problem:
>
> 1) Fine-grained user-defined access control. We can minimise the
> set of things that the application credentials are authorised to
> do. That's out of scope for this spec, but something we're already
> planning as a future enhancement.
> 2) Automated regular rotation of credentials. We can make sure
> that whatever a departing team member does manage to hang onto
> quickly becomes useless.
>
> By way of comparison, AWS does both. There's fine-grained defined
> access control in the form of IAM Roles, and these Roles can be
> associated with EC2 servers. The servers have an account with
> rotating keys provided through the metadata server. I can't find
> the exact period of rotation documented, but it's on the order of
> magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's
> 2017 not 2007 and the idea that there's no point offering to
> segment permissions at a finer grained level than that of a VM no
> longer holds water IMHO, thanks to SELinux and containers. It'd be
> nice to be able to provide multiple sets of credentials to
> different services running on a VM, and it's probably essential to
> our survival that we find a way to provide individual credentials
> to containers. Nevertheless, what they have does solve the problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way
> down. e.g. it's easy in principle to set up a Heat template with a
> Mistral workflow that will rotate the credentials for you, but
> they'll do so using trusts that are, in turn, tied back to the
> consumer who created the stack. (It suddenly occurs to me that
> this is a problem that 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-18 Thread Colleen Murphy
On Tue, Jul 18, 2017 at 1:39 AM, Zane Bitter  wrote:

> So the application credentials spec has merged - huge thanks to Monty and
> the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> http://specs.openstack.org/openstack/keystone-specs/specs/
> keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups of folks
> were reading the spec that only became apparent towards the end of the
> process. Specifically, at this exact moment:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
> /%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project (rather
> than the user that created them), because a consumer could surreptitiously
> create an application credential and continue to use that to access the
> OpenStack APIs even after their User account is deleted. The agreed
> solution was to delete the application credentials when the User that
> created them is deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of their
> applications for credential usage and rotate any credentials created by a
> soon-to-be-former team member *before* removing said team member's User
> account, or risk breakage. Basically we're relying on users to do the Right
> Thing (bad), but when they don't we're defaulting to breaking [some of]
> their apps over leaving them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't think
> this solution is sufficient. Assuming that application credentials are
> stored on VMs in the project for use by the applications running on them,
> then anyone with access to those servers can obtain the credentials and
> continue to use them even if their own account is deleted. The solution to
> this is to rotate *all* application keys when a user is deleted. So really
> we're relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and* [potentially]
> leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if you
> revoke a role from a User then any application credentials they've created
> that rely on that role continue to work. It's only if you delete the User
> that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the fundamental
> problem:
>
> 1) Fine-grained user-defined access control. We can minimise the set of
> things that the application credentials are authorised to do. That's out of
> scope for this spec, but something we're already planning as a future
> enhancement.
> 2) Automated regular rotation of credentials. We can make sure that
> whatever a departing team member does manage to hang onto quickly becomes
> useless.
>
> By way of comparison, AWS does both. There's fine-grained defined access
> control in the form of IAM Roles, and these Roles can be associated with
> EC2 servers. The servers have an account with rotating keys provided
> through the metadata server. I can't find the exact period of rotation
> documented, but it's on the order of magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's 2017 not
> 2007 and the idea that there's no point offering to segment permissions at
> a finer grained level than that of a VM no longer holds water IMHO, thanks
> to SELinux and containers. It'd be nice to be able to provide multiple sets
> of credentials to different services running on a VM, and it's probably
> essential to our survival that we find a way to provide individual
> credentials to containers. Nevertheless, what they have does solve the
> problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down. e.g.
> it's easy in principle to set up a Heat template with a Mistral workflow
> that will rotate the credentials for you, but they'll do so using trusts
> that are, in turn, tied back to the consumer who created the stack. (It
> suddenly occurs to me that this is a problem that all services using trusts
> are going to need to solve.) Somewhere it all has to be tied back to
> something that survives the entire lifecycle of the project.
>
> Would Keystone folks be happy to allow persistent credentials once we have
> a way to hand out only the minimum required privileges?
>

I agree that in the haste of getting this approved before the spec freeze
deadline we took this in the wrong direction. I think that this spec was
fine before the addition of the "Will be deleted when the associated User is
deleted" constraint.

As I understand it, the worry coming from the team is that a user who
sneakily copies the secret keys to an offsite 

Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-17 Thread Lance Bragstad
On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter  wrote:

> So the application credentials spec has merged - huge thanks to Monty and
> the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> http://specs.openstack.org/openstack/keystone-specs/specs/
> keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups of folks
> were reading the spec that only became apparent towards the end of the
> process. Specifically, at this exact moment:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
> /%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project (rather
> than the user that created them), because a consumer could surreptitiously
> create an application credential and continue to use that to access the
> OpenStack APIs even after their User account is deleted. The agreed
> solution was to delete the application credentials when the User that
> created them is deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of their
> applications for credential usage and rotate any credentials created by a
> soon-to-be-former team member *before* removing said team member's User
> account, or risk breakage. Basically we're relying on users to do the Right
> Thing (bad), but when they don't we're defaulting to breaking [some of]
> their apps over leaving them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't think
> this solution is sufficient. Assuming that application credentials are
> stored on VMs in the project for use by the applications running on them,
> then anyone with access to those servers can obtain the credentials and
> continue to use them even if their own account is deleted. The solution to
> this is to rotate *all* application keys when a user is deleted. So really
> we're relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and* [potentially]
> leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if you
> revoke a role from a User then any application credentials they've created
> that rely on that role continue to work. It's only if you delete the User
> that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the fundamental
> problem:
>
> 1) Fine-grained user-defined access control. We can minimise the set of
> things that the application credentials are authorised to do. That's out of
> scope for this spec, but something we're already planning as a future
> enhancement.
> 2) Automated regular rotation of credentials. We can make sure that
> whatever a departing team member does manage to hang onto quickly becomes
> useless.
>
> By way of comparison, AWS does both. There's fine-grained defined access
> control in the form of IAM Roles, and these Roles can be associated with
> EC2 servers. The servers have an account with rotating keys provided
> through the metadata server. I can't find the exact period of rotation
> documented, but it's on the order of magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's 2017 not
> 2007 and the idea that there's no point offering to segment permissions at
> a finer grained level than that of a VM no longer holds water IMHO, thanks
> to SELinux and containers. It'd be nice to be able to provide multiple sets
> of credentials to different services running on a VM, and it's probably
> essential to our survival that we find a way to provide individual
> credentials to containers. Nevertheless, what they have does solve the
> problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down. e.g.
> it's easy in principle to set up a Heat template with a Mistral workflow
> that will rotate the credentials for you, but they'll do so using trusts
> that are, in turn, tied back to the consumer who created the stack. (It
> suddenly occurs to me that this is a problem that all services using trusts
> are going to need to solve.) Somewhere it all has to be tied back to
> something that survives the entire lifecycle of the project.
>
> Would Keystone folks be happy to allow persistent credentials once we have
> a way to hand out only the minimum required privileges?
>

If I'm understanding correctly, this would make application credentials
dependent on several cycles of policy work. Right?


>
> If not I think we're back to https://review.openstack.org/#/c/93/
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage 

[openstack-dev] [keystone][nova] Persistent application credentials

2017-07-17 Thread Zane Bitter
So the application credentials spec has merged - huge thanks to Monty 
and the Keystone team for getting this done:


https://review.openstack.org/#/c/450415/
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html

However, it appears that there was a disconnect in how two groups of 
folks were reading the spec that only became apparent towards the end of 
the process. Specifically, at this exact moment:


http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59

To summarise, Keystone folks are uncomfortable with the idea of 
application credentials that share the lifecycle of the project (rather 
than the user that created them), because a consumer could 
surreptitiously create an application credential and continue to use 
that to access the OpenStack APIs even after their User account is 
deleted. The agreed solution was to delete the application credentials 
when the User that created them is deleted, thus tying the lifecycle to 
that of the User.


This means that teams using this feature will need to audit all of their 
applications for credential usage and rotate any credentials created by 
a soon-to-be-former team member *before* removing said team member's 
User account, or risk breakage. Basically we're relying on users to do 
the Right Thing (bad), but when they don't we're defaulting to breaking 
[some of] their apps over leaving them insecure (all things being equal, 
good).


Unfortunately, if we do regard this as a serious problem, I don't think 
this solution is sufficient. Assuming that application credentials are 
stored on VMs in the project for use by the applications running on 
them, then anyone with access to those servers can obtain the 
credentials and continue to use them even if their own account is 
deleted. The solution to this is to rotate *all* application keys when a 
user is deleted. So really we're relying on users to do the Right Thing 
(bad), but when they don't we're defaulting to breaking [some of] their 
apps *and* [potentially] leaving them insecure (worst possible combination).


(We're also being inconsistent, because according to the spec if you 
revoke a role from a User then any application credentials they've 
created that rely on that role continue to work. It's only if you delete 
the User that they're revoked.)



As far as I can see, there are only two solutions to the fundamental 
problem:


1) Fine-grained user-defined access control. We can minimise the set of 
things that the application credentials are authorised to do. That's out 
of scope for this spec, but something we're already planning as a future 
enhancement.
2) Automated regular rotation of credentials. We can make sure that 
whatever a departing team member does manage to hang onto quickly 
becomes useless.


By way of comparison, AWS does both. There's fine-grained defined access 
control in the form of IAM Roles, and these Roles can be associated with 
EC2 servers. The servers have an account with rotating keys provided 
through the metadata server. I can't find the exact period of rotation 
documented, but it's on the order of magnitude of 1 hour.


There's plenty not to like about this design. Specifically, it's 2017 
not 2007 and the idea that there's no point offering to segment 
permissions at a finer grained level than that of a VM no longer holds 
water IMHO, thanks to SELinux and containers. It'd be nice to be able to 
provide multiple sets of credentials to different services running on a 
VM, and it's probably essential to our survival that we find a way to 
provide individual credentials to containers. Nevertheless, what they 
have does solve the problem.


Note that there's pretty much no sane way for the user to automate 
credential rotation themselves, because it's turtles all the way down. 
e.g. it's easy in principle to set up a Heat template with a Mistral 
workflow that will rotate the credentials for you, but they'll do so 
using trusts that are, in turn, tied back to the consumer who created 
the stack. (It suddenly occurs to me that this is a problem that all 
services using trusts are going to need to solve.) Somewhere it all has 
to be tied back to something that survives the entire lifecycle of the 
project.


Would Keystone folks be happy to allow persistent credentials once we 
have a way to hand out only the minimum required privileges?


If not, I think we're back to https://review.openstack.org/#/c/93/

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-25 Thread joehuang
I think option 2 is better.

Best Regards
Chaoyi Huang (joehuang)

From: Lance Bragstad [lbrags...@gmail.com]
Sent: 25 May 2017 3:47
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] 
[keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

I'd like to fill in a little more context here. I see three options with the 
current two proposals.

Option 1

Use a special admin project to denote elevated privileges. For those unfamiliar 
with the approach, it would rely on every deployment having an "admin" project 
defined in configuration [0].

How it works:

Role assignments on this project represent global scope which is denoted by a 
boolean attribute in the token response. A user with an 'admin' role assignment 
on this project is equivalent to the global or cloud administrator. Ideally, if 
a user has a 'reader' role assignment on the admin project, they could have 
access to list everything within the deployment, provided all the proper changes 
are made across the various services. The workflow requires a special project 
for any sort of elevated privilege.

Pros:
- Almost all the work is done to make keystone understand the admin project, 
there are already several patches in review to other projects to consume this
- Operators can create roles and assign them to the admin_project as needed 
after the upgrade to give proper global scope to their users

Cons:
- All global assignments are linked back to a single project
- Describing the flow is confusing because in order to give someone global 
access you have to give them a role assignment on a very specific project, 
which seems like an anti-pattern
- We currently don't allow some things to exist in the global sense (i.e. I 
can't launch instances without tenancy), so the admin project could own resources
- What happens if the admin project disappears?
- Tooling or scripts will be written around the admin project, instead of 
treating all projects equally

Option 2

Implement global role assignments in keystone.

How it works:

Role assignments in keystone can be scoped to global context. Users can then 
ask for a globally scoped token

Pros:
- This approach represents a more accurate long term vision for role 
assignments (at least how we understand it today)
- Operators can create global roles and assign them as needed after the upgrade 
to give proper global scope to their users
- It's easier to explain global scope using global role assignments instead of 
a special project
- token.is_global = True and token.role = 'reader' is easier to understand than 
token.is_admin_project = True and token.role = 'reader'
- A global token can't be associated to a project, making it harder for 
operations that require a project to consume a global token (i.e. I shouldn't 
be able to launch an instance with a globally scoped token)

Cons:
- We need to start from scratch implementing global scope in keystone, steps 
for this are detailed in the spec

Option 3

We do option one and then follow it up with option two.

How it works:

We implement option one and continue solving the admin-ness issues in Pike by 
helping projects consume and enforce it. We then target the implementation of 
global roles for Queens.

Pros:
- If we make the interface in oslo.context for global roles consistent, then 
consuming projects shouldn't know the difference between using the 
admin_project or a global role assignment

Cons:
- It's more work and we're already strapped for resources
- We've told operators that the admin_project is a thing but after Queens they 
will be able to do real global role assignments, so they should now migrate 
*again*
- We have to support two paths for solving the same problem in keystone, more 
maintenance and more testing to ensure they both behave exactly the same way
  - This can get more complicated for projects dedicated to testing policy and 
RBAC, like Patrole


Looking for feedback here as to which one is preferred given timing and payoff, 
specifically from operators who would be doing the migrations to implement and 
maintain proper scope in their deployments.

Thanks for reading!


[0] 
https://github.com/openstack/keystone/blob/3d033df1c0fdc6cc9d2b02a702efca286371f2bd/etc/keystone.conf.sample#L2334-L2342

On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 
<lbrags...@gmail.com<mailto:lbrags...@gmail.com>> wrote:
Hey all,

To date we have two proposed solutions for tackling the admin-ness issue we 
have across the services. One builds on the existing scope concepts by scoping 
to an admin project [0]. The other introduces global role assignments [1] as a 
way to denote elevated privileges.

I'd like to get some feedback from operators, as well as developers from other 
projects, on each approach. Since work is required in keystone, it would be 
good to get consensus before spec freeze (

Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Adrian Turjak


On 25/05/17 07:47, Lance Bragstad wrote:

> *Option 2*
>
> Implement global role assignments in keystone.
>
> *How it works:*
>
> Role assignments in keystone can be scoped to global context. Users
> can then ask for a globally scoped token 
>
> Pros:
> - This approach represents a more accurate long term vision for role
> assignments (at least how we understand it today)
> - Operators can create global roles and assign them as needed after
> the upgrade to give proper global scope to their users
> - It's easier to explain global scope using global role assignments
> instead of a special project
> - token.is_global = True and token.role = 'reader' is easier to
> understand than token.is_admin_project = True and token.role = 'reader'
> - A global token can't be associated to a project, making it harder
> for operations that require a project to consume a global token (i.e.
> I shouldn't be able to launch an instance with a globally scoped token)
>
> Cons:
> - We need to start from scratch implementing global scope in keystone,
> steps for this are detailed in the spec
>

>
> On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad  > wrote:
>
> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness
> issue we have across the services. One builds on the existing
> scope concepts by scoping to an admin project [0]. The other
> introduces global role assignments [1] as a way to denote elevated
> privileges.
>
> I'd like to get some feedback from operators, as well as
> developers from other projects, on each approach. Since work is
> required in keystone, it would be good to get consensus before
> spec freeze (June 9th). If you have specific questions on either
> approach, feel free to ping me or drop by the weekly policy
> meeting [2].
>
> Thanks!
>

Please, option 2. The concept of being an "admin" while you are only
scoped to a project never really made sense: the admin role gives you
superuser power, yet you only have it when scoped to just that one
project. Global scope makes so much more sense when that is the power
the role gives.

At the same time, it kind of would be nice to make scope actually matter. As
admin you have a role on Project X, yet you can now (while scoped to
this project) pretty much do anything anywhere! I think global roles is
a great step in the right direction, but beyond and after that we need
to seriously start looking at making scope itself matter, so that giving
someone 'admin' or some such on a project actually only gives them
something akin to project_admin or some sort of admin-lite powers scoped
to that project and sub-projects. That though falls into the policy work
being done, but should be noted, as it is related.

Still, at least global scope for roles makes the superuser case make some
actual sense, because (and I can't speak for other deployers), we have
one project pretty much dedicated as an "admin_project" and it's just
odd to actually need to give our service users roles in a project when
that project is empty and a pointless construct for their purpose.

Also thanks for pushing this! I've been watching your global roles spec
review in hopes we'd go down that path. :)

-Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
I'd like to fill in a little more context here. I see three options with
the current two proposals.

*Option 1*

Use a special admin project to denote elevated privileges. For those
unfamiliar with the approach, it would rely on every deployment having an
"admin" project defined in configuration [0].

*How it works:*

Role assignments on this project represent global scope which is denoted by
a boolean attribute in the token response. A user with an 'admin' role
assignment on this project is equivalent to the global or cloud
administrator. Ideally, if a user has a 'reader' role assignment on the
admin project, they could have access to list everything within the
deployment, provided all the proper changes are made across the various
services. The workflow requires a special project for any sort of elevated
privilege.

Pros:
- Almost all the work is done to make keystone understand the admin
project, there are already several patches in review to other projects to
consume this
- Operators can create roles and assign them to the admin_project as needed
after the upgrade to give proper global scope to their users

Cons:
- All global assignments are linked back to a single project
- Describing the flow is confusing because in order to give someone global
access you have to give them a role assignment on a very specific project,
which seems like an anti-pattern
- We currently don't allow some things to exist in the global sense (i.e. I
can't launch instances without tenancy), so the admin project could own
resources
- What happens if the admin project disappears?
- Tooling or scripts will be written around the admin project, instead of
treating all projects equally
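
To make option 1 a bit more concrete: the flag lives in the token body, so in a
deployment that has pointed the admin project options in keystone.conf (see [0])
at an "admin" project, a project-scoped auth roughly like the following (the
URL, user, and password are made up) would come back flagged:

import requests

AUTH_URL = "http://controller:5000/v3/auth/tokens"   # made-up endpoint
body = {"auth": {
    "identity": {"methods": ["password"],
                 "password": {"user": {"name": "alice",
                                       "domain": {"id": "default"},
                                       "password": "secret"}}},
    "scope": {"project": {"name": "admin",            # the configured admin project
                          "domain": {"id": "default"}}}}}
resp = requests.post(AUTH_URL, json=body)
token = resp.json()["token"]
# When the scoped project matches the configured admin project, keystone
# should mark the token; services then key policy checks off that flag.
print(token.get("is_admin_project"), [r["name"] for r in token["roles"]])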

*Option 2*

Implement global role assignments in keystone.

*How it works:*

Role assignments in keystone can be scoped to global context. Users can
then ask for a globally scoped token

Pros:
- This approach represents a more accurate long term vision for role
assignments (at least how we understand it today)
- Operators can create global roles and assign them as needed after the
upgrade to give proper global scope to their users
- It's easier to explain global scope using global role assignments instead
of a special project
- token.is_global = True and token.role = 'reader' is easier to understand
than token.is_admin_project = True and token.role = 'reader'
- A global token can't be associated to a project, making it harder for
operations that require a project to consume a global token (i.e. I
shouldn't be able to launch an instance with a globally scoped token)

Cons:
- We need to start from scratch implementing global scope in keystone,
steps for this are detailed in the spec
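
Purely as a hypothetical illustration (nothing below exists today; the wire
format is whatever the spec [1] settles on), the difference from option 1 would
be that the scope section stops naming a project at all:

import requests

# Hypothetical request body -- illustrative only, not an existing API.
body = {"auth": {
    "identity": {"methods": ["password"],
                 "password": {"user": {"name": "alice",
                                       "domain": {"id": "default"},
                                       "password": "secret"}}},
    # instead of {"project": {...}}, scope would denote global context
    "scope": {"global": True}}}
resp = requests.post("http://controller:5000/v3/auth/tokens", json=body)
# Today's keystone just rejects this; with the spec implemented the token
# body would carry something like is_global = True instead of a project.
print(resp.status_code, resp.json())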

*Option 3*

We do option one and then follow it up with option two.

*How it works:*

We implement option one and continue solving the admin-ness issues in Pike
by helping projects consume and enforce it. We then target the
implementation of global roles for Queens.

Pros:
- If we make the interface in oslo.context for global roles consistent,
then consuming projects shouldn't know the difference between using the
admin_project or a global role assignment

Cons:
- It's more work and we're already strapped for resources
- We've told operators that the admin_project is a thing but after Queens
they will be able to do real global role assignments, so they should now
migrate *again*
- We have to support two paths for solving the same problem in keystone,
more maintenance and more testing to ensure they both behave exactly the
same way
  - This can get more complicated for projects dedicated to testing policy
and RBAC, like Patrole


Looking for feedback here as to which one is preferred given timing and
payoff, specifically from operators who would be doing the migrations to
implement and maintain proper scope in their deployments.

Thanks for reading!


[0]
https://github.com/openstack/keystone/blob/3d033df1c0fdc6cc9d2b02a702efca286371f2bd/etc/keystone.conf.sample#L2334-L2342

On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 
wrote:

> Hey all,
>
> To date we have two proposed solutions for tackling the admin-ness issue
> we have across the services. One builds on the existing scope concepts by
> scoping to an admin project [0]. The other introduces global role
> assignments [1] as a way to denote elevated privileges.
>
> I'd like to get some feedback from operators, as well as developers from
> other projects, on each approach. Since work is required in keystone, it
> would be good to get consensus before spec freeze (June 9th). If you have
> specific questions on either approach, feel free to ping me or drop by the
> weekly policy meeting [2].
>
> Thanks!
>
> [0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
> [1] https://review.openstack.org/#/c/464763/
> [2] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
>
__
OpenStack Development Mailing List (not for usage 

[openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-24 Thread Lance Bragstad
Hey all,

To date we have two proposed solutions for tackling the admin-ness issue we
have across the services. One builds on the existing scope concepts by
scoping to an admin project [0]. The other introduces global role
assignments [1] as a way to denote elevated privileges.

I'd like to get some feedback from operators, as well as developers from
other projects, on each approach. Since work is required in keystone, it
would be good to get consensus before spec freeze (June 9th). If you have
specific questions on either approach, feel free to ping me or drop by the
weekly policy meeting [2].

Thanks!

[0] http://adam.younglogic.com/2017/05/fixing-bug-96869/
[1] https://review.openstack.org/#/c/464763/
[2] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][cinder][policy] policy meeting tomorrow

2017-05-16 Thread Lance Bragstad
Hey folks,

Sending out a reminder that we will have the policy meeting tomorrow [0].
The agenda [1] is already pretty full but we are going to need
cross-project involvement tomorrow considering the topics and impacts.

I'll be reviewing policy things in the morning so if anyone has questions
or wants to hash things out beforehand, come find me.

Thanks,

Lance

[0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
[1] https://etherpad.openstack.org/p/keystone-policy-meeting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][policy] policy goals and roadmap

2017-05-04 Thread Lance Bragstad
Hi all,

I spent some time today summarizing a discussion [0] about global roles. I
figured it would help build some context for next week as there are a
couple cross project policy/RBAC sessions at the Forum.

The first patch is a very general document trying to nail down our policy
goals [1]. The second is a proposed roadmap (given the existing patches and
direction) of how we can mitigate several of the security issues we face
today with policy across OpenStack [2].

Feel free to poke holes as it will hopefully lead to productive discussions
next week.

Thanks!


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-05-04.log.html#t2017-05-04T15:00:41
[1] https://review.openstack.org/#/c/460344/7
[2] https://review.openstack.org/#/c/462733/3
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][neutron][cinder] Limiting RPC traffic with keystoneauth

2017-03-02 Thread Lance Bragstad
Post PTG there has been some discussion regarding quotas as well as limits.
While most of the discussion has been off and on in #openstack-dev, we also
have a mailing list thread on the topic [0].

I don't want to derail the thread on quotas and limits with this thread,
but today's discussion [1] highlighted an interesting optimization we could
make with keystoneauth and the service catalog. It seemed appropriate to
have it in its own thread.

We were trying to figure out where to advertise limits from keystone for
quota calculations. The one spot we knew we didn't want it was the service
catalog or token body. Sean elaborated on the stuff that nova does with
context that filters the catalog to only contain certain things it assumes
other parts of nova might need later [2] before putting the token on the
message bus. From an RPC load perspective, this is obviously better than
putting the *entire* token on the message bus, but could we take it one
step further? Couldn't we leverage keystone's GET /v3/auth/catalog/ API [3]
in keystoneauth to re-inflate the catalog in the services that need to make
calls to other services (i.e. nova-compute needing to talk to cinder or
neutron)?
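
Roughly the kind of call I have in mind (just a sketch; the auth_url, token, and
project_id would come from the deserialized request context, and the API is the
one in [3]):

from keystoneauth1 import session
from keystoneauth1.identity import v3

def reinflate_catalog(auth_url, token, project_id):
    # Re-scope the token that arrived over RPC (with its catalog stripped)
    # and ask keystone for the catalog directly, rather than shipping the
    # whole thing around on the message bus.
    auth = v3.Token(auth_url=auth_url, token=token, project_id=project_id)
    sess = session.Session(auth=auth)
    resp = sess.get('/auth/catalog',
                    endpoint_filter={'service_type': 'identity',
                                     'version': (3, 0)})
    return resp.json()['catalog']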

I don't think we'd be reducing the number of things put on the queue, just
the overall size of the message. I wanted to start this thread to get the
idea in front of a wider audience, specifically projects that lean heavily
on RPC for inter-service communication. Most of the changes would be in
keystoneauth to do the needful if the token doesn't have a catalog. After
that, each service would have to identify if/where it does any filtering of
the service catalog before placing the token on the message bus.

Thoughts?


[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113099.html
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-03-02.log.html#t2017-03-02T13:49:19
[2]
https://github.com/openstack/nova/blob/37cd9a961b065a07352b49ee72394cb210d8838b/nova/context.py#L102-L106
[3]
https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=get-service-catalog-detail#authentication-and-token-management
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-03-02 Thread Chris Dent

On Mon, 27 Feb 2017, Sean Dague wrote:


However, when there is magic applied it means that stops being true. And
now folks think the APIs work like the magic works, not realizing it's
all client side magic, and when they try to do this in node next month,
it will all fall apart.


+many

It's good we have a plan (elsewhere in the thread) to get things
smooth again, but we should also see if we can articulate something
along the lines of "design goals" so that this kind of thing is
decreasingly common.

We've become relatively good at identifying when the problem exists:
If you find yourself justifying some cruft on side A for behavior on
side B we know that's a problem for other users of B. What we're less
good at is evolving B quickly enough such that A doesn't have to
compensate. There's likely no easy solution that also accounts for
compatibility.

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Jamie Lennox
On 27 February 2017 at 08:56, Sean Dague  wrote:

> We recently implemented a Nova feature around validating that the project_id
> for quotas was real in keystone. After that merged, TripleO builds
> started to fail because their undercloud did not specify the 'identity'
> service as the unversioned endpoint.
>
> https://github.com/openstack/nova/blob/8b498ce199ac4acd94eb33a7f47c05
> ee0c743c34/nova/api/openstack/identity.py#L34-L36
> - (code merged in Nova).
>
> After some debug, it was clear that '/v2.0/v3/projects/...' was what was
> being called. And after lots of conferring in the Keystone room, we
> definitely made sure that the code in question was correct. The thing I
> wanted to do was make the failure more clear.
>
> The suggestion was made to use the following code approach:
>
> https://review.openstack.org/#/c/438049/6/nova/api/openstack/identity.py
>
> resp = sess.get('/projects/%s' % project_id,
> endpoint_filter={
> 'service_type': 'identity',
> 'version': (3, 0)
> },
> raise_exc=False)
>
>
> However, I tested that manually with an identity =>
> http:///v2.0 endpoint, and it passes. Which confused me.
>
> Until I found this -
> https://github.com/openstack/keystoneauth/blob/
> 3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/discover.py#L313
>
> keystoneauth is specifically coding around the keystone transition from a
> versioned /v2.0 endpoint to an unversioned one.
>
>
> While that is good for the python ecosystem using it, it's actually
> *quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
> js, php), because it means that all other facilities need the same work
> around. I actually wonder if this is one of the in the field reasons for
> why the transition from v2 -> v3 is going slow. That's actually going to
> potentially break a lot of software.
>
> It feels like this whole discovery version hack bit should be removed -
> https://review.openstack.org/#/c/438483/. It also feels like a migration
> path for non python software in changing the catalog entries needs to be
> figured out as well.
>
> I think on the Nova side we need to go back to looking for bogus
> endpoint because we don't want issues like this hidden from us.
>
> -Sean


So I would completely agree, I would like to see this behaviour
removed. However
it was done very intentionally - and at the time it was written it was
needed.

This is one of a number of situations where keystoneauth tried its best to
paper over inconsistencies in OpenStack APIs, because, to various levels of
effectiveness, almost all the python clients were doing this. And whilst we
have slowly pushed the documentation and standard deployment procedures to
unversioned URLs, maintaining this hack in keystoneauth meant we didn't
have to fix it individually for every client.

Where python and keystoneauth are different from every other language is
that the services themselves are written in python and use these
libraries, and inter-service communication had to continue to work
throughout the transition. You may remember the fun we had trying to change
to v3 auth and unversioned URLs in devstack? This hack is what made it
possible at all. As you say this is extremely difficult for other
languages, but it's something there isn't a solution for whilst this
transition is in place.

Anyway a few cycles later we are in a different position and a new service
such as the placement API can decide that it shouldn't work at all if the
catalog isn't configured as OpenStack advises. This is great! We can
effectively force deployments to transition to unversioned URLs. We can't
change the default behaviour in keystoneauth but it should be relatively
easy to give you an adapter that doesn't do this. Probably something like
[1]. I also filed it as a bug, which links to this thread [2], but could
otherwise do with some more detail.

Long story short, sorry but it'll have to be a new flag. Yes, keystoneauth
is supposed to be a low-level request maker, but it is also trying to paper
over a number of historical bad decisions so at the very least the user
experience is correct and we don't have clients re-inventing it themselves.

[1] https://review.openstack.org/#/c/438788/
[2] https://bugs.launchpad.net/keystoneauth/+bug/1668484
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Sean Dague
On 02/27/2017 10:49 AM, Monty Taylor wrote:
> On 02/27/2017 09:36 AM, Morgan Fainberg wrote:
>>
>>
>> On Mon, Feb 27, 2017 at 7:26 AM, Sean Dague > > wrote:
>>
>> On 02/27/2017 10:22 AM, Morgan Fainberg wrote:
>> 
>> > I agree we should kill the discovery hack, however that is a break in
>> > the keystoneauth contract. Simply put, we cannot. Keystoneauth is one 
>> of
>> > the few things (similar to how shade works) where behavior, exposed
>> > elements, etc are considered a strict contract that will not change. If
>> > we could have avoided stevedore and PBR we would have.
>> >
>> > The best we can provide is a way to build the instances from
>> > keystoneauth that does not include that hack.
>> >
>> > The short is, we can't remove it. Similar to how we cannot change the
>> > raise of exceptions for non-200 responses (the behavior is already 
>> encoded).
>>
>> Ok, I'm going to go back to not using the version= parameter then.
>> Because it's not actually doing the right thing.
>>
>> I also am a bit concerned that basically through some client changes
>> that people didn't understand, we've missed a break in the upstream
>> transition that will impact real clouds. :(
>>
>>
>> We can make an adapter that does what you want (requests adapters are
>> cool). I was just chatting with Monty about this, and we can help you
>> out on this front.
>>
>> The adapter should make things a lot easier once the lifting is done. 
> 
> Yah. Consider me to be on this. Looking at the code you've got to make
> intra-openstack rest calls makes me want to poke out my own eyeballs. It
> does _not_ have to be this hard or this brittle.
> 
> It'll likely take a few days and a thing or two to unwind.

I'm definitely happy if there are better ways to do it.

But, I'm also concerned about the bigger picture. I thought keystoneauth
was giving a pretty low level REST interface, which is good, because it
means we can use and think about the services as they are documented in
the api-ref.

However, when there is magic applied it means that stops being true. And
now folks think the APIs work like the magic works, not realizing it's
all client side magic, and when they try to do this in node next month,
it will all fall apart.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Monty Taylor
On 02/27/2017 09:36 AM, Morgan Fainberg wrote:
> 
> 
> On Mon, Feb 27, 2017 at 7:26 AM, Sean Dague  > wrote:
> 
> On 02/27/2017 10:22 AM, Morgan Fainberg wrote:
> 
> > I agree we should kill the discovery hack, however that is a break in
> > the keystoneauth contract. Simply put, we cannot. Keystoneauth is one of
> > the few things (similar to how shade works) where behavior, exposed
> > elements, etc are considered a strict contract that will not change. If
> > we could have avoided stevedore and PBR we would have.
> >
> > The best we can provide is a way to build the instances from
> > keystoneauth that does not include that hack.
> >
> > The short is, we can't remove it. Similar to how we cannot change the
> > raise of exceptions for non-200 responses (the behavior is already 
> encoded).
> 
> Ok, I'm going to go back to not using the version= parameter then.
> Because it's not actually doing the right thing.
> 
> I also am a bit concerned that basically through some client changes
> that people didn't understand, we've missed a break in the upstream
> transition that will impact real clouds. :(
> 
> 
> We can make an adapter that does what you want (requests adapters are
> cool). I was just chatting with Monty about this, and we can help you
> out on this front.
> 
> The adapter should make things a lot easier once the lifting is done. 

Yah. Consider me to be on this. Looking at the code you've got to make
intra-openstack rest calls makes me want to poke out my own eyeballs. It
does _not_ have to be this hard or this brittle.

It'll likely take a few days and a thing or two to unwind.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Morgan Fainberg
On Mon, Feb 27, 2017 at 7:26 AM, Sean Dague  wrote:

> On 02/27/2017 10:22 AM, Morgan Fainberg wrote:
> 
> > I agree we should kill the discovery hack, however that is a break in
> > the keystoneauth contract. Simply put, we cannot. Keystoneauth is one of
> > the few things (similar to how shade works) where behavior, exposed
> > elements, etc are considered a strict contract that will not change. If
> > we could have avoided stevedore and PBR we would have.
> >
> > The best we can provide is a way to build the instances from
> > keystoneauth that does not include that hack.
> >
> > The short is, we can't remove it. Similar to how we cannot change the
> > raise of exceptions for non-200 responses (the behavior is already
> encoded).
>
> Ok, I'm going to go back to not using the version= parameter then.
> Because it's not actually doing the right thing.
>
> I also am a bit concerned that basically through some client changes
> that people didn't understand, we've missed a break in the upstream
> transition that will impact real clouds. :(
>
>
We can make an adapter that does what you want (requests adapters are
cool). I was just chatting with Monty about this, and we can help you out
on this front.

The adapter should make things a lot easier once the lifting is done.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Morgan Fainberg
On Mon, Feb 27, 2017 at 5:56 AM, Sean Dague  wrote:

> We recently implemented a Nova feature around validating that the project_id
> for quotas was real in keystone. After that merged, TripleO builds
> started to fail because their undercloud did not specify the 'identity'
> service as the unversioned endpoint.
>
> https://github.com/openstack/nova/blob/8b498ce199ac4acd94eb33a7f47c05
> ee0c743c34/nova/api/openstack/identity.py#L34-L36
> - (code merged in Nova).
>
> After some debug, it was clear that '/v2.0/v3/projects/...' was what was
> being called. And after lots of conferring in the Keystone room, we
> definitely made sure that the code in question was correct. The thing I
> wanted to do was make the failure more clear.
>
> The suggestion was made to use the following code approach:
>
> https://review.openstack.org/#/c/438049/6/nova/api/openstack/identity.py
>
> resp = sess.get('/projects/%s' % project_id,
> endpoint_filter={
> 'service_type': 'identity',
> 'version': (3, 0)
> },
> raise_exc=False)
>
>
> However, I tested that manually with an identity =>
> http:///v2.0 endpoint, and it passes. Which confused me.
>
> Until I found this -
> https://github.com/openstack/keystoneauth/blob/
> 3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/discover.py#L313
>
> keystoneauth is specifically coding around the keystone transition from a
> versioned /v2.0 endpoint to an unversioned one.
>
>
> While that is good for the python ecosystem using it, it's actually
> *quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
> js, php), because it means that all other facilities need the same work
> around. I actually wonder if this is one of the in the field reasons for
> why the transition from v2 -> v3 is going slow. That's actually going to
> potentially break a lot of software.
>
> It feels like this whole discovery version hack bit should be removed -
> https://review.openstack.org/#/c/438483/. It also feels like a migration
> path for non python software in changing the catalog entries needs to be
> figured out as well.
>
> I think on the Nova side we need to go back to looking for bogus
> endpoint because we don't want issues like this hidden from us.
>
>
I agree we should kill the discovery hack, however that is a break in the
keystoneauth contract. Simply put, we cannot. Keystoneauth is one of the
few things (similar to how shade works) where behavior, exposed elements,
etc are considered a strict contract that will not change. If we could have
avoided stevedore and PBR we would have.

The best we can provide is a way to build the instances from keystoneauth
that does not include that hack.

The short is, we can't remove it. Similar to how we cannot change the raise
of exceptions for non-200 responses (the behavior is already encoded).

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Sean Dague
On 02/27/2017 10:22 AM, Morgan Fainberg wrote:

> I agree we should kill the discovery hack, however that is a break in
> the keystoneauth contract. Simply put, we cannot. Keystoneauth is one of
> the few things (similar to how shade works) where behavior, exposed
> elements, etc are considered a strict contract that will not change. If
> we could have avoided stevedore and PBR we would have.
> 
> The best we can provide is a way to build the instances from
> keystoneauth that does not include that hack.
> 
> The short is, we can't remove it. Similar to how we cannot change the
> raise of exceptions for non-200 responses (the behavior is already encoded).

Ok, I'm going to go back to not using the version= parameter then.
Because it's not actually doing the right thing.

I also am a bit concerned that basically through some client changes
that people didn't understand, we've missed a break in the upstream
transition that will impact real clouds. :(

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-02-27 Thread Sean Dague
We recently implemented a Nova feature around validating that the project_id
for quotas was real in keystone. After that merged, TripleO builds
started to fail because their undercloud did not specify the 'identity'
service as the unversioned endpoint.

https://github.com/openstack/nova/blob/8b498ce199ac4acd94eb33a7f47c05ee0c743c34/nova/api/openstack/identity.py#L34-L36
- (code merged in Nova).

After some debug, it was clear that '/v2.0/v3/projects/...' was what was
being called. And after lots of conferring in the Keystone room, we
definitely made sure that the code in question was correct. The thing I
wanted to do was make the failure more clear.

The suggestion was made to use the following code approach:

https://review.openstack.org/#/c/438049/6/nova/api/openstack/identity.py

resp = sess.get('/projects/%s' % project_id,
endpoint_filter={
'service_type': 'identity',
'version': (3, 0)
},
raise_exc=False)


However, I tested that manually with an identity =>
http:///v2.0 endpoint, and it passes. Which confused me.

Until I found this -
https://github.com/openstack/keystoneauth/blob/3364703d3b0e529f7c1b7d1d8ea81726c4f5f121/keystoneauth1/discover.py#L313

keystoneauth is specifically coding around the keystone transition from a
versioned /v2.0 endpoint to an unversioned one.


While that is good for the python ecosystem using it, it's actually
*quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
js, php), because it means that all other facilities need the same work
around. I actually wonder if this is one of the in the field reasons for
why the transition from v2 -> v3 is going slow. That's actually going to
potentially break a lot of software.
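
For example, with an unversioned 'identity' entry in the catalog, a non-python
client has to do the version discovery dance itself, something along these
lines (the host is made up):

import requests

root = "http://controller:5000/"   # unversioned catalog entry
# Keystone answers the unversioned root with the list of available API
# versions and their self links; the client picks the v3 link from here
# and builds /v3/projects/<id> style URLs off it.
doc = requests.get(root).json()
v3 = [v for v in doc["versions"]["values"] if v["id"].startswith("v3")]
print([link["href"] for v in v3 for link in v["links"] if link["rel"] == "self"])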

It feels like this whole discovery version hack bit should be removed -
https://review.openstack.org/#/c/438483/. It also feels like a migration
path for non python software in changing the catalog entries needs to be
figured out as well.

I think on the Nova side we need to go back to looking for bogus
endpoint because we don't want issues like this hidden from us.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-20 Thread Corey Bryant
On Thu, Jan 19, 2017 at 9:50 PM, Corey Bryant 
wrote:

>
>
> On Thu, Jan 19, 2017 at 8:29 PM, Joshua Harlow 
> wrote:
>
>> Corey Bryant wrote:
>>
>>>
>>> Added [nova] and [oslo] to the subject.  This is also affecting nova and
>>> oslo.middleware.  I know Sean's initial response on the thread was that
>>> this shouldn't be a priority for ocata but we're completely blocked by
>>> it.  Would those teams be able to prioritize a fix for this?
>>>
>>>
>> Is this the issue for that https://github.com/Pylons/webob/issues/307 ?
>>
>>
> Yes, at least for glance that is part of the issue, the dropping of the
> http_method_probably_has_body check.
>
>
>> If so, then perhaps we need to comment and work together on that and
>> introduce a fix into webob? Would that be the correct path here? What
>> otherwise would be needed to 'prioritize a fix' for it?
>>
>>
> That doesn't appear to be a bug in webob from what I can see in the issue
> 307 discussion, just a change of behavior that various projects need to
> adapt to if they're going to support webob 1.7.x.
>
>
>
Debian just uploaded python-webob 1:1.6.2-2 to replace 1.7.0-1.  I didn't
know this was an option, so I apologize for the noise.  Thanks to those who
already started on patches.  I imagine they'll be needed in Pike.  We'll
get 1:1.6.2-2 synced over to zesty and that should solve our webob problems
in Ocata.

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Corey Bryant
On Thu, Jan 19, 2017 at 8:29 PM, Joshua Harlow 
wrote:

> Corey Bryant wrote:
>
>>
>> Added [nova] and [oslo] to the subject.  This is also affecting nova and
>> oslo.middleware.  I know Sean's initial response on the thread was that
>> this shouldn't be a priority for ocata but we're completely blocked by
>> it.  Would those teams be able to prioritize a fix for this?
>>
>>
> Is this the issue for that https://github.com/Pylons/webob/issues/307 ?
>
>
Yes, at least for glance that is part of the issue, the dropping of the
http_method_probably_has_body check.


> If so, then perhaps we need to comment and work together on that and
> introduce a fix into webob? Would that be the correct path here? What
> otherwise would be needed to 'prioritize a fix' for it?
>
>
That doesn't appear to be a bug in webob from what I can see in the issue
307 discussion, just a change of behavior that various projects need to
adapt to if they're going to support webob 1.7.x.

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Joshua Harlow

Corey Bryant wrote:


Added [nova] and [oslo] to the subject.  This is also affecting nova and
oslo.middleware.  I know Sean's initial response on the thread was that
this shouldn't be a priority for ocata but we're completely blocked by
it.  Would those teams be able to prioritize a fix for this?



Is this the issue for that https://github.com/Pylons/webob/issues/307 ?

If so, then perhaps we need to comment and work together on that and 
introduce a fix into webob? Would that be the correct path here? What 
otherwise would be needed to 'prioritize a fix' for it?


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Corey Bryant
On Thu, Jan 19, 2017 at 11:34 AM, Corey Bryant 
wrote:

>
>
> On Thu, Jan 19, 2017 at 10:46 AM, Ian Cordasco 
> wrote:
>
>> -Original Message-
>> From: Corey Bryant 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: January 19, 2017 at 08:52:25
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  Re: [openstack-dev] [keystone] webob 1.7
>>
>> > On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco
>> > wrote:
>> >
>> > > -Original Message-
>> > > From: Chuck Short
>> > > Reply: OpenStack Development Mailing List (not for usage questions)
>> > >
>> > > Date: January 18, 2017 at 08:01:46
>> > > To: OpenStack Development Mailing List
>> > > Subject: [openstack-dev] [keystone] webob 1.7
>> > >
>> > > > Hi
>> > > >
>> > > > We have been experiencing problems with newer versions of webob (webob
>> > > 1.7).
>> > > > Reading the changelog, it seems that the upstream developers have
>> > > > introduced some backwards incompatibility with previous versions of
>> webob
>> > > > that seems to be hitting keystone and possibly other projects as
>> well
>> > > > (nova/glance in particular). For keystone this bug has been
>> reported in
>> > > bug
>> > > > #1657452. I would just like to get more developer's eyes on this
>> > > particular
>> > > > issue and possibly get a fix. I suspect its starting to hit other
>> distros
>> > > > as well or already have hit.
>> > >
>> > > Hey Chuck,
>> > >
>> > > This is also affecting Glance
>> > > (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what
>> we'll
>> > > do for now is blacklist the 1.7.x releases in openstack/requirements.
>> > > It seems a bit late in the cycle to bump the minimum version to 1.7.0
>> > > so we can safely fix this without having to deal with
>> > > incompatibilities between versions.
>> > >
>> > > --
>> > > Ian Cordasco
>> > >
>> > > 
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > Hi Ian,
>> >
>> > Were you suggesting there's a new version of webob in the works that
>> fixes
>> > this so we could bump upper-constraints and blacklist 1.7.x?
>>
>> No. I was suggesting that OpenStack not try to work with the 1.7
>> series of WebOb.
>>
>>
> Ok
>
>
>> > Unfortunately at this point we're at webob 1.7.0 in Ubuntu and there's
>> no
>> > going backward for us. The corresponding bugs were already mentioned in
>> > this thread but worth noting again, these are the bugs tracking this:
>> >
>> > https://bugs.launchpad.net/nova/+bug/1657452
>> > https://bugs.launchpad.net/glance/+bug/1657459
>> >
>> > So far this affects nova, glance, and keystone (David has a patch in
>> review
>> > - https://review.openstack.org/#/c/422234/).
>>
>> I'll have to see if we can get that prioritized for Glance next week
>> as a bug fix candidate post Ocata-3. We decided our priorities for the
>> next week just a short while ago. I'm going to see if we can move it
>> onto this week's list though.
>>
>>
> Thanks, that would be great.
>
>
>
Added [nova] and [oslo] to the subject.  This is also affecting nova and
oslo.middleware.  I know Sean's initial response on the thread was that
this shouldn't be a priority for ocata but we're completely blocked by it.
Would those teams be able to prioritize a fix for this?

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-14 Thread Sajeesh Cimson Sasi
Hi,
I also feel that quota as a service is the best approach. It is justified as 
well, since we have multiple projects (Nova, Cinder, Neutron) now having the 
concept of quotas. Keeping it under a single umbrella paves the way for less 
code duplication and easier feature enhancements, like the adoption of 
hierarchical quotas, as well as better code management.
best regards,
 sajeesh


From: Andrey Volkov [avol...@mirantis.com]
Sent: 14 December 2016 17:00:32
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][nova]Quotas: Store resources and
limits in the Keystone

Hi,

I think one of the issues we're trying to solve here is reducing
duplication. Quotas in OpenStack commonly contain two parts: limits
management and limits enforcement.

If we're talking about a library (delimiter) vs a service (keystone or a quota
service) for reducing duplication in limits management, then IMO a service is
the more appropriate way because:

- We already have API endpoints for limits
  management for the end user and we will need them in the future. A library can
  reduce the amount of code for such an endpoint but can't totally eliminate it.
- Besides the code in services, we also have api-ref, docs, and clients,
  and a library can't reduce the effort of supporting those.
- A centralized limits management service can provide a fresh and
  consistent API (see quota_class, quota_sets).

If we're talking about limits enforcement, then it's a more subtle thing.
I really agree with Jay that the problem can be related to the cache for
usages. And I don't see how we can skip saving into a reservation table,
because we can easily define the moment of reservation with a reservation
table, but that is hard with "real" objects like instances, as they have
their own creation logic.

I think a library can be appropriate if services like nova or cinder
have the possibility to deeply integrate external libraries (like django
apps). I mean, if a library has its own DB tables, cli commands, etc., it
can be seamlessly integrated into the main app. I'm not sure that's the
case for nova, for example. Therefore, for me, a separate service
is the winner here too.

Sajeesh Cimson Sasi writes:

> Hi,
> There was an ongoing project of delimiter for Cross Project Quota 
> Management.
> But I don't know the current status.
> Kindly have a look at it.
> https://review.openstack.org/#/c/284454/
> More discussions are required on this. As more and more projects or 
> services are having the concept of quotas, Quota as a service can also be 
> thought of. Anyway, more discussions are required on this topic.
>best regards,
> sajeesh
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: 13 December 2016 18:55:14
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone][nova]Quotas: Store resources and 
> limits in the Keystone
>
> On 12/13/2016 08:09 AM, Kseniya Tychkova wrote:
>> Hi,
>> I would like to share a spec [1] with you.
>> The main idea of this spec is to start a discussion about quota
>> management in the OpenStack.
>>
>> Quotas are scattered across OpenStack services. Each service defines
>> its own model and API for
>> managing resource's limits. Because of that, there are several problems:
>>
>>   * Names of the resources and resource-service mapping  are hardcoded.
>> They are hardcoded in the service code (Nova, for example) and it
>> should be hardcoded in the client code (Horizon, for example).
>>
>>   * There is no centralized quota management for OpenStack projects.
>>   * Cinder, Nova and Neutron support (or going to support) hierarchical
>> quotas in different ways.
>>
>> There should be a single point of managing quotas in OpenStack.
>> Keystone looks like a proper place to store resource's limits because:
>>
>>   * Keystone stores projects
>>   * Limits belong to a project.
>
> Another excellent reason to store quota limits in Keystone is because
> virtually all non-list operations require some interaction with quota
> limits, and requiring Nova (or Cinder or Neutron) to call out to yet
> another service each time the user makes one of those non-list
> operations is sub-optimal when Nova is already making a call to Keystone
> for authentication.
>
> The alternative is to have a separate REST API service just for storing
> and returning the quota limits and have Nova, Cinder and Neutron call
> this new service each time a non-list operation is made. While this is
> possible, it's just yet another service that needs to be managed and
> deployed by all installations of OpenStack.
>
> Best,
> -jay
>
>> There are a lot of

Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-14 Thread Andrey Volkov

Hi,

I think one of the issues we're trying to solve here is reducing
duplication. Quotas in OpenStack commonly contain two parts: limits
management and limits enforcement.

If we're talking about a library (delimiter) vs a service (keystone or a quota
service) for reducing duplication in limits management, then IMO a service is
the more appropriate way because:

- We already have API endpoints for limits
  management for the end user and we will need them in the future. A library can
  reduce the amount of code for such an endpoint but can't totally eliminate it.
- Besides the code in services, we also have api-ref, docs, and clients,
  and a library can't reduce the effort of supporting those.
- A centralized limits management service can provide a fresh and
  consistent API (see quota_class, quota_sets).

If we're talking about limits enforcement, then it's a more subtle thing.
I really agree with Jay that the problem can be related to the cache for
usages. And I don't see how we can skip saving into a reservation table,
because we can easily define the moment of reservation with a reservation
table, but that is hard with "real" objects like instances, as they have
their own creation logic.

I think a library can be appropriate if services like nova or cinder
have the possibility to deeply integrate external libraries (like django
apps). I mean, if a library has its own DB tables, cli commands, etc., it
can be seamlessly integrated into the main app. I'm not sure that's the
case for nova, for example. Therefore, for me, a separate service
is the winner here too.
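
To make the "service" option concrete, here is a purely hypothetical sketch of
what a centralized limits endpoint could look like (none of these paths or
fields exist today; they are made up for illustration):

import requests

KEYSTONE = "http://controller:5000/v3"   # made-up deployment
HEADERS = {"X-Auth-Token": "..."}        # placeholder token

# One registered default per (service, resource) pair, plus per-project
# overrides, so nova/cinder/neutron only enforce limits and never manage them.
defaults = requests.get(KEYSTONE + "/registered_limits", headers=HEADERS).json()
overrides = requests.get(KEYSTONE + "/limits",
                         params={"project_id": "abc123"},   # made-up project id
                         headers=HEADERS).json()
print(defaults, overrides)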

Sajeesh Cimson Sasi writes:

> Hi,
> There was an ongoing project of delimiter for Cross Project Quota 
> Management.
> But I don't know the current status.
> Kindly have a look at it.
> https://review.openstack.org/#/c/284454/
> More discussions are required on this. As more and more projects or 
> services are having the concept of quotas, Quota as a service can also be 
> thought of. Anyway, more discussions are required on this topic.
>best regards,
> sajeesh
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: 13 December 2016 18:55:14
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone][nova]Quotas: Store resources and 
> limits in the Keystone
>
> On 12/13/2016 08:09 AM, Kseniya Tychkova wrote:
>> Hi,
>> I would like to share a spec [1] with you.
>> The main idea of this spec is to start a discussion about quota
>> management in the OpenStack.
>>
>> Quotas are scattered across OpenStack services. Each service defines
>> its own model and API for
>> managing resource's limits. Because of that, there are several problems:
>>
>>   * Names of the resources and resource-service mapping  are hardcoded.
>> They are hardcoded in the service code (Nova, for example) and it
>> should be hardcoded in the client code (Horizon, for example).
>>
>>   * There is no centralized quota management for OpenStack projects.
>>   * Cinder, Nova and Neutron support (or going to support) hierarchical
>> quotas in different ways.
>>
>> There should be a single point of managing quotas in OpenStack.
>> Keystone looks like a proper place to store resource's limits because:
>>
>>   * Keystone stores projects
>>   * Limits belong to a project.
>
> Another excellent reason to store quota limits in Keystone is because
> virtually all non-list operations require some interaction with quota
> limits, and requiring Nova (or Cinder or Neutron) to call out to yet
> another service each time the user makes one of those non-list
> operations is sub-optimal when Nova is already making a call to Keystone
> for authentication.
>
> The alternative is to have a separate REST API service just for storing
> and returning the quota limits and have Nova, Cinder and Neutron call
> this new service each time a non-list operation is made. While this is
> possible, it's just yet another service that needs to be managed and
> deployed by all installations of OpenStack.
>
> Best,
> -jay
>
>> There are a lot of possible issues with “store limits in Keystone”
>> approach. But all of them can be discussed
>> and such discussion should lead to the good solution for quotas
>> management  in Openstack.
>>
>> Please take a look at the spec when you have time and share your ideas
>> or concerns.
>>
>> [1] https://review.openstack.org/#/c/363765/
>>
>>
>> Kind regards,
>> Kseniya
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: opensta

Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-13 Thread Jay Pipes

On 12/13/2016 11:27 AM, Sajeesh Cimson Sasi wrote:

Hi,
There was an ongoing project of delimiter for Cross Project Quota 
Management.
But I don't know the current status.
Kindly have a look at it.
https://review.openstack.org/#/c/284454/
More discussions are required on this. As more and more projects or 
services are having the concept of quotas, Quota as a service can also be 
thought of. Anyway, more discussions are required on this topic.


I raised objections to having a separate endpoint process quota *usages* 
when the Delimiter project was proposed:


http://openstack.markmail.org/message/7ixvezcsj3uyiro6

I stand by those objections for the reasons stated in the above ML thread.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-13 Thread Sajeesh Cimson Sasi
Hi,
There was an ongoing project of delimiter for Cross Project Quota 
Management.
But I don't know the current status.
Kindly have a look at it.
https://review.openstack.org/#/c/284454/
More discussions are required on this. As more and more projects or 
services are having the concept of quotas, Quota as a Service can also be 
thought of. Anyway, more discussion is required on this topic.
   best regards,
sajeesh

From: Jay Pipes [jaypi...@gmail.com]
Sent: 13 December 2016 18:55:14
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits 
in the Keystone

On 12/13/2016 08:09 AM, Kseniya Tychkova wrote:
> Hi,
> I would like to share a spec [1] with you.
> The main idea of this spec is to start a discussion about quota
> management in the OpenStack.
>
> Quotas are scattered across OpenStack services. Each service defines
> its own model and API for
> managing resource's limits. Because of that, there are several problems:
>
>   * Names of the resources and resource-service mapping  are hardcoded.
> They are hardcoded in the service code (Nova, for example) and it
> should be hardcoded in the client code (Horizon, for example).
>
>   * There is no centralized quota management for OpenStack projects.
>   * Cinder, Nova and Neutron support (or going to support) hierarchical
> quotas in different ways.
>
> There should be a single point of managing quotas in OpenStack.
> Keystone looks like a proper place to store resource's limits because:
>
>   * Keystone stores projects
>   * Limits belong to a project.

Another excellent reason to store quota limits in Keystone is because
virtually all non-list operations require some interaction with quota
limits, and requiring Nova (or Cinder or Neutron) to call out to yet
another service each time the user makes one of those non-list
operations is sub-optimal when Nova is already making a call to Keystone
for authentication.

The alternative is to have a separate REST API service just for storing
and returning the quota limits and have Nova, Cinder and Neutron call
this new service each time a non-list operation is made. While this is
possible, it's just yet another service that needs to be managed and
deployed by all installations of OpenStack.

Best,
-jay

> There are a lot of possible issues with “store limits in Keystone”
> approach. But all of them can be discussed
> and such discussion should lead to the good solution for quotas
> management  in Openstack.
>
> Please take a look at the spec when you have time and share your ideas
> or concerns.
>
> [1] https://review.openstack.org/#/c/363765/
>
>
> Kind regards,
> Kseniya
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-13 Thread Jay Pipes

On 12/13/2016 08:09 AM, Kseniya Tychkova wrote:

Hi,
I would like to share a spec [1] with you.
The main idea of this spec is to start a discussion about quota
management in the OpenStack.

Quotas are scattered across OpenStack services. Each service defines
its own model and API for
managing resource's limits. Because of that, there are several problems:

  * Names of the resources and resource-service mapping  are hardcoded.
They are hardcoded in the service code (Nova, for example) and it
should be hardcoded in the client code (Horizon, for example).

  * There is no centralized quota management for OpenStack projects.
  * Cinder, Nova and Neutron support (or going to support) hierarchical
quotas in different ways.

There should be a single point of managing quotas in OpenStack.
Keystone looks like a proper place to store resource's limits because:

  * Keystone stores projects
  * Limits belong to a project.


Another excellent reason to store quota limits in Keystone is because 
virtually all non-list operations require some interaction with quota 
limits, and requiring Nova (or Cinder or Neutron) to call out to yet 
another service each time the user makes one of those non-list 
operations is sub-optimal when Nova is already making a call to Keystone 
for authentication.


The alternative is to have a separate REST API service just for storing 
and returning the quota limits and have Nova, Cinder and Neutron call 
this new service each time a non-list operation is made. While this is 
possible, it's just yet another service that needs to be managed and 
deployed by all installations of OpenStack.


Best,
-jay
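
As a rough sketch of the flow being argued for here: a service that already
holds a keystoneauth session for authentication could reuse that same session
to read the project's limit before accepting a request, keeping only the usage
counting local. The /limits path, its response shape and the resource names
below are assumptions made purely for illustration, not a confirmed Keystone
API:

from keystoneauth1 import identity
from keystoneauth1 import session


def get_project_limit(sess, project_id, resource_name, default=10):
    # Hypothetical call: ask Keystone for the limit registered for this
    # project/resource pair, falling back to a default if none is set.
    resp = sess.get('/limits',
                    endpoint_filter={'service_type': 'identity'},
                    params={'project_id': project_id,
                            'resource_name': resource_name})
    limits = resp.json().get('limits', [])
    return limits[0]['resource_limit'] if limits else default


def check_quota(sess, project_id, resource_name, in_use, requested):
    # Usage counting stays in the owning service (Nova, Cinder, ...);
    # only the limit is read from the central store.
    limit = get_project_limit(sess, project_id, resource_name)
    if in_use + requested > limit:
        raise Exception('Quota exceeded for %s: %d in use + %d requested > %d'
                        % (resource_name, in_use, requested, limit))


auth = identity.Password(auth_url='http://controller:5000/v3',
                         username='nova', password='secret',
                         project_name='service',
                         project_domain_id='default',
                         user_domain_id='default')
sess = session.Session(auth=auth)
check_quota(sess, 'some-project-id', 'instances', in_use=8, requested=1)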


There are a lot of possible issues with “store limits in Keystone”
approach. But all of them can be discussed
and such discussion should lead to the good solution for quotas
management  in Openstack.

Please take a look at the spec when you have time and share your ideas
or concerns.

[1] https://review.openstack.org/#/c/363765/


Kind regards,
Kseniya






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova]Quotas: Store resources and limits in the Keystone

2016-12-13 Thread Kseniya Tychkova
Hi,
I would like to share a spec [1] with you.
The main idea of this spec is to start a discussion about quota management
in OpenStack.

Quotas are scattered across OpenStack services. Each service defines its
own model and API for
managing resource's limits. Because of that, there are several problems:

   - Names of the resources and resource-service mapping  are hardcoded.
   They are hardcoded in the service code (Nova, for example) and it should be
   hardcoded in the client code (Horizon, for example).


   - There is no centralized quota management for OpenStack projects.
   - Cinder, Nova and Neutron support (or going to support) hierarchical
   quotas in different ways.

There should be a single point of managing quotas in OpenStack.
Keystone looks like a proper place to store resource's limits because:

   - Keystone stores projects
   - Limits belong to a project.


There are a lot of possible issues with the “store limits in Keystone”
approach. But all of them can be discussed,
and such discussion should lead to a good solution for quota management
in OpenStack.

Please take a look at the spec when you have time and share your ideas or
concerns.

[1] https://review.openstack.org/#/c/363765/


Kind regards,
Kseniya
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-09 Thread Dean Troyer
On Tue, Nov 8, 2016 at 4:12 PM, Gage Hugo  wrote:
> The idea is that a cloud admin could define a list of keys that they need
> for their setup within keystone's configuration file, then only those keys
> will be valid for storing values in the project properties table.  Then each
> call would check against the list of valid keys and deny any calls that are
> sent with an invalid key.

Please do not do this; it throws fuel onto the interoperability fire
we still have not put out.

One actual technical drawback to doing this is that none of the
OpenStack services can depend on any of those keys actually being
defined, so this is still effectively just a single-deployment
'extras' field, with the only advance being that all rows may have the
same set of keys, depending on how the configuration changes over
time.

> This idea seems to help with the issue to avoid allowing anyone to throw any
> arbitrary values into these project properties vs just a set number of
> values.

It may feel that way, but it really does not help at all.  From a
cloud consumer point of view (including tools developed and used by
deployers, possibly across multiple cloud deployments) this is no help
at all.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-08 Thread Matt Riedemann

On 11/8/2016 7:14 PM, Adrian Turjak wrote:



On 09/11/16 11:12, Gage Hugo wrote:

This spec was discussed at the keystone meeting today and during the
conversation that continued afterwards, an idea of using the keystone
configuration to set a list of keys was mentioned.

The idea is that a cloud admin could define a list of keys that they
need for their setup within keystone's configuration file, then only
those keys will be valid for storing values in the project properties
table.  Then each call would check against the list of valid keys and
deny any calls that are sent with an invalid key.

This idea seems to help with the issue to avoid allowing anyone to
throw any arbitrary values into these project properties vs just a set
number of values.


That feels far more restricting than it needs to be...

If done like this, the list should be optional, as having to restart
Keystone to register the new config if you decide you need to add
additional values is a terrible approach.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Agree, whitelisting this in config sounds like a really bad idea.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-08 Thread Adrian Turjak


On 09/11/16 11:12, Gage Hugo wrote:
> This spec was discussed at the keystone meeting today and during the
> conversation that continued afterwards, an idea of using the keystone
> configuration to set a list of keys was mentioned.
>
> The idea is that a cloud admin could define a list of keys that they
> need for their setup within keystone's configuration file, then only
> those keys will be valid for storing values in the project properties
> table.  Then each call would check against the list of valid keys and
> deny any calls that are sent with an invalid key.
>
> This idea seems to help with the issue to avoid allowing anyone to
> throw any arbitrary values into these project properties vs just a set
> number of values.

That feels far more restricting than it needs to be...

If done like this, the list should be optional, as having to restart
Keystone to register the new config if you decide you need to add
additional values is a terrible approach.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-08 Thread Matt Riedemann

On 11/8/2016 4:12 PM, Gage Hugo wrote:

This spec was discussed at the keystone meeting today and during the
conversation that continued afterwards, an idea of using the keystone
configuration to set a list of keys was mentioned.

The idea is that a cloud admin could define a list of keys that they
need for their setup within keystone's configuration file, then only
those keys will be valid for storing values in the project properties
table.  Then each call would check against the list of valid keys and
deny any calls that are sent with an invalid key.

This idea seems to help with the issue to avoid allowing anyone to throw
any arbitrary values into these project properties vs just a set number
of values.



So...completely undiscoverable and cloud-specific? That doesn't sound 
very interoperable.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-08 Thread gordon chung


On 04/11/16 08:15 PM, Steve Martinelli wrote:
>
> We have somewhat had support for this, we have an "extras" column
> defined in our database schema, whatever a user puts in a request that
> doesn't match up with our API, those key-values are dumped into the
> "extras" column. It's not a pleasant user experience, since you can't
> really "unset" the data easily, or grab it, or update it. There's
> actually been patches to keystoneclient for getting around this, but its
> rather hacky and hardcodes a lot of values [2] [3]

we've been storing metadata/attributes/properties in Ceilometer and 
Gnocchi. in Ceilometer, we just flattened the json and built keys based 
on that which allowed you to index and unset/set things. that said, it 
wasn't that great in Ceilometer because allowing it to be completely 
free-form just encouraged the practice of dumping useless information in it.
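
A toy version of that flattening trick looks roughly like the function below;
the separator and naming are assumptions for illustration, not Ceilometer's
actual code:

def flatten(d, parent_key='', sep='.'):
    # Collapse nested dicts into dotted keys so each leaf can be
    # indexed, set or unset individually.
    items = {}
    for k, v in d.items():
        key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, key, sep=sep))
        else:
            items[key] = v
    return items


print(flatten({'server': {'flavor': {'ram': 512}, 'name': 'web1'}}))
# {'server.flavor.ram': 512, 'server.name': 'web1'}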

in Gnocchi, we support dynamically adding attributes as well but you 
must explicitly tell it to add the attribute to the resource. i won't 
lie, i don't know exactly how the magic works (you'll have to ask 
sileht), but it basically creates columns/tables in the db based on the 
request.

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-08 Thread Gage Hugo
This spec was discussed at the keystone meeting today and during the
conversation that continued afterwards, an idea of using the keystone
configuration to set a list of keys was mentioned.

The idea is that a cloud admin could define a list of keys that they need
for their setup within keystone's configuration file, then only those keys
will be valid for storing values in the project properties table.  Then
each call would check against the list of valid keys and deny any calls
that are sent with an invalid key.

This idea seems to help with the issue to avoid allowing anyone to throw
any arbitrary values into these project properties vs just a set number of
values.
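
To make the proposal concrete, a minimal sketch of that validation step could
look like the following; the option name, the config group and the helper
function are hypothetical, used only to illustrate the idea under discussion:

from oslo_config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.ListOpt('allowed_project_property_keys',
                default=[],
                help='Hypothetical option: keys accepted for project '
                     'properties.'),
], group='resource')
CONF([])   # parse nothing; a real service would load keystone.conf here


def validate_property_keys(properties):
    # Reject any key that is not in the operator-defined whitelist.
    allowed = set(CONF.resource.allowed_project_property_keys)
    invalid = set(properties) - allowed
    if invalid:
        # In the API this would translate into an HTTP 400 response.
        raise ValueError('invalid property keys: %s'
                         % ', '.join(sorted(invalid)))


CONF.set_override('allowed_project_property_keys',
                  ['created_by', 'partner_id'], group='resource')
validate_property_keys({'created_by': 'alice'})       # accepted
validate_property_keys({'favourite_colour': 'blue'})  # raises ValueError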

On Sun, Nov 6, 2016 at 6:25 PM, Matt Riedemann 
wrote:

> On 11/4/2016 7:15 PM, Steve Martinelli wrote:
>
>> The keystone team has a new spec being proposed for the Ocata release,
>> it essentially boils down to adding properties / metadata for projects
>> (for now) [1].
>>
>> We have somewhat had support for this, we have an "extras" column
>> defined in our database schema, whatever a user puts in a request that
>> doesn't match up with our API, those key-values are dumped into the
>> "extras" column. It's not a pleasant user experience, since you can't
>> really "unset" the data easily, or grab it, or update it. There's
>> actually been patches to keystoneclient for getting around this, but its
>> rather hacky and hardcodes a lot of values [2] [3]
>>
>> I've added nova and cinder here since the APIs that are being proposed
>> are more or less carbon copies of what is available through their APIs
>> (for server and volumes, respectively). What has been your project's
>> experience with handling metadata / properties? I assume that its been
>> there a while and you can't remove it. If you could go back and redo
>> things, would you do it another way? Would you take a more purist stance
>> and enforce more strict APIs, metadata be damned?
>>
>> I also added horizon because i'm curious about the impact this causes
>> when representing a resource.
>>
>> Personally, I am for the idea, we've had numerous requests from
>> operators about providing this support and I like to make them happy.
>>
>> [1] https://review.openstack.org/#/c/36/
>> [2] https://review.openstack.org/#/c/375239/
>> [3] https://review.openstack.org/#/c/296246/
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> If you're going to do it, restrict the case and characters in the keys
> because if you don't you can run into backend database wrinkles. See
> this nova spec for details:
>
> https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/lowercase-metadata-keys.html
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-06 Thread Matt Riedemann

On 11/4/2016 7:15 PM, Steve Martinelli wrote:

The keystone team has a new spec being proposed for the Ocata release,
it essentially boils down to adding properties / metadata for projects
(for now) [1].

We have somewhat had support for this, we have an "extras" column
defined in our database schema, whatever a user puts in a request that
doesn't match up with our API, those key-values are dumped into the
"extras" column. It's not a pleasant user experience, since you can't
really "unset" the data easily, or grab it, or update it. There's
actually been patches to keystoneclient for getting around this, but its
rather hacky and hardcodes a lot of values [2] [3]

I've added nova and cinder here since the APIs that are being proposed
are more or less carbon copies of what is available through their APIs
(for server and volumes, respectively). What has been your project's
experience with handling metadata / properties? I assume that its been
there a while and you can't remove it. If you could go back and redo
things, would you do it another way? Would you take a more purist stance
and enforce more strict APIs, metadata be damned?

I also added horizon because i'm curious about the impact this causes
when representing a resource.

Personally, I am for the idea, we've had numerous requests from
operators about providing this support and I like to make them happy.

[1] https://review.openstack.org/#/c/36/
[2] https://review.openstack.org/#/c/375239/
[3] https://review.openstack.org/#/c/296246/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're going to do it, restrict the case and characters in the keys 
because if you don't you can run into backend database wrinkles. 
See this nova spec for details:


https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/lowercase-metadata-keys.html

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-06 Thread Adrian Turjak
On 06/11/16 13:17, Steve Martinelli wrote:
>
> Interesting, I'll add this to the review and see if some of the folks
> proposing the new APIs would find that suitable for their use
> cases. For reference: http://developer.openstack.org/api-ref/compute/

For our use case, I need key:value pairings, so something like the tags
system wouldn't quite work. That said, I kind of like the hardcoded
limit of: "Each server can have up to 50 tags."

The examples currently that we use (or would like to use if we could
limit access to certain roles):

Project (in extra):

created_at: <timestamp>
created_by: <user>@<domain>
terminated_at: <timestamp>
terminated_by: <user>@<domain>
terminated_reason: <reason>
sign_up_type: <"individual" or "organisation">
organisation: <organisation name>
partner_id: <id>

User (in extra):

created_at: <timestamp>
created_by: <user>@<domain>
invited_by: <user>@<domain> of person who sent invitation.
terminated_at: <timestamp>
terminated_by: <user>@<domain>
terminated_reason: <reason>

Chances are we will be adding more as well. Right now part of the
problem is that a user can do project get for their own project, and
will see values in extra, which means we can't store anything in there
we don't want the clients or their users to see.

I will point out though that some of this stuff we keep in our ERP
system as well, but it is far less flexible than OpenStack and much of
that info we'd like to keep synced in both places so that it is easy to
query from either direction. This makes audit trails easier and allows a
"project show" to tell us what we need to know about a project without
going to the ERP system as well (which not everyone has access to anyway).

Also worth noting is that the reason most of this works, and is actually
enforced, is that we don't use Keystone directly for project/user
creation/management. We have a service that handles the automation of
admin tasks and automates most of this via the Keystoneclient. We do
still have people with actual admin access who do occasionally change
things manually, but we are doing more and more via this service both
for consistency, and to track who did what when.

Doing all of the above via the proposed new API would be easy, and while
the datetime values won't themselves be queryable, the
"created_at"/"updated_at" values on the property will be. So I can do a
query along the lines of:
projects where properties have "terminated_at" and property updated_at
>= <some timestamp>;


Doing this via swift is... I guess I could store a list of each property
in a file, and then parse the contents. For straightforward tags, that
would be fine, but not for key:value pairs where the contents of the
value will be different.

I could probably do it by making the files instead be more of a reverse
mapping, where I make a container for each resource type and have the
file name as "_" (eg "terminated_by_@")
with the file itself containing a list of resource ids. That would at
least make things less awful to search for, but it would still be MUCH
slower than if these were proper Keystone database entries. Not to
mention doing it in swift would make it hard to expose to anything but
the project where the swift data is stored in. I'd need to build a
service to handle these queries for me, and it would need to be built in
a service project so it has access to swift, but exposes its API to
OpenStack.

So not Swift I think.

>
> I am most concerned actually about the resistance from some in the
> Keystone contributor community to storing quota *limits* [1] for
> users and projects. Right now, every service project needs to
> store information about quota limits for all users and projects,
> and the services each do this annoyingly differently. Keystone is
> the thing that stores attributes of a user or a project. Limits of
> various quantitative resources in the system are an attribute of a
> user or a project. This information belongs in Keystone, IMHO,
> with a good REST API that other services can use to grab this
> information.
>
>
> Actually, this summit was the first I've heard of it (more so than
> just a passing idea with no one up for doing the work). We talked
> about it at our unconference session and Boris Bobrov (breton) has a
> few TODOs on the topic (post to ML and create a
> spec https://etherpad.openstack.org/p/ocata-keystone-unconference )
>

Storing limits (quotas) in Keystone feels wrong, although I can't place
my finger on why. While yes they are sort of attributes of a project,
they aren't exactly identity or access attributes. I do think we need to
centralise them, I just don't know if Keystone is exactly the place for
it, although I agree that there isn't a better place right now. Plus
centralising them might actually mean we can do hierarchical quotas!

As odd as it may sound, what if we considered that limits are a form of
dynamic policy? And migrate to treating resource limits as such. Combine
that with a general shift to centralised dynamic policies in Keystone,
and then it sort of feels better. It would be a massive effort, but it
could mean per 

Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-05 Thread Steve Martinelli
On Sat, Nov 5, 2016 at 6:15 PM, Jay Pipes  wrote:

> On 11/05/2016 01:15 AM, Steve Martinelli wrote:
>
>> The keystone team has a new spec being proposed for the Ocata release,
>> it essentially boils down to adding properties / metadata for projects
>> (for now) [1].
>>
>
> Yes, I'd seen that particular spec review and found it interesting in a
> couple ways.


Please comment on it :)

I've added nova and cinder here since the APIs that are being proposed
>> are more or less carbon copies of what is available through their APIs
>> (for server and volumes, respectively). What has been your project's
>> experience with handling metadata / properties? I assume that its been
>> there a while and you can't remove it. If you could go back and redo
>> things, would you do it another way? Would you take a more purist stance
>> and enforce more strict APIs, metadata be damned?
>>
>
> Yes. I would get rid of the server metadata API that is in the Compute
> API. I believe the server tags API in the Compute API is appropriate for
> user-defined taxonomy of servers. For non user-defined things like system
> metadata, I prefer to have schema-defined attributes that are standardized
> and typed but a structured "properties" API can be useful as long as the
> key and value fields are indexable and reasonably sized.
>

Interesting, I'll add this to the review and see if some of the folks
proposing the new APIs would find that suitable for their use cases. For
reference: http://developer.openstack.org/api-ref/compute/


> I also added horizon because i'm curious about the impact this causes
>> when representing a resource.
>>
>> Personally, I am for the idea, we've had numerous requests from
>> operators about providing this support and I like to make them happy.
>>
>
> I am most concerned actually about the resistance from some in the
> Keystone contributor community to storing quota *limits* [1] for users and
> projects. Right now, every service project needs to store information about
> quota limits for all users and projects, and the services each do this
> annoyingly differently. Keystone is the thing that stores attributes of a
> user or a project. Limits of various quantitative resources in the system
> are an attribute of a user or a project. This information belongs in
> Keystone, IMHO, with a good REST API that other services can use to grab
> this information.
>

Actually, this summit was the first I've heard of it (more so than just a
passing idea with no one up for doing the work). We talked about it at our
unconference session and Boris Bobrov (breton) has a few TODOs on the topic
(post to ML and create a spec
https://etherpad.openstack.org/p/ocata-keystone-unconference )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-05 Thread Jay Pipes

On 11/05/2016 01:15 AM, Steve Martinelli wrote:

The keystone team has a new spec being proposed for the Ocata release,
it essentially boils down to adding properties / metadata for projects
(for now) [1].


Yes, I'd seen that particular spec review and found it interesting in a 
couple ways.



We have somewhat had support for this, we have an "extras" column
defined in our database schema, whatever a user puts in a request that
doesn't match up with our API, those key-values are dumped into the
"extras" column. It's not a pleasant user experience, since you can't
really "unset" the data easily, or grab it, or update it. There's
actually been patches to keystoneclient for getting around this, but its
rather hacky and hardcodes a lot of values [2] [3]


"not a pleasant user experience" would be an understatement :)

In addition to the unpleasant user experience, there is the additional 
problem that jamming such information into a JSON BLOB and storing it in 
a TEXT field in a relational database means none of the information 
stored in the field can be indexed which means there's no ability to 
search on particular key or value information.



I've added nova and cinder here since the APIs that are being proposed
are more or less carbon copies of what is available through their APIs
(for server and volumes, respectively). What has been your project's
experience with handling metadata / properties? I assume that its been
there a while and you can't remove it. If you could go back and redo
things, would you do it another way? Would you take a more purist stance
and enforce more strict APIs, metadata be damned?


Yes. I would get rid of the server metadata API that is in the Compute 
API. I believe the server tags API in the Compute API is appropriate for 
user-defined taxonomy of servers. For non user-defined things like 
system metadata, I prefer to have schema-defined attributes that are 
standardized and typed, but a structured "properties" API can be useful as 
long as the key and value fields are indexable and reasonably sized.
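
A rough sketch of that shape, as opposed to a JSON blob in an "extras" TEXT
column, would be a dedicated table with bounded, indexed key and value
columns. The table and column names below are illustrative only, not an
actual Keystone schema:

import sqlalchemy as sa

metadata = sa.MetaData()

project = sa.Table(
    'project', metadata,
    sa.Column('id', sa.String(64), primary_key=True),
)

# One row per key, rather than one opaque blob per project, so both the
# key and the value can be indexed, queried, updated and unset.
project_property = sa.Table(
    'project_property', metadata,
    sa.Column('project_id', sa.String(64), sa.ForeignKey('project.id'),
              primary_key=True),
    sa.Column('key', sa.String(255), primary_key=True),
    sa.Column('value', sa.String(255), nullable=False),
    sa.Index('ix_project_property_key_value', 'key', 'value'),
)

engine = sa.create_engine('sqlite://')
metadata.create_all(engine)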



I also added horizon because i'm curious about the impact this causes
when representing a resource.

Personally, I am for the idea, we've had numerous requests from
operators about providing this support and I like to make them happy.


I am most concerned actually about the resistance from some in the 
Keystone contributor community to storing quota *limits* [1] for users 
and projects. Right now, every service project needs to store 
information about quota limits for all users and projects, and the 
services each do this annoyingly differently. Keystone is the thing that 
stores attributes of a user or a project. Limits of various quantitative 
resources in the system are an attribute of a user or a project. This 
information belongs in Keystone, IMHO, with a good REST API that other 
services can use to grab this information.


Best,
-jay

[1] limits, not usages.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova][cinder][horizon][all] properties / metadata for resources

2016-11-04 Thread Steve Martinelli
The keystone team has a new spec being proposed for the Ocata release, it
essentially boils down to adding properties / metadata for projects (for
now) [1].

We have somewhat had support for this, we have an "extras" column defined
in our database schema, whatever a user puts in a request that doesn't
match up with our API, those key-values are dumped into the "extras"
column. It's not a pleasant user experience, since you can't really "unset"
the data easily, or grab it, or update it. There's actually been patches to
keystoneclient for getting around this, but its rather hacky and hardcodes
a lot of values [2] [3]

I've added nova and cinder here since the APIs that are being proposed are
more or less carbon copies of what is available through their APIs (for
server and volumes, respectively). What has been your project's experience
with handling metadata / properties? I assume that its been there a while
and you can't remove it. If you could go back and redo things, would you do
it another way? Would you take a more purist stance and enforce more strict
APIs, metadata be damned?

I also added horizon because i'm curious about the impact this causes when
representing a resource.

Personally, I am for the idea, we've had numerous requests from operators
about providing this support and I like to make them happy.

[1] https://review.openstack.org/#/c/36/
[2] https://review.openstack.org/#/c/375239/
[3] https://review.openstack.org/#/c/296246/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-10-27 Thread Bashmakov, Alexander
Hi Jay,

Thanks for the explanation. While I agree that there is a distinction between a 
distributed architecture like Nova and a centralized one like Glance, I would 
respectfully disagree with the statement that Glance cannot participate in 
rolling upgrades in a very similar fashion. We are currently working on a 
rolling upgrade POC in Glance (https://review.openstack.org/331740/). To date, 
we've successfully been able to run through a simple scenario with two Glance 
nodes running Newton and Ocata code base respectively. The latter introduces 
schema changes which are reconciled in the DB via a two-way trigger.

Regards,
Alex

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, October 14, 2016 1:56 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: 
database triggers and oslo.versionedobjects

Alex, so sorry for the long delayed response! :( This just crept to the back of 
my inbox unfortunately. Answer inline...

On 09/14/2016 07:24 PM, Bashmakov, Alexander wrote:
>> Glance and Keystone do not participate in a rolling upgrade, because 
>> Keystone and Glance do not have a distributed component architecture. 
>> Online data migrations will reduce total downtime experienced during 
>> an *overall upgrade procedure* for an OpenStack cloud, but Nova, 
>> Neutron and Cinder are the only parts of OpenStack that are going to 
>> participate in a rolling upgrade because they are the services that 
>> are distributed across all the many compute nodes.
>
> Hi Jay, I'd like to better understand why your definition of rolling 
> upgrades excludes Glance and Keystone? Granted they don't run multiple 
> disparate components over distributed systems, however, they can still 
> run the same service on multiple distributed nodes. So a rolling 
> upgrade can still be applied on a large cloud that has, for instance 
> 50 Glance nodes.

If you've seen a cloud with 50 Glance nodes, I would be astonished :) That 
said, the number 50 doesn't really have to do with my definition of rolling... 
lemme explain.

The primary thing that, to me at least, differentiates rolling upgrades of 
distributed software is that different nodes can contain multiple versions of 
the software and continue to communicate with other nodes in the system without 
issue.

In the case of Glance, you cannot have different versions of the Glance service 
running simultaneously within an environment, because those Glance services 
each directly interface with the Glance database and therefore expect the 
Glance DB schema to look a particular way for a specific version of the Glance 
service software.

In contrast, Nova's distributed service nodes -- the nova-compute services and 
(mostly) the nova-api services do *not* talk directly to the Nova database. If 
those services need to get or set data in the database, they communicate with 
the nova-conductor services which are responsible for translating (called 
back-versioning) the most updated object model schema that matches the Nova 
database to the schema that the calling node understands. This means that Nova 
deployers can update the Nova database schema and not have to at the same time 
update the software on the distributed compute nodes. In this way deployers can 
"roll out" an upgrade of the Nova software across many hundreds of compute 
nodes over an extended period of time without needing to restart/upgrade 
services all at once.

Hope this clarifies things.

Best,
-jay

p.s. I see various information on the web referring to "rolling updates" 
or "rolling releases" as simply the process of continuously applying new 
versions of software to a deployment. This is decidedly *not* what I refer to 
as a "rolling upgrade". Perhaps we should invent a different term from "rolling 
upgrade" to refer to the attributes involved in being able to run multiple 
versions of distributed software with no impact on the control plane? Is that 
what folks call a "partial upgrade"? Not sure...

  > In this case different versions of the
> same service will run on different nodes simultaneously. Regards, Alex



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-10-16 Thread Duncan Thomas
On 14 October 2016 at 23:55, Jay Pipes  wrote:

> The primary thing that, to me at least, differentiates rolling upgrades of
> distributed software is that different nodes can contain multiple versions
> of the software and continue to communicate with other nodes in the system
> without issue.
>
> In the case of Glance, you cannot have different versions of the Glance
> service running simultaneously within an environment, because those Glance
> services each directly interface with the Glance database and therefore
> expect the Glance DB schema to look a particular way for a specific version
> of the Glance service software.
>

Cinder services can run N+-1 versions in a mixed manner, all talking to the
 same database, no conductor required.



-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-10-15 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-10-14 16:55:39 -0400:
> Alex, so sorry for the long delayed response! :( This just crept to
> the back of my inbox unfortunately. Answer inline...
> 
> On 09/14/2016 07:24 PM, Bashmakov, Alexander wrote:
> >> Glance and Keystone do not participate in a rolling upgrade,
> >> because Keystone and Glance do not have a distributed component
> >> architecture. Online data migrations will reduce total downtime
> >> experienced during an *overall upgrade procedure* for an OpenStack
> >> cloud, but Nova, Neutron and Cinder are the only parts of OpenStack
> >> that are going to participate in a rolling upgrade because they are
> >> the services that are distributed across all the many compute
> >> nodes.
> >
> > Hi Jay, I'd like to better understand why your definition of rolling
> > upgrades excludes Glance and Keystone? Granted they don't run
> > multiple disparate components over distributed systems, however, they
> > can still run the same service on multiple distributed nodes. So a
> > rolling upgrade can still be applied on a large cloud that has, for
> > instance 50 Glance nodes.
> 
> If you've seen a cloud with 50 Glance nodes, I would be astonished :) 
> That said, the number 50 doesn't really have to do with my definition of 
> rolling... lemme explain.
> 
> The primary thing that, to me at least, differentiates rolling upgrades 
> of distributed software is that different nodes can contain multiple 
> versions of the software and continue to communicate with other nodes in 
> the system without issue.
> 

Databases are often (mis)used to communicate.

> In the case of Glance, you cannot have different versions of the Glance 
> service running simultaneously within an environment, because those 
> Glance services each directly interface with the Glance database and 
> therefore expect the Glance DB schema to look a particular way for a 
> specific version of the Glance service software.
> 

That's not a constraint of Glance, but a constraint of the way Glance
has been interfacing with the database. The argument of the thread was
that one can make schema changes in such a way that one can have
multiple versions of the same component running during an update.

> In contrast, Nova's distributed service nodes -- the nova-compute 
> services and (mostly) the nova-api services do *not* talk directly to 
> the Nova database. If those services need to get or set data in the 
> database, they communicate with the nova-conductor services which are 
> responsible for translating (called back-versioning) the most updated 
> object model schema that matches the Nova database to the schema that 
> the calling node understands. This means that Nova deployers can update 
> the Nova database schema and not have to at the same time update the 
> software on the distributed compute nodes. In this way deployers can 
> "roll out" an upgrade of the Nova software across many hundreds of 
> compute nodes over an extended period of time without needing to 
> restart/upgrade services all at once.
> 
> Hope this clarifies things.
> 

It clarifies your thinking, so thanks for that. However, I'm not so sure
there's any difference between components that are the same software,
and components that are different software, if they end up interacting
anyway because one version can write and read data that another version
does.

What I think is important is understanding the interfaces, and how they
can be tested to ensure that rolling/partial/0-downtime updates can be
done safely.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-10-14 Thread Jay Pipes

Alex, so sorry for the long delayed response! :( This just crept to
the back of my inbox unfortunately. Answer inline...

On 09/14/2016 07:24 PM, Bashmakov, Alexander wrote:

Glance and Keystone do not participate in a rolling upgrade,
because Keystone and Glance do not have a distributed component
architecture. Online data migrations will reduce total downtime
experienced during an *overall upgrade procedure* for an OpenStack
cloud, but Nova, Neutron and Cinder are the only parts of OpenStack
that are going to participate in a rolling upgrade because they are
the services that are distributed across all the many compute
nodes.


Hi Jay, I'd like to better understand why your definition of rolling
upgrades excludes Glance and Keystone? Granted they don't run
multiple disparate components over distributed systems, however, they
can still run the same service on multiple distributed nodes. So a
rolling upgrade can still be applied on a large cloud that has, for
instance 50 Glance nodes.


If you've seen a cloud with 50 Glance nodes, I would be astonished :) 
That said, the number 50 doesn't really have to do with my definition of 
rolling... lemme explain.


The primary thing that, to me at least, differentiates rolling upgrades 
of distributed software is that different nodes can contain multiple 
versions of the software and continue to communicate with other nodes in 
the system without issue.


In the case of Glance, you cannot have different versions of the Glance 
service running simultaneously within an environment, because those 
Glance services each directly interface with the Glance database and 
therefore expect the Glance DB schema to look a particular way for a 
specific version of the Glance service software.


In contrast, Nova's distributed service nodes -- the nova-compute 
services and (mostly) the nova-api services do *not* talk directly to 
the Nova database. If those services need to get or set data in the 
database, they communicate with the nova-conductor services which are 
responsible for translating (called back-versioning) the most updated 
object model schema that matches the Nova database to the schema that 
the calling node understands. This means that Nova deployers can update 
the Nova database schema and not have to at the same time update the 
software on the distributed compute nodes. In this way deployers can 
"roll out" an upgrade of the Nova software across many hundreds of 
compute nodes over an extended period of time without needing to 
restart/upgrade services all at once.
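
As a small, self-contained illustration of that back-versioning step,
oslo.versionedobjects lets an object strip fields an older node would not
understand before the primitive goes over the wire. The Instance class and
its fields below are invented for the example; they are not Nova's real
object definition:

from oslo_versionedobjects import base as ovo_base
from oslo_versionedobjects import fields


@ovo_base.VersionedObjectRegistry.register
class Instance(ovo_base.VersionedObject):
    # Version 1.1 added 'task_state'; a node still running 1.0 code
    # must never receive that field.
    VERSION = '1.1'
    fields = {
        'uuid': fields.StringField(),
        'host': fields.StringField(),
        'task_state': fields.StringField(nullable=True),
    }

    def obj_make_compatible(self, primitive, target_version):
        super(Instance, self).obj_make_compatible(primitive, target_version)
        if target_version == '1.0':
            primitive.pop('task_state', None)


inst = Instance(uuid='abc-123', host='compute1', task_state='building')
# What a conductor-style service would hand to an old (1.0) node:
print(inst.obj_to_primitive(target_version='1.0'))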


Hope this clarifies things.

Best,
-jay

p.s. I see various information on the web referring to "rolling updates" 
or "rolling releases" as simply the process of continuously applying new 
versions of software to a deployment. This is decidedly *not* what I 
refer to as a "rolling upgrade". Perhaps we should invent a different 
term from "rolling upgrade" to refer to the attributes involved in being 
able to run multiple versions of distributed software with no impact on 
the control plane? Is that what folks call a "partial upgrade"? Not sure...


> In this case different versions of the
> same service will run on different nodes simultaneously. Regards,
> Alex




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-26 Thread rezroo
I am still confused how the "cloud admin" role is fulfilled in Liberty 
release. For example, I used "nova --debug delete" to see how the 
project:admin/user:admin deletes an instance of the demo project. 
Basically, we use the project:admin/user:admin token to get a list of 
instances for all tenants and then reference the instance of demo using 
the admin project tenant-id in the:


curl -g -i -X DELETE 
http://172.31.5.216:8774/v2.1/85b0992a5845455083db84d909c218ab/servers/6c876149-ecc4-4467-b727-9dff7b059390


So 85b0992a5845455083db84d909c218ab is admin tenant id, and 
6c876149-ecc4-4467-b727-9dff7b059390 is owned by demo project.


I am able to reproduce this using curl commands - but what's confusing 
me is that the token I get from keystone clearly shows is_admin is 0:


"user": {"username": "admin", "roles_links": [], "id": 
"9b29c721bc3844a784dcffbb8c8a47f8", "roles": [{"name": "admin"}], 
"name": "admin"}, "metadata": {"is_admin": 0, "roles": 
["6a6893ea36394a2ab0b93d225ab01e25"]}}}


And the rules for compute:delete seem to require is_admin to be true. 
nova/policy.json has two rules for "compute:delete":


Line 81: "compute:delete": "rule:admin_or_owner",
Line 88: "compute:delete": "",

First question - why is line 88 needed?

Second, on line 3 the admin_or_owner definition requires is_admin to be true:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s",

which, if my understanding is correct, is never true unless the keystone 
admin_token is used, and is certainly not true for the token I got using 
curl. So why is my curl request using this token able to delete the 
instance?
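
To see how that rule string is actually evaluated, here is a small standalone
sketch using oslo.policy. The credentials and target dicts are simplified
stand-ins for what nova builds from the token and the instance, not nova's
real enforcement path:

from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.ConfigOpts()
CONF([])
enforcer = policy.Enforcer(CONF)
enforcer.register_defaults([
    policy.RuleDefault('admin_or_owner',
                       'is_admin:True or project_id:%(project_id)s'),
    policy.RuleDefault('compute:delete', 'rule:admin_or_owner'),
])

# Credentials roughly as derived from the token: role admin, is_admin False.
creds = {'user_id': 'admin-user-id', 'project_id': 'admin-project-id',
         'roles': ['admin'], 'is_admin': False}

# Target owned by the same project as the token: the owner half passes.
print(enforcer.enforce('compute:delete',
                       {'project_id': 'admin-project-id'}, creds))  # True

# Target owned by another project (e.g. demo): neither half passes.
print(enforcer.enforce('compute:delete',
                       {'project_id': 'demo-project-id'}, creds))   # False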


Thanks,

Reza


On 9/2/2016 12:51 PM, Morgan Fainberg wrote:


On Sep 2, 2016 09:39, "rezroo" wrote:

>
> Hello - I'm using Liberty release devstack for the below scenario. I 
have created project "abcd" with "john" as Member. I've launched one 
instance, I can use curl to list the instance. No problem.

>
> I then modify /etc/nova/policy.json and redefine "admin_or_owner" as 
follows:

>
> "admin_or_owner":  "role:admin or is_admin:True or 
project_id:%(project_id)s",

>
> My expectation was that I would be able to list the instance in abcd 
using a token of admin. However, when I use the token of user "admin" 
in project "admin" to list the instances I get the following error:

>
> stack@vlab:~/token$ curl 
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 
f221164cd9b44da6beec70d6e1f3382f"
> {"badRequest": {"message": "Malformed request URL: URL's project_id 
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id 
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}

>
> 378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and 
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.

>
> I'm confused by this behavior and the reported error, because if the 
project id used to acquire the token is the same as the project id in 
/servers/detail then I would be an "owner". So where is the "admin" in 
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever 
functionality "rule:admin_or_owner" allows in policy.json, regardless 
of the project id used to acquire the token?

>
> I do understand that I can use the admin user and project to get all 
instances of all tenants:
> curl 
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"

>
> My question is more centered around why nova has the additional 
check to make sure that the token project id matches the url project 
id - and whether this is a keystone requirement, or only nova/cinder 
and programs that have a project-id in their API choose to do this. In 
other words, is it the developers of each project that decide to only 
expose some APIs for administrative functionality (such all-tenants), 
but restrict everything else to owners, or keystone requires this check?

>
> Thanks,
>
> Reza
>
>

I believe this is a nova specific extra check. There is (iirc) a way 
to list out the instances for a given tenant but I do not recall the 
specifics.


Keystone does not know anything about the resource ownership in Nova. 
The Nova check is fully self-contained.


--Morgan
Please excuse brevity and typos, sent from a mobile device.



__

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Clint Byrum
Excerpts from Henry Nash's message of 2016-09-15 00:29:44 +0100:
> Jay,
> 
> I agree with your distinction - and when I am referring to rolling upgrades 
> for keystone I am referring to when you are running a cluster of keystones 
> (for performance and/or redundancy), and you want to roll the upgrade across 
> the cluster without creating downtime of the overall keystone service. Such a 
> keystone cluster deployment will be common in large clouds - and prior to 
> Newton, keystone did not support such a rolling upgrade (you had to take all 
> the nodes down, upgrade the DB and then boot them all back up). In order to 
> support such a rolling upgrade you either need to have code that can work on 
> different DB versions (either explicitly or via versioned objects), or you 
> hide the schema changes by “data synchronisation via Triggers”, which is 
> where this whole thread came from.
> 

It doesn't always need to be explicit or through versioned objects. One
can often manipulate the schema and even migrate data without disturbing
old code.
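
A hedged sketch of that expand-then-contract pattern, written as
alembic-style migration steps (the table and column names are invented),
might look like:

import sqlalchemy as sa
from alembic import op


def upgrade_expand():
    # Release N, run first: purely additive, so code that has never heard
    # of the new column keeps working against the same schema.
    op.add_column('project',
                  sa.Column('description_v2', sa.Text(), nullable=True))


def migrate_data(connection):
    # Online backfill, done outside the schema migration (ideally in
    # small batches) while old and new code are both still running.
    connection.execute(sa.text(
        'UPDATE project SET description_v2 = description '
        'WHERE description_v2 IS NULL'))


def upgrade_contract():
    # A later release, run last: remove the old column only once no
    # deployed code still reads or writes it.
    op.drop_column('project', 'description')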

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Henry Nash
Jay,

I agree with your distinction - and when I am referring to rolling upgrades for 
keystone I am referring to when you are running a cluster of keystones (for 
performance and/or redundancy), and you want to roll the upgrade across the 
cluster without creating downtime of the overall keystone service. Such a 
keystone cluster deployment will be common in large clouds - and prior to 
Newton, keystone did not support such a rolling upgrade (you had to take all 
the nodes down, upgrade the DB and then boot them all back up). In order to 
support such a rolling upgrade you either need to have code that can work on 
different DB versions (either explicitly or via versioned objects), or you hide 
the schema changes by “data synchronisation via Triggers”, which is where this 
whole thread came from.

Henry
> On 14 Sep 2016, at 23:08, Jay Pipes  wrote:
> 
> On 09/01/2016 05:29 AM, Henry Nash wrote:
>> So as the person who drove the rolling upgrade requirements into
>> keystone in this cycle (because we have real customers that need it),
>> and having first written the keystone upgrade process to be
>> “versioned object ready” (because I assumed we would do this the same
>> as everyone else), and subsequently re-written it to be “DB Trigger
>> ready”…and written migration scripts for both these cases for the (in
>> fact very minor) DB changes that keystone has in Newton…I guess I
>> should also weigh in here :-)
> 
> Sorry for delayed response. PTO and all... I'd just like to make a 
> clarification here. Henry, you are not referring to *rolling upgrades* but 
> rather *online database migrations*. There's an important distinction between 
> the two concepts.
> 
> Online schema migrations, as discussed in this thread, are all about 
> minimizing the time that a database server is locked or otherwise busy 
> performing the tasks of changing SQL schemas and moving the underlying stored 
> data from their old location/name to their new location/name. As noted in 
> this thread, there's numerous ways of reducing the downtime experienced 
> during these data and schema migrations.
> 
> Rolling upgrades are not the same thing, however. What rolling upgrades refer 
> to is the ability of a *distributed system* to have its distributed component 
> services running different versions of the software and still be able to 
> communicate with the other components of the system. This time period during 
> which the components of the distributed system may run different versions of 
> the software may be quite lengthy (days or weeks long). The "rolling" part of 
> "rolling upgrade" refers to the fact that in a distributed system of 
> thousands of components or nodes, the upgraded software must be "rolled out" 
> to those thousands of nodes over a period of time.
> 
> Glance and Keystone do not participate in a rolling upgrade, because Keystone 
> and Glance do not have a distributed component architecture. Online data 
> migrations will reduce total downtime experienced during an *overall upgrade 
> procedure* for an OpenStack cloud, but Nova, Neutron and Cinder are the only 
> parts of OpenStack that are going to participate in a rolling upgrade because 
> they are the services that are distributed across all the many compute nodes.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Bashmakov, Alexander
> Glance and Keystone do not participate in a rolling upgrade, because
> Keystone and Glance do not have a distributed component architecture.
> Online data migrations will reduce total downtime experienced during an
> *overall upgrade procedure* for an OpenStack cloud, but Nova, Neutron and
> Cinder are the only parts of OpenStack that are going to participate in a 
> rolling
> upgrade because they are the services that are distributed across all the
> many compute nodes.

Hi Jay,
I'd like to better understand why your definition of rolling upgrades excludes 
Glance and Keystone? Granted, they don't run multiple disparate components over 
distributed systems; however, they can still run the same service on multiple 
distributed nodes. So a rolling upgrade can still be applied on a large cloud 
that has, for instance, 50 Glance nodes. In this case different versions of the 
same service will run on different nodes simultaneously.
Regards,
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-14 Thread Jay Pipes

On 09/01/2016 05:29 AM, Henry Nash wrote:

So as the person who drove the rolling upgrade requirements into
keystone in this cycle (because we have real customers that need it),
and having first written the keystone upgrade process to be
“versioned object ready” (because I assumed we would do this the same
as everyone else), and subsequently re-written it to be “DB Trigger
ready”…and written migration scripts for both these cases for the (in
fact very minor) DB changes that keystone has in Newton…I guess I
should also weigh in here :-)


Sorry for delayed response. PTO and all... I'd just like to make a 
clarification here. Henry, you are not referring to *rolling upgrades* 
but rather *online database migrations*. There's an important 
distinction between the two concepts.


Online schema migrations, as discussed in this thread, are all about 
minimizing the time that a database server is locked or otherwise busy 
performing the tasks of changing SQL schemas and moving the underlying 
stored data from their old location/name to their new location/name. As 
noted in this thread, there's numerous ways of reducing the downtime 
experienced during these data and schema migrations.


Rolling upgrades are not the same thing, however. What rolling upgrades 
refer to is the ability of a *distributed system* to have its 
distributed component services running different versions of the 
software and still be able to communicate with the other components of 
the system. This time period during which the components of the 
distributed system may run different versions of the software may be 
quite lengthy (days or weeks long). The "rolling" part of "rolling 
upgrade" refers to the fact that in a distributed system of thousands of 
components or nodes, the upgraded software must be "rolled out" to those 
thousands of nodes over a period of time.


Glance and Keystone do not participate in a rolling upgrade, because 
Keystone and Glance do not have a distributed component architecture. 
Online data migrations will reduce total downtime experienced during an 
*overall upgrade procedure* for an OpenStack cloud, but Nova, Neutron 
and Cinder are the only parts of OpenStack that are going to participate 
in a rolling upgrade because they are the services that are distributed 
across all the many compute nodes.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-04 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-09-02 17:58:42 -0400:
> 
> On 09/02/2016 01:53 PM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2016-09-02 12:15:33 +0200:
> >> Sean Dague wrote:
> >>> Putting DB trigger failure analysis into the toolkit required to manage
> >>> an upgrade failure is a really high bar for new ops.
> >>
> >> I agree with Sean: increasing the variety of technologies used increases
> >> the system complexity, which in turn requires more skills to fully
> >> understand and maintain operationally. It should only be done as a last
> >> resort, with pros and cons carefully weighted. We really should involve
> >> operators in this discussion to get the full picture of arguments for
> >> and against.
> >>
> >
> > Yes, I would like to understand better what aspect of the approach
> > taken elsewhere is leading to the keystone team exploring other
> > options. So far I'm not seeing much upside to being different, and I'm
> > hearing a lot of cons.
> 
> I continue to maintain that the problems themselves being discussed at 
> https://review.openstack.org/#/c/331740/ are different than what has 
> been discussed in detail before.   To be "not different", this spec 
> would need to no longer discuss the concept of "we need N to be reading 
> from and writing to the old column to be compatible with N-1 as shown in 
> the below diagram...Once all the N-1 services are upgraded, N services 
> should be moved out of compatibility mode to use the new column. ". 
> To my knowledge, there are no examples of code in Openstack that 
> straddles table and column changes directly in the SQL access layer as 
> this document describes.There's still a handful of folks including 
> myself that think this is a new kind of awkwardness we've not had to 
> deal with yet.   My only ideas on how to reduce it is to put the N-1/N 
> differences on the write side, not the read side, and triggers are *not* 
> the only way to do it.   But if "being different" means, "doing it on 
> the write side", then it seems like that overall concept is being 
> vetoed.  Which I actually appreciate knowing up front before I spend a 
> lot of time on it.
> 

The example for glance shows where two entirely new objects have been
created for the database (community and shared images). The compatibility
mode flag in config is cool, I think operators deal with things like
that all the time, like when a new API version arrives and they might
not be ready to support it. I'd hope that having it turned off would
also restrict the API microversion if such a thing exists so that the
community/shared image types aren't allowed yet. This seems straightforward,
and I feel like the spec was good except for the addition of
extra layers.

In this case, I'd just create the new column nullable, and maintain
both.

* Add visibility column to schema (in spec, 'glance-manage db_expand')

* upgrade all API nodes

* run the migration code to resolve the null visibility columns
  (traditional "glance-manage db_migrate")

* advance compatibility mode to lowest commit that exists running
  against DB

* set visibility to be not null (I think this would be 'glance-manage
  db_contract latest_commit_desired')

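Sketched as alembic-style operations (with made-up 'images'/'is_public'/
'visibility' names standing in for the real schema -- this is an illustration
of the steps above, not glance's actual migration code), that looks roughly
like:

import sqlalchemy as sa
from alembic import op


def expand():
    # Purely additive: add the new column as nullable, so old code that
    # never writes it keeps working ("glance-manage db_expand").
    op.add_column('images',
                  sa.Column('visibility', sa.String(length=16),
                            nullable=True))


def migrate():
    # Backfill NULL visibility rows from the legacy flag
    # ("glance-manage db_migrate").
    images = sa.table('images',
                      sa.column('visibility', sa.String(length=16)),
                      sa.column('is_public', sa.Boolean()))
    op.execute(images.update()
               .where(images.c.visibility.is_(None))
               .where(images.c.is_public == sa.true())
               .values(visibility='public'))
    # Anything still NULL at this point was not public.
    op.execute(images.update()
               .where(images.c.visibility.is_(None))
               .values(visibility='private'))


def contract():
    # Only once every API node runs code that writes visibility
    # ("glance-manage db_contract").
    op.alter_column('images', 'visibility',
                    existing_type=sa.String(length=16), nullable=False)
    op.drop_column('images', 'is_public')
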
Where, in this scheme, do triggers come in?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-02 Thread Steve Martinelli
>
> On 09/02/2016 01:53 PM, Doug Hellmann wrote:
>
>> Excerpts from Thierry Carrez's message of 2016-09-02 12:15:33 +0200:
>
> I agree with Sean: increasing the variety of technologies used increases
>>> the system complexity, which in turn requires more skills to fully
>>> understand and maintain operationally. It should only be done as a last
>>> resort, with pros and cons carefully weighted. We really should involve
>>> operators in this discussion to get the full picture of arguments for
>>> and against.
>>>
>>
Two quick remarks about involving operators. First, see Matt Fischer's
reply to the thread: we have had a great operator-developer experience with
Matt (he was one of the first folks looking at Fernet tokens), and he
volunteered to test out any triggers we write on his MySQL Galera cluster.
Secondly, the use of triggers was brought up at the OpenStack Ansible
midcycle, where several operators were present, and as I understand it,
they felt positive about the idea.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-02 Thread Mike Bayer



On 09/02/2016 01:53 PM, Doug Hellmann wrote:

Excerpts from Thierry Carrez's message of 2016-09-02 12:15:33 +0200:

Sean Dague wrote:

Putting DB trigger failure analysis into the toolkit required to manage
an upgrade failure is a really high bar for new ops.


I agree with Sean: increasing the variety of technologies used increases
the system complexity, which in turn requires more skills to fully
understand and maintain operationally. It should only be done as a last
resort, with pros and cons carefully weighted. We really should involve
operators in this discussion to get the full picture of arguments for
and against.



Yes, I would like to understand better what aspect of the approach
taken elsewhere is leading to the keystone team exploring other
options. So far I'm not seeing much upside to being different, and I'm
hearing a lot of cons.


I continue to maintain that the problems themselves being discussed at 
https://review.openstack.org/#/c/331740/ are different than what has 
been discussed in detail before.   To be "not different", this spec 
would need to no longer discuss the concept of "we need N to be reading 
from and writing to the old column to be compatible with N-1 as shown in 
the below diagram...Once all the N-1 services are upgraded, N services 
should be moved out of compatibility mode to use the new column. ". 
To my knowledge, there are no examples of code in Openstack that 
straddles table and column changes directly in the SQL access layer as 
this document describes.There's still a handful of folks including 
myself that think this is a new kind of awkwardness we've not had to 
deal with yet.   My only ideas on how to reduce it is to put the N-1/N 
differences on the write side, not the read side, and triggers are *not* 
the only way to do it.   But if "being different" means, "doing it on 
the write side", then it seems like that overall concept is being 
vetoed.  Which I actually appreciate knowing up front before I spend a 
lot of time on it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-02 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-09-02 12:15:33 +0200:
> Sean Dague wrote:
> > Putting DB trigger failure analysis into the toolkit required to manage
> > an upgrade failure is a really high bar for new ops.
> 
> I agree with Sean: increasing the variety of technologies used increases
> the system complexity, which in turn requires more skills to fully
> understand and maintain operationally. It should only be done as a last
> resort, with pros and cons carefully weighted. We really should involve
> operators in this discussion to get the full picture of arguments for
> and against.
> 

Yes, I would like to understand better what aspect of the approach
taken elsewhere is leading to the keystone team exploring other
options. So far I'm not seeing much upside to being different, and I'm
hearing a lot of cons.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-02 Thread Morgan Fainberg
On Sep 2, 2016 09:39, "rezroo"  wrote:
>
> Hello - I'm using Liberty release devstack for the below scenario. I have
created project "abcd" with "john" as Member. I've launched one instance, I
can use curl to list the instance. No problem.
>
> I then modify /etc/nova/policy.json and redefine "admin_or_owner" as
follows:
>
> "admin_or_owner":  "role:admin or is_admin:True or
project_id:%(project_id)s",
>
> My expectation was that I would be able to list the instance in abcd
using a token of admin. However, when I use the token of user "admin" in
project "admin" to list the instances I get the following error:
>
> stack@vlab:~/token$ curl
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token:
f221164cd9b44da6beec70d6e1f3382f"
> {"badRequest": {"message": "Malformed request URL: URL's project_id
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}
>
> 378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.
>
> I'm confused by this behavior and the reported error, because if the
project id used to acquire the token is the same as the project id in
/servers/detail then I would be an "owner". So where is the "admin" in
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever
functionality "rule:admin_or_owner" allows in policy.json, regardless of
the project id used to acquire the token?
>
> I do understand that I can use the admin user and project to get all
instances of all tenants:
> curl
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"
>
> My question is more centered around why nova has the additional check to
make sure that the token project id matches the url project id - and
whether this is a keystone requirement, or only nova/cinder and programs
that have a project-id in their API choose to do this. In other words, is
it the developers of each project that decide to only expose some APIs for
administrative functionality (such as all-tenants), but restrict everything
else to owners, or does keystone require this check?
>
> Thanks,
>
> Reza
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I believe this is a nova specific extra check. There is (iirc) a way to
list out the instances for a given tenant but I do not recall the
specifics.

Keystone does not know anything about the resource ownership in Nova. The
Nova check is fully self-contained.
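
If it helps, I think the shape of it is roughly the sketch below with
python-novaclient -- the all_tenants/tenant_id search_opts (and obviously the
credentials and URL) are from memory, so treat them as unverified:

# Sketch: as admin, list servers across tenants and then narrow the result
# to one project, using the admin project's own URL rather than the other
# tenant's URL.
from keystoneauth1 import identity
from keystoneauth1 import session
from novaclient import client as nova_client

auth = identity.Password(auth_url='http://localhost:5000/v3',
                         username='admin',
                         password='secret',
                         project_name='admin',
                         project_domain_id='default',
                         user_domain_id='default')
nova = nova_client.Client('2.6', session=session.Session(auth=auth))

# all_tenants lifts the "own project only" scoping; tenant_id narrows the
# listing back down to the abcd project (filter names from memory).
servers = nova.servers.list(search_opts={
    'all_tenants': 1,
    'tenant_id': '378a4b9e0b594c24a8a753cfa40ecc14',
})
for server in servers:
    print(server.id)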

--Morgan
Please excuse brevity and typos, sent from a mobile device.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-02 Thread rezroo
Hello - I'm using Liberty release devstack for the below scenario. I 
have created project "abcd" with "john" as Member. I've launched one 
instance, I can use curl to list the instance. No problem.


I then modify /etc/nova/policy.json and redefine "admin_or_owner" as 
follows:


"admin_or_owner":  "role:admin or is_admin:True or 
project_id:%(project_id)s",


My expectation was that I would be able to list the instance in abcd 
using a token of admin. However, when I use the token of user "admin" in 
project "admin" to list the instances I get the following error:


stack@vlab:~/token$ curl 
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 
f221164cd9b44da6beec70d6e1f3382f"
{"badRequest": {"message": "Malformed request URL: URL's project_id 
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's 
project_id 'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}


378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and 
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.


I'm confused by this behavior and the reported error, because if the 
project id used to acquire the token is the same as the project id in 
/servers/detail then I would be an "owner". So where is the "admin" in 
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever 
functionality "rule:admin_or_owner" allows in policy.json, regardless of 
the project id used to acquire the token?


I do understand that I can use the admin user and project to get all 
instances of all tenants:
curl 
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"


My question is more centered around why nova has the additional check to 
make sure that the token project id matches the url project id - and 
whether this is a keystone requirement, or only nova/cinder and programs 
that have a project-id in their API choose to do this. In other words, 
is it the developers of each project that decide to only expose some 
APIs for administrative functionality (such as all-tenants), but restrict 
everything else to owners, or does keystone require this check?


Thanks,

Reza

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-02 Thread Thierry Carrez
Sean Dague wrote:
> Putting DB trigger failure analysis into the toolkit required to manage
> an upgrade failure is a really high bar for new ops.

I agree with Sean: increasing the variety of technologies used increases
the system complexity, which in turn requires more skills to fully
understand and maintain operationally. It should only be done as a last
resort, with pros and cons carefully weighted. We really should involve
operators in this discussion to get the full picture of arguments for
and against.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Michael Bayer
On Thursday, September 1, 2016, Jeremy Stanley  wrote:

>
> I don't read that at all as suggesting "the problem is solved, go
> away" but rather "help us make it better for everyone, don't just
> take one project off in a new direction and leave the others
> behind."


I can clarify.  I don't work directly on glance or keystone, I do oslo.db,
sqlalchemy, and alembic development.   If it's decided that the approach is
"no special technique, just query more columns and tables in your data
access layer and straddle across API versions", that does not indicate any
new patterns or tools in Oslo or further up, hence "solved" in that the
techniques are already available.  If OTOH we are getting into triggers or
this idea I have to do Python level translation events at the write side,
that indicates the need for new library features and patterns.

I've been tasked with being ready to assist Nova and Neutron with online
migrations for over a year.   Other than helping Neutron get
expand/contract going, I've not been involved at all, and not with anything
related to data migrations.   There hasn't been any need.



> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Clint Byrum
Excerpts from Robert Collins's message of 2016-09-01 20:45:22 +1200:
> On 31 August 2016 at 01:57, Clint Byrum  wrote:
> >
> >
> > It's simple, these are the holy SQL schema commandments:
> >
> > Don't delete columns, ignore them.
> > Don't change columns, create new ones.
> > When you create a column, give it a default that makes sense.
> 
> I'm sure you're aware of this but I think it's worth clarifying for non
> DBAish folk: non-NULL values can change a DDL statement's execution
> time from O(1) to O(N) depending on the DB in use. E.g. for Postgres
> DDL requires an exclusive table lock, and adding a column with any
> non-NULL value (including constants) requires calculating a new value
> for every row, vs just updating the metadata - see
> https://www.postgresql.org/docs/9.5/static/sql-altertable.html
> """
> When a column is added with ADD COLUMN, all existing rows in the table
> are initialized with the column's default value (NULL if no DEFAULT
> clause is specified). If there is no DEFAULT clause, this is merely a
> metadata change and does not require any immediate update of the
> table's data; the added NULL values are supplied on readout, instead.
> """
> 

InnoDB (via MySQL) has no such restrictions for online DDL:

https://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html#innodb-online-ddl-summary-grid

Basically what the link above says is that anything except these
operations can be done without locking up the table:

- Fulltext index creation
- Change column data type
- Convert or specify column character sets

Specifically, defaults are only ever stored in the rows if they're
changed. The current default is kept in the table definition, so the
rows end up with NULL physically unless the default is changed. An alter
that does a default change is just like a big update to set the current
NULLs to the old default.

> > Do not add new foreign key constraints.
> 
> What's the reason for this - if it's to avoid exclusive locks, I'd
> note that the other rules above don't avoid exclusive locks - again,
> DB specific, and for better or worse we are now testing on multiple DB
> engines via 3rd party testing.
> 
> https://dev.launchpad.net/Database/LivePatching has some info from our
> experience doing online and very fast offline patches in Launchpad.
> 

The reason is to avoid the old code running into new restrictions. If
you add a FK constraint to an existing table, old code will insert into
it and fail because it doesn't add the FK rows needed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Jeremy Stanley
On 2016-09-01 10:39:09 -0400 (-0400), Mike Bayer wrote:
> On 08/31/2016 06:18 PM, Monty Taylor wrote:
[...]
> >OpenStack is One Project
> >
> >
> > Nova and Neutron have an approach for this. It may or may not be
> > ideal - but it exists right now. While it can be satisfying to
> > discount the existing approach and write a new one, I do not
> > believe that is in the best interests of OpenStack as a whole.
> > To diverge in _keystone_ - which is one of the few projects that
> > must exist in every OpenStack install - when there exists an
> > approach in the two other most commonly deployed projects - is
> > such a terrible example of the problems inherent in Conway's Law
> > that it makes me want to push up a proposal to dissolve all of
> > the individual project teams and merge all of the repos into a
> > single repo.
[...]
> The "be more similar" argument would be the only one you have to
> make. It basically says, "problem X is 'solved', other approaches
> are now unnecessary". I'm skeptical that I am reading that
> correctly. I have another approach to the issue of "rolling
> upgrades where we really need to translate at the SQL layer" that
> is in some ways similar to what triggers do, but entirely within
> the abstraction layer that you so appropriately appreciate :). I
> have a binary decision to make here, "do i work on this new idea
> that Glance has already expressed an interest in and Keystone
> might like also? Or do I not, because this problem is solved?". I
> have other projects to work on, so it's not like I'm looking for
> more. It's just I'd like to see Glance and others have their
> rolling upgrades problem solved, at least with the benefit of a
> fixed and predictable pattern, rather than every schema change
> being an ongoing seat-of-the-pants type of operation as it is
> right now.
[...]

You (presumably accidentally) snipped the next paragraph of context,
which started out:

> > Make the oslo libraries Nova and Neutron are using better. Work
> > with the Nova and Neutron teams on a consolidated approach.
[...]

I don't read that at all as suggesting "the problem is solved, go
away" but rather "help us make it better for everyone, don't just
take one project off in a new direction and leave the others
behind."
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Mike Bayer



On 09/01/2016 11:52 AM, Dan Smith wrote:


The indirection service is really unrelated to this discussion, IMHO. If
you take RPC out of the picture, all you have left is a
direct-to-the-database facade to handle the fact that schema has
expanded underneath you. As Clint (et al) have said -- designing the
application to expect schema expansion (and avoiding unnecessary
contraction) is the key here.


pretty much.  there's no fixed pattern in how to do these.  Every 
version of a data access API will be weighed down with baggage from the 
previous version and an inability to take full advantage of new 
improvements until the next release, and background migrations are 
complicated by the old application undoing their work.  Even small 
migrations mean all these issues have to be considered each time on a 
case-by-case basis.   These are the problems people are hoping to 
improve upon if possible.   The spec at 
https://review.openstack.org/#/c/331740/ is discussing these issues in 
detail and is the first such specification I've seen that tries to get 
into it at this level.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Dan Smith
> So that is fine.  However, correct me if I'm wrong but you're 
> proposing just that these projects migrate to also use a new service 
> layer with oslo.versionedobjects, because IIUC Nova/Neutron's 
> approach is dependent on that area of indirection being present. 
> Otherwise, if you meant something like, "use an approach that's kind 
> of like what Nova does w/ versionedobjects but without actually 
> having to use versionedobjects", that still sounds like, "come up 
> with a new idea".

If you don't need the RPC bits, versionedobjects is nothing more than an
object facade for you to insulate your upper layers from such change.
Writing your facade using versionedobjects just means inheriting from a
superclass that does a bunch of stuff you don't need. So I would not say
that taking the same general approach without that inheritance is "come
up with a new idea".
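
To make that concrete, the facade can be as small as something like this
(a schematic, hypothetical example -- not code from any project):

class Image(object):
    """Schematic facade: upper layers only ever see .visibility, no matter
    which column(s) the value currently lives in underneath."""

    def __init__(self, db_row):
        self._row = db_row

    @property
    def visibility(self):
        # Prefer the new column; fall back to the legacy flag for rows
        # that have not been backfilled yet.
        if getattr(self._row, 'visibility', None) is not None:
            return self._row.visibility
        return 'public' if self._row.is_public else 'private'

    @visibility.setter
    def visibility(self, value):
        # Write both representations while N-1 services are still running.
        self._row.visibility = value
        self._row.is_public = (value == 'public')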

Using triggers and magic to solve this instead of an application-level
facade is a substantially different approach to the problem.

> I suppose if you're thinking more at the macro level, where "current
>  approach" means "do whatever you have to on the app side", then your
>  position is consistent, but I think there's still a lot of
> confusion in that area when the indirection of a versioned service
> layer is not present. It gets into the SQL nastiness I was discussing
> w/ Clint and I don't see anyone doing anything like that yet.

The indirection service is really unrelated to this discussion, IMHO. If
you take RPC out of the picture, all you have left is a
direct-to-the-database facade to handle the fact that schema has
expanded underneath you. As Clint (et al) have said -- designing the
application to expect schema expansion (and avoiding unnecessary
contraction) is the key here.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Sean Dague
On 09/01/2016 09:45 AM, David Stanek wrote:
> On Thu, Aug 25 at 13:13 -0400, Steve Martinelli wrote:
>> The keystone team is pursuing a trigger-based approach to support rolling,
>> zero-downtime upgrades. The proposed operator experience is documented here:
>>
>>   http://docs.openstack.org/developer/keystone/upgrading.html
>>
> 
> I wanted to mention a few things. One of the reasons I suggested this
> approach for keystone is that I've had success in the past using a
> combination of triggers and code to do live, online migrations. Many
> times using completely different schemas.
> 
> In keystone we are just talking about some simple data transformations
> between columns and things like that. The triggers themselves shouldn't
> get too complicated. If there are cases where triggers won't work, then
> we won't force them. (A current example of this is encrypting
> credentials.)
> 
> The online migrations are not required. Operators can still go the old
> route and db_sync while others help test out the cutting edge features.
> 
> The triggers are not there during the entire lifecycle of the
> application. The expand phase adds them and the contract removes them.

But you did that for an application where you were on call to handle any
issues, and you knew the data somewhat in advance.

In OpenStack this code would get committed. It would get executed 12 to
18 months later (the average current OpenStack level at the ops meetup
was Kilo/Liberty). It would be executed by people far away, possibly
running in different locales, without an idea about what's in the data set.

Part of OpenStack being a successful open source project is that the
mean expertise of our operators will keep decreasing over time. It will
be deployed and maintained by less and less skilled operators in each
release, because it will be deployed and maintained by more total
operators each release.

Putting DB trigger failure analysis into the toolkit required to manage
an upgrade failure is a really high bar for new ops.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Mike Bayer



On 09/01/2016 08:29 AM, Henry Nash wrote:


From a purely keystone perspective, my gut feeling is that actually the
trigger approach is likely to lead to a more robust, not less, solution - due
to the fact that we solve the very specific problems of a given migration
(i.e. need to keep column A in sync with Column B) for a short period of time,
right at the point of pain, with well established techniques - albeit they be
complex ones that need experienced coders in those techniques.


this is really the same philosophy I'm going for, that is, make a schema 
migration, then accompany it by a data migration, and then you're done. 
The rest of the world need not be concerned.


It's not as much about "triggers" as it is, "handle the data difference 
on the write side, not the read side".  That is, writing data to a SQL 
database is squeezed through exactly three very boring forms of 
statement, the INSERT, UPDATE, and DELETE.   These are easy to intercept 
in the database, and since we use an abstraction like SQLAlchemy they 
are easy to intercept in the application layer too (foreshadowing). 
  When you put it on the read side, reading is of course (mostly) 
through just one statement, the SELECT, but it is a crazy beast in 
practice and it is all over the place in an unlimited number of forms.
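
For example, at the SQLAlchemy level that interception can hang off the ORM
flush events, which see every INSERT and UPDATE before the SQL is emitted.
A sketch, with a made-up model and columns (this is not a proposed oslo.db
API, just the raw mechanism):

from sqlalchemy import Boolean, Column, Integer, String, event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Image(Base):
    # Hypothetical model carrying both the old and the new column during
    # the migration window.
    __tablename__ = 'images'
    id = Column(Integer, primary_key=True)
    is_public = Column(Boolean)          # old representation
    visibility = Column(String(16))      # new representation


def _sync_visibility(mapper, connection, target):
    # Fires for every ORM INSERT/UPDATE of Image before the SQL is emitted.
    # The new column is authoritative when set; otherwise it is derived
    # from the legacy flag, and the legacy flag is kept in step.
    if target.visibility is None and target.is_public is not None:
        target.visibility = 'public' if target.is_public else 'private'
    elif target.visibility is not None:
        target.is_public = (target.visibility == 'public')


event.listen(Image, 'before_insert', _sync_visibility)
event.listen(Image, 'before_update', _sync_visibility)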


If you can get your migrations to be, hey, we can just read JSON records 
from version 1.0 of the service and pump them into version 2.0, then 
you're doing read-side, but you've solved the problem at the service 
layer.  This only works for those situations where it "works", and the 
dual-layer service architecture has to be feasibly present as well.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Mike Bayer



On 08/31/2016 06:18 PM, Monty Taylor wrote:


I said this the other day in the IRC channel, and I'm going to say it
again here. I'm going to do it as bluntly as I can - please keeping in
mind that I respect all of the humans involved.

I think this is a monstrously terrible idea.

There are MANY reasons for this - but I'm going to limit myself to two.

OpenStack is One Project


Nova and Neutron have an approach for this. It may or may not be ideal -
but it exists right now. While it can be satisfying to discount the
existing approach and write a new one, I do not believe that is in the
best interests of OpenStack as a whole. To diverge in _keystone_ - which
is one of the few projects that must exist in every OpenStack install -
when there exists an approach in the two other most commonly deployed
projects - is such a terrible example of the problems inherent in
Conway's Law that it makes me want to push up a proposal to dissolve all
of the individual project teams and merge all of the repos into a single
repo.


So that is fine.  However, correct me if I'm wrong but you're proposing 
just that these projects migrate to also use a new service layer with 
oslo.versionedobjects, because IIUC Nova/Neutron's approach is dependent 
on that area of indirection being present. Otherwise, if you meant 
something like, "use an approach that's kind of like what Nova does w/ 
versionedobjects but without actually having to use versionedobjects", 
that still sounds like, "come up with a new idea".


I suppose if you're thinking more at the macro level, where "current 
approach" means "do whatever you have to on the app side", then your 
position is consistent, but I think there's still a lot of confusion in 
that area when the indirection of a versioned service layer is not 
present.   It gets into the SQL nastiness I was discussing w/ Clint and 
I don't see anyone doing anything like that yet.


Triggers aside since it clearly is "triggering" (ahem) allergic 
reactions, what's the approach when new approaches are devised that are 
alternatives to what "exists right now"?   E.g. I have yet another 
proposal in the works that allows for SQL-level translations but runs in 
the Python application space and does not use triggers.  Should I stop 
right now because Nova/Neutron already have a system that's "good 
enough"?This would be fine.  I find it uncomfortable working in this 
ambiguous space where some projects rightly proclaim they've solved a 
problem, and others continue to disregard that and plow forward with 
other approaches without a universally accepted reason why the current 
solution is not feasible.





BUT - I also don't think it's a good technical solution. That isn't
because triggers don't work in MySQL (they do) - but because we've spent
the last six years explicitly NOT writing raw SQL. We've chosen an
abstraction layer (SQLAlchemy) which does its job well.


There's a canard in there which is that all along I've been proposing to 
start adding systems to oslo.db to help produce and maintain triggers 
which certainly would have among its goals that consuming projects 
wouldn't be writing raw SQL.  That part of the discomfort is more 
manageable than Clint's, which is that he doesn't want the database 
doing things with the data other than storing it, and I totally know 
where he's coming from on that.


The "be more similar" argument would be the only one you have to make. 
It basically says, "problem X is 'solved', other approaches are now 
unnecessary".   I'm skeptical that I am reading that correctly.  I have 
another approach to the issue of "rolling upgrades where we really need 
to translate at the SQL layer" that is in some ways similar to what 
triggers do, but entirely within the abstraction layer that you so 
appropriately appreciate :).   I have a binary decision to make here, 
"do i work on this new idea that Glance has already expressed an 
interest in and Keystone might like also? Or do I not, because this 
problem is solved?".   I have other projects to work on, so it's not 
like I'm looking for more.   It's just I'd like to see Glance and others 
have their rolling upgrades problem solved, at least with the benefit of 
a fixed and predictable pattern, rather than every schema change being 
an ongoing seat-of-the-pants type of operation as it is right now.


Finally, it's a known and accepted pattern in large

scale MySQL shops ... Roll out a new version of the app code which
understands both the old and the new schema version, then roll out a
no-downtime additive schema change to the database, then have the app
layer process and handle on the fly transformation if needed.



Right, as I've mentioned previously, I only take issue with the 
"monolithic app code that speaks both versions of the schema" part. 
Assuming there's no layer of service indirection where migration issues 
can be finessed outside of the SQL interaction layer, it means every 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread David Stanek
On Wed, Aug 31 at 17:18 -0500, Monty Taylor wrote:
> 
> Nova and Neutron have an approach for this. It may or may not be ideal -
> but it exists right now. While it can be satisfying to discount the
> existing approach and write a new one, I do not believe that is in the
> best interests of OpenStack as a whole. To diverge in _keystone_ - which
> is one of the few projects that must exist in every OpenStack install -
> when there exists an approach in the two other most commonly deployed
> projects - is such a terrible example of the problems inherent in
> Conway's Law that it makes me want to push up a proposal to dissolve all
> of the individual project teams and merge all of the repos into a single
> repo.

That's a bit overly dramatic. I think having some innovation is a good
thing. Specifically in this case where our needs appear to be a little
simpler than those of nova.

> 
> Make the oslo libraries Nova and Neutron are using better. Work with the
> Nova and Neutron teams on a consolidated approach. We need to be driving
> more towards an OpenStack that behaves as if it wasn't written by
> warring factions of developers who barely communicate.

I believe we tried to keep with the same expand/migrate/contract
patterns. Sure our implementation differs, but I don't see operators
caring about that as long as it works.

> 
> Even if the idea was one I thought was good technically, the above would
> still trump that. Work with Nova and Neutron. Be more similar.
> 
> PLEASE
> 
> BUT - I also don't think it's a good technical solution. That isn't
> because triggers don't work in MySQL (they do) - but because we've spent
> the last six years explicitly NOT writing raw SQL. We've chosen an
> abstraction layer (SQLAlchemy) which does its job well.
> 
> IF this were going to be accompanied by a corresponding shift in
> approach to not support any backends but MySQL and to start writing our
> database interactions directly in SQL in ALL of our projects - I could
> MAYBE be convinced. Even then I think doing it in triggers is the wrong
> place to put logic.
> 
> "Database triggers are obviously a new challenge for developers to
> write, honestly challenging to debug (being side effects), and are made
> even more difficult by having to hand write triggers for MySQL,
> PostgreSQL, and SQLite independently (SQLAlchemy offers no assistance in
> this case)"
> 
> If you look at:
> 
> https://review.openstack.org/#/c/355618/40/keystone/common/sql/expand_repo/versions/002_add_key_hash_and_encrypted_blob_to_credential.py
> 
> You will see the three different SQL dialects in this. Not only that, but
> some of the more esoteric corners of those backends. We can barely get
> _indexes_ right in our database layers ... now we think we're going to
> get triggers right? Consistently? And handle things like Galera?
> 
> The other option is app level, which is what nova and neutron are doing.
> It's a good option, because it puts the logic in python, which is a
> thing we have 2500 developers fairly well versed in. It's also scalable,
> as the things executing whatever the logic is are themselves a scale-out
> set of servers. Finally, it's a known and accepted pattern in large
> scale MySQL shops ... Roll out a new version of the app code which
> understands both the old and the new schema version, then roll out a
> no-downtime additive schema change to the database, then have the app
> layer process and handle on the fly transformation if needed.

I've done both types of migrations in the past, but with one important
exception. We could roll out our application on Tuesday and then the
cleanup on Thursday. We didn't carry baggage for 6 months to a year. My
fear with keystone is that we'd slow development even more by adding
more cruft, and cruft on top of cruft.

> 
> SO ...
> 
> Just do what Nova and Neutron are doing - and if it's not good enough,
> fix it. Having some projects use triggers and other projects not use
> triggers is one of the more epically crazypants things I've heard around
> here ... and I lived through the twisted/eventlet argument.

-- 
David Stanek
web: http://dstanek.com
blog: http://traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread David Stanek
On Thu, Aug 25 at 13:13 -0400, Steve Martinelli wrote:
> The keystone team is pursuing a trigger-based approach to support rolling,
> zero-downtime upgrades. The proposed operator experience is documented here:
> 
>   http://docs.openstack.org/developer/keystone/upgrading.html
> 

I wanted to mention a few things. One of the reasons I suggested this
approach for keystone is that I've had success in the past using a
combination of triggers and code to do live, online migrations. Many
times using completely different schemas.

In keystone we are just talking about some simple data transformations
between columns and things like that. The triggers themselves shouldn't
get too complicated. If there are cases where triggers won't work, then
we won't force them. (A current example of this is encrypting
credentials.)

The online migrations are not required. Operators can still go the old
route and db_sync while others help test out the cutting edge features.

The triggers are not there during the entire lifecycle of the
application. The expand phase adds them and the contract removes them.
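
For anyone who hasn't seen the pattern, it has roughly this shape -- a sketch
with made-up table/column names and the MySQL dialect only; the real keystone
expand/contract scripts differ in detail:

from alembic import op

# Keeps an old and a new column in step while both releases are writing.
SYNC_TRIGGER = """
CREATE TRIGGER example_insert_sync BEFORE INSERT ON example_table
FOR EACH ROW
BEGIN
    IF NEW.new_col IS NULL THEN
        SET NEW.new_col = NEW.old_col;
    END IF;
END;
"""


def expand():
    # Installed right after the new column is added, so rows written by
    # the old release still get a value in the new column.
    op.execute(SYNC_TRIGGER)


def contract():
    # By contract time every node writes new_col directly; drop the
    # trigger along with the old column.
    op.execute("DROP TRIGGER IF EXISTS example_insert_sync")
    op.drop_column('example_table', 'old_col')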

-- 
David Stanek
web: http://dstanek.com
blog: http://traceback.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Henry Nash
So as the person who drove the rolling upgrade requirements into keystone in 
this cycle (because we have real customers that need it), and having first 
written the keystone upgrade process to be “versioned object ready” (because I 
assumed we would do this the same as everyone else), and subsequently 
re-written it to be “DB Trigger ready”…and written migration scripts for both 
these cases for the (in fact very minor) DB changes that keystone has in 
Newton…I guess I should also weigh in here :-)

For me, the argument comes down to:

a) Is the pain that needs to be cured by the rolling upgrade requirement broadly 
in the same place in the various projects (i.e. nova, glance, keystone etc.)? 
If it is, then working towards a common solution is always preferable (whatever 
that solution is)
b) I would characterise the difference between the trigger approach, the 
versioned objects approach and the “in-app” approach as: do we want a small 
amount of very nasty complexity, vs. spreading that complexity out to be not as 
bad, but over a broader area. Probably fewer people can (successfully) write 
the nasty, complex trigger work than they can, say, the “do it all in the 
app” work. LOC (which, of course, isn’t always a good measure) is also 
reflected in this characterisation, with the trigger code having probably the 
fewest LOC, and the app code having the greatest.
c) I don’t really follow the argument that somehow the trigger code in 
migrations is less desirable because we use higher level sqla abstractions in 
our main-line code - I’ve always seen migration as different and expected that 
we might have to do strange things there. Further, we should be aware of the 
time-periods…the migration cycle is a small % of the elapsed time the cloud is 
running (well, hopefully) - so again, do we solve the “issues of migration” as 
part of the migration cycle (which is what the trigger approach does), or make 
our code be (effectively) continually migration-aware (using versioned objects 
or in-app code)?
d) The actual process (for an operator) is simpler for a rolling upgrade 
process with Triggers than the alternative (since you don’t require several of 
the checkpoints, e.g. when you know you can move out of compatibility mode 
etc.). Operator error is also a cause of problems in upgrades (especially as 
the complexity of a cloud increases).

From a purely keystone perspective, my gut feeling is that actually the trigger 
approach is likely to lead to a more robust, not less, solution - due to the 
fact that we solve the very specific problems of a given migration (i.e. need 
to keep column A in sync with Column B) for a short period of time, right at the 
point of pain, with well established techniques - albeit they be complex ones 
that need experienced coders in those techniques. I actually prefer the small 
locality of complexity (marked with “there be dragons there, be careful”), as 
opposed to spreading medium pain over a large area, which by definition is 
updated by many…and  may do the wrong thing inadvertently. It is simpler for 
operators.

I do recognise, however, that “let’s not do different stuff for a core project 
like keystone” is a powerful argument. I just don’t know how to square this 
with the fact that, although I started in the “versioned objects” camp, having 
worked through many of the issues I have come to believe that the Trigger 
approach will be more reliable overall for this specific use case. From the 
other reactions to this thread, I don’t detect a lot of support for the Trigger 
approach becoming our overall, cross-project solution.

The actual migrations in Keystone needed for Newton are minor, so one 
possibility is we use keystone as a guinea pig for this approach in Newton…if 
we had to undo this in a subsequent release, we are not talking about rafts of 
migration code to redo.

Henry



> On 1 Sep 2016, at 09:45, Robert Collins  wrote:
> 
> On 31 August 2016 at 01:57, Clint Byrum  wrote:
>> 
>> 
>> It's simple, these are the holy SQL schema commandments:
>> 
>> Don't delete columns, ignore them.
>> Don't change columns, create new ones.
>> When you create a column, give it a default that makes sense.
> 
> I'm sure you're aware of this but I think it's worth clarifying for non
> DBAish folk: non-NULL values can change a DDL statement's execution
> time from O(1) to O(N) depending on the DB in use. E.g. for Postgres
> DDL requires an exclusive table lock, and adding a column with any
> non-NULL value (including constants) requires calculating a new value
> for every row, vs just updating the metadata - see
> https://www.postgresql.org/docs/9.5/static/sql-altertable.html
> """
> When a column is added with ADD COLUMN, all existing rows in the table
> are initialized with the column's default value (NULL if no DEFAULT
> clause is specified). If there is no DEFAULT clause, this is merely a
> metadata change and does not require any immediate 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-09-01 Thread Robert Collins
On 31 August 2016 at 01:57, Clint Byrum  wrote:
>
>
> It's simple, these are the holy SQL schema commandments:
>
> Don't delete columns, ignore them.
> Don't change columns, create new ones.
> When you create a column, give it a default that makes sense.

I'm sure you're aware of this but I think it's worth clarifying for non
DBAish folk: non-NULL values can change a DDL statement's execution
time from O(1) to O(N) depending on the DB in use. E.g. for Postgres
DDL requires an exclusive table lock, and adding a column with any
non-NULL value (including constants) requires calculating a new value
for every row, vs just updating the metadata - see
https://www.postgresql.org/docs/9.5/static/sql-altertable.html
"""
When a column is added with ADD COLUMN, all existing rows in the table
are initialized with the column's default value (NULL if no DEFAULT
clause is specified). If there is no DEFAULT clause, this is merely a
metadata change and does not require any immediate update of the
table's data; the added NULL values are supplied on readout, instead.
"""

> Do not add new foreign key constraints.

What's the reason for this - if it's to avoid exclusive locks, I'd
note that the other rules above don't avoid exclusive locks - again,
DB specific, and for better or worse we are now testing on multiple DB
engines via 3rd party testing.

https://dev.launchpad.net/Database/LivePatching has some info from our
experience doing online and very fast offline patches in Launchpad.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-31 Thread Monty Taylor
On 08/25/2016 04:14 PM, Sean Dague wrote:
> On 08/25/2016 01:13 PM, Steve Martinelli wrote:
>> The keystone team is pursuing a trigger-based approach to support
>> rolling, zero-downtime upgrades. The proposed operator experience is
>> documented here:
>>
>>   http://docs.openstack.org/developer/keystone/upgrading.html
>>
>> This differs from Nova and Neutron's approaches to solve for rolling
>> upgrades (which use oslo.versionedobjects), however Keystone is one of
>> the few services that doesn't need to manage communication between
>> multiple releases of multiple service components talking over the
>> message bus (which is the original use case for oslo.versionedobjects,
>> and for which it is aptly suited). Keystone simply scales horizontally
>> and every node talks directly to the database.
>>
>> Database triggers are obviously a new challenge for developers to write,
>> honestly challenging to debug (being side effects), and are made even
>> more difficult by having to hand write triggers for MySQL, PostgreSQL,
>> and SQLite independently (SQLAlchemy offers no assistance in this case),
>> as seen in this patch:
>>
>>   https://review.openstack.org/#/c/355618/
>>
>> However, implementing an application-layer solution with
>> oslo.versionedobjects is not an easy task either; refer to Neutron's
>> implementation:
>>
>>
>> https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db
>>
>>
>> Our primary concern at this point are how to effectively test the
>> triggers we write against our supported database systems, and their
>> various deployment variations. We might be able to easily drop SQLite
>> support (as it's only supported for our own test suite), but should we
>> expect variation in support and/or actual behavior of triggers across
>> the MySQLs, MariaDBs, Perconas, etc, of the world that would make it
>> necessary to test each of them independently? If you have operational
>> experience working with triggers at scale: are there landmines that we
>> need to be aware of? What is it going to take for us to say we support
>> *zero* downtime upgrades with confidence?
> 
> I would really hold off doing anything triggers related until there was
> sufficient testing for that, especially with potentially dirty data.
> 
> Triggers also really bring in a whole new DSL that people need to learn
> and understand, not just across this boundary, but in the future
> debugging issues. And it means that any errors happening here are now in
> a place outside of normal logging / recovery mechanisms.
> 
> There is a lot of value that in these hard problem spaces like zero down
> uptime we keep to common patterns between projects because there are
> limited folks with the domain knowledge, and splitting that even further
> makes it hard to make this more universal among projects.

I said this the other day in the IRC channel, and I'm going to say it
again here. I'm going to do it as bluntly as I can - please keeping in
mind that I respect all of the humans involved.

I think this is a monstrously terrible idea.

There are MANY reasons for this - but I'm going to limit myself to two.

OpenStack is One Project


Nova and Neutron have an approach for this. It may or may not be ideal -
but it exists right now. While it can be satisfying to discount the
existing approach and write a new one, I do not believe that is in the
best interests of OpenStack as a whole. To diverge in _keystone_ - which
is one of the few projects that must exist in every OpenStack install -
when there exists an approach in the two other most commonly deployed
projects - is such a terrible example of the problems inherent in
Conway's Law that it makes me want to push up a proposal to dissolve all
of the individual project teams and merge all of the repos into a single
repo.

Make the oslo libraries Nova and Neutron are using better. Work with the
Nova and Neutron teams on a consolidated approach. We need to be driving
more towards an OpenStack that behaves as if it wasn't written by
warring factions of developers who barely communicate.

Even if the idea was one I thought was good technically, the above would
still trump that. Work with Nova and Neutron. Be more similar.

PLEASE

BUT - I also don't think it's a good technical solution. That isn't
because triggers don't work in MySQL (they do) - but because we've spent
the last six years explicitly NOT writing raw SQL. We've chosen an
abstraction layer (SQLAlchemy) which does its job well.

IF this were going to be accompanied by a corresponding shift in
approach to not support any backends but MySQL and to start writing our
database interactions directly in SQL in ALL of our projects - I could
MAYBE be convinced. Even then I think doing it in triggers is the wrong
place to put logic.

"Database triggers are obviously a new challenge for developers to
write, honestly challenging to debug (being side effects), and are made
even more difficult by having to hand 
