Re: [openstack-dev] [oslo] nominating Alexis Lee for oslo-core

2016-01-30 Thread Julien Danjou
On Fri, Jan 29 2016, Sylvain Bauza wrote:

> While my heart is about that, my brain thinks that some regressions could
> happen because of a +W, even for a small change.

I suggest you read the git-revert manpage then, you might discover
something interesting there. :)

The "shit happened" (e.g. bad thing merged) rate difference between a
"permission" policy and a "forgiveness" policy is, based on my very
precise guessed estimation, probably close to +1% in disfavor of
"forgiveness". Right.

But at the same time, the velocity rate difference is close to +50% for
that same policy. So I've picked my side. :)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Henry Nash
Hi Adam,

Fully support this kind of approach.

I am still concerned over the scope check, since we do have examples of when 
there is more than one (target) scope check, e.g. an API that might operate on 
an object that may be global, domain or project specific - in which case you 
need to “match up the scope checks with the object in question”, for example 
for a given API:

If cloud admin, allow the API
If domain admin and the object is domain or project specific, then allow the API
If project admin and the object is project specific then allow the API

Today we can (and do with keystone) encode this in policy rules. I’m not clear 
how the “scope check in code” will work in this kind of situation.

Henry

> On 30 Jan 2016, at 17:44, Adam Young  wrote:
> 
> I'd like to bring people's attention to a Cross Project spec that has the 
> potential to really strengthen the security story for OpenStack in a scalable 
> way.
> 
> "A common policy scenario across all projects" 
> https://review.openstack.org/#/c/245629/
> 
> The summary version is:
> 
> Role name or pattern                   : Explanation or example
> ---------------------------------------:-----------------------------------------
> admin                                  : Overall cloud admin
> service                                : for service users only, not real humans
> {service_type}_admin                   : identity_admin, compute_admin, network_admin, etc.
> {service_type}_{api_resource}_manager  : identity_user_manager, compute_server_manager, network_subnet_manager
> observer                               : read only access
> {service_type}_observer                : identity_observer, image_observer
> 
> 
> Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
> Matthews just took it to the next level.  It is worth a read.
> 
> I think this is the way to go.  There might be details on how to get there, 
> but the granularity is about right.
> If we go with that approach, we might want to rethink how we enforce 
> policy.  Specifically, I think we should split the policy enforcement up into 
> two stages:
> 
> 1.  Role check.  This only needs to know the service and the api resource.  
> As such, it could happen in middleware.
> 
> 2. Scope check:  for user or project ownership.  This happens in the code 
> where it is currently called.  Often, an object needs to be fetched from the 
> database
> 
> The scope check is an engineering decision:  Nova developers need to be able 
> to say where to find the scope on the virtual machine, Cinder developers on 
> the volume objects.
> 
> Ideally, the python-*clients, Horizon and other tools would be able to 
> determine what capabilities a given token would provide based on the roles 
> included in the validation response. If the role check is based on the URL as 
> opposed to the current keys in the policy file, the client can determine 
> based on the request and the policy file whether the user would have any 
> chance of succeeding in a call. As an example, to create a user in Keystone, 
> the API is:
> 
> POST https://hostname:port/v3/users
> 
> Assuming the client has access to the appropriate policy file, it can 
> determine that a token with only the role "identity_observer" would not have 
> the ability to execute that command.  Horizon could then modify the users 
> view to remove the "add user" form.
> 
> For user management, we want to make role assignments as simple as possible 
> and no simpler.  An admin should not have to assign all of the individual 
> roles that a user needs.  Instead, assigning the role "Member" should imply 
> all of the subordinate roles that a user needs to perform the standard 
> workflows.  Expanding out the implied roles can be done either when issuing a 
> token, or when evaluating the policy file, or both.
> 
> I'd like to get the conversation on this started here on the mailing list, 
> and lead in to a really productive set of talks at the Austin summit.
> 
> 
> 
> 




Re: [openstack-dev] [oslo] nominating Alexis Lee for oslo-core

2016-01-30 Thread Julien Danjou
On Sat, Jan 30 2016, Sylvain Bauza wrote:

> I suggest you look how to revert an RPC API change by thinking of our
> continuous deployers, you might discover something interesting there.
> :)

This is an interesting thought indeed. If you consider every commit to
be releasable, and therefore deployable, then the line between what is and
is not a bug becomes somewhat blurry as soon as you merge any code.

But then, if you support such continuous deployment, you have to commit
enough resources to avoid having to use git revert, and, as you describe,
to have a way to both revert and support the broken behavior.

Considering your reply, I imagine that this is not the case in the
project(s) you contribute to unfortunately. :-( Something to think about
maybe.

> I would like to understand your 1% estimate. Do you think that only one
> merged change is bad vs. 100 others good ?

What I meant was that if in a "permission policy" your error rate is 1%,
the "forgiveness policy" is likely to be 2%.
(and it was merely meant as a guesstimate joke based on my limited past
experience contributing to FOSS – YMMV)

> If so, how can you be sure that having an expert could not avoid the
> problem ?

What's an expert?

> I disagree with you. Say that one change will raise an important gate issue
> if merged.

Sure. Easy to imagine, since this happened just yesterday, when all the
telemetry projects' gates got broken by devstack merging Keystone v3
support by default¹.

> Of course the change looks good. It's perfectly acceptable from a python
> perspective and Jenkins is happy.
> Unfortunately, merging that change would create lots of problems because it
> would wedge all the service projects CIs because that would be a behavioral
> change that wouldn't have a backwards compatibility.

Oh right… you mean like the change that broke all the telemetry project
gates this week… but that was merged by a team of (what you call)
experts? Should we remove them from devstack-core since they no longer
qualify under your definition of expert?

> If we have your forgiveness policy, it could have this change merged
> earlier, sure. But wouldn't you think that all the respective service
> projects velocities would be impacted by far more than this single change ?

It's funny that you pose a theoretical case that just happened a few
days ago. But you're not posing the problem correctly.

…unless you want to kick out Sean and Dean (sorry guys ;-), there is
already a forgiveness policy at work. Which means acknowledging that people
make mistakes, whatever their level of expertise and whatever your test
coverage, and that when that happens, you fix it. And it's easier/faster
to fix with a larger team than with a few people. Which means inclusion.
Which means openness.

You want zero defect? Then remove all humans from the equation (good
luck with that :-).

The point of my original email is that teams should recognize that,
embrace it, and not try to implement the opposite. That does not mean
giving any power to anyone; it just means being fair and trusting
people to be good and honest. Most people just are – unless proven
otherwise, as Joshua stated. ;-)

¹  https://review.openstack.org/#/c/271508/

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Adam Young

On 01/30/2016 04:14 PM, Henry Nash wrote:

Hi Adam,

Fully support this kind of approach.

I am still concerned over the scope check, since we do have examples of when 
there is more than one (target) scope check, e.g. an API that might operate on 
an object that may be global, domain or project specific - in which case you 
need to “match up the scope checks with the object in question”, for example 
for a given API:

If cloud admin, allow the API
If domain admin and the object is domain or project specific, then allow the API
If project admin and the object is project specific then allow the API

Today we can (and do with keystone) encode this in policy rules. I’m not clear 
how the “scope check in code” will work in this kind of situation.
I originally favored an approach that a user would need to get a token 
scoped to a resource in order to affect change on that resource, and 
admin users could get tokens scoped to anything,  but I know that makes 
things harder for Administrators trying to fix broken deployments. So I 
backed off on that approach.


I think the right answer would be that the role check would set some 
value to indicate it was an admin override.  So long as the check does 
not need the actual object from the database, it can perform whatever 
logic we like.


The policy check deep in the code can be as strict or permissive as it 
desires.  If there is a need to re-check the role for an admin check 
there, policy can still do so.  A role check that passes at the 
Middleware level can still be blocked at the in-code level.


"If domain admin and the object is domain or project specific, then 
allow the API" is the tricky one, but I don't think we even have a 
solution for that now.  Domain1->p1->p2->p3 type hierarchies don't allow 
operations on p3 with a token scoped to Domain1.


I think that in those cases, I would still favor the user getting a 
token from Keystone scoped to p3, and use the inherited-role-assignment 
approach.





Henry


On 30 Jan 2016, at 17:44, Adam Young  wrote:

I'd like to bring people's attention to a Cross Project spec that has the 
potential to really strengthen the security story for OpenStack in a scalable 
way.

"A common policy scenario across all projects" 
https://review.openstack.org/#/c/245629/

The summary version is:

Role name or pattern                   : Explanation or example
---------------------------------------:-----------------------------------------
admin                                  : Overall cloud admin
service                                : for service users only, not real humans
{service_type}_admin                   : identity_admin, compute_admin, network_admin, etc.
{service_type}_{api_resource}_manager  : identity_user_manager, compute_server_manager, network_subnet_manager
observer                               : read only access
{service_type}_observer                : identity_observer, image_observer


Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
Matthews just took it to the next level.  It is worth a read.

I think this is the way to go.  There might be details on how to get there, but 
the granularity is about right.
If we go with that approach, we might want to rethink how we enforce 
policy.  Specifically, I think we should split the policy enforcement up into 
two stages:

1.  Role check.  This only needs to know the service and the api resource.  As 
such, it could happen in middleware.

2. Scope check:  for user or project ownership.  This happens in the code where 
it is currently called.  Often, an object needs to be fetched from the database

The scope check is an engineering decision:  Nova developers need to be able to 
say where to find the scope on the virtual machine, Cinder developers on the 
volume objects.

Ideally, the python-*clients, Horizon and other tools would be able to 
determine what capabilities a given token would provide based on the roles 
included in the validation response. If the role check is based on the URL as 
opposed to the current keys in the policy file, the client can determine based 
on the request and the policy file whether the user would have any chance of 
succeeding in a call. As an example, to create a user in Keystone, the API is:

POST https://hostname:port/v3/users

Assuming the client has access to the appropriate policy file, it can determine that a token with 
only the role "identity_observer" would not have the ability to execute that command.  
Horizon could then modify the users view to remove the "add user" form.

For user management, we want to make role assignments as simple as possible and no 
simpler.  An admin should not have to assign all of the individual roles that a user 
needs.  Instead, assigning the role "Member" should imply all of the 
subordinate roles that a user needs to perform the standard workflows.  Expanding out the 
implied roles can be done either when issuing a token, or when evaluating the 
policy file, or both.

Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-01-30 Thread Jay Bryant
+1. Patrick's contributions to Cinder have been notable since he joined us
and he is a pleasure to work with!   Welcome to the core team Patrick!

Jay

On Fri, Jan 29, 2016, 19:05 Sean McGinnis  wrote:

> Patrick has been a strong contributor to Cinder over the last few
> releases, both with great code submissions and useful reviews. He also
> participates regularly on IRC helping answer questions and providing
> valuable feedback.
>
> I would like to add Patrick to the core reviewers for Cinder. Per our
> governance process [1], existing core reviewers please respond with any
> feedback within the next five days. If there are no objections, I will
> add Patrick to the group by February 3rd.
>
> Thanks!
>
> Sean (smcginnis)
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>


[openstack-dev] Team meeting this Tuesday at 1400 UTC

2016-01-30 Thread Armando M.
Hi neutrinos,

As noted in [1], this is a kind reminder for next week's meeting: please
do not get caught out by any confusion.

The Tuesday meetings will be hosted by Ihar, and I will be working with him
to discuss these meeting agendas [2] ahead of time. For this reason, stay
tuned for reminder updates coming from him too.

I do not plan on attending, but I may occasionally join the IRC meetings
when I travel to friendlier time zones. If you have something to discuss
with me (while I am acting in the PTL capacity), please do not rely on the
Tuesday meetings to reach me.

In the meantime, let's thank Ihar for volunteering!

Cheers,
Armando

[1] https://review.openstack.org/#/c/272494/
[2] https://wiki.openstack.org/wiki/Network/Meetings


[openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Adam Young
I'd like to bring people's attention to a Cross Project spec that has 
the potential to really strengthen the security story for OpenStack in a 
scalable way.


"A common policy scenario across all projects" 
https://review.openstack.org/#/c/245629/


The summary version is:

Role name or pattern                   : Explanation or example
---------------------------------------:-----------------------------------------
admin                                  : Overall cloud admin
service                                : for service users only, not real humans
{service_type}_admin                   : identity_admin, compute_admin, network_admin, etc.
{service_type}_{api_resource}_manager  : identity_user_manager, compute_server_manager, network_subnet_manager
observer                               : read only access
{service_type}_observer                : identity_observer, image_observer


Jamie Lennox originally wrote the spec that got the ball rolling, and 
Dolph Matthews just took it to the next level.  It is worth a read.


I think this is the way to go.  There might be details on how to get 
there, but the granularity is about right.
If we go with that approach, we might want to rethink how we 
enforce policy.  Specifically, I think we should split the policy 
enforcement up into two stages:


1.  Role check.  This only needs to know the service and the api 
resource.  As such, it could happen in middleware.


2. Scope check:  for user or project ownership.  This happens in the 
code where it is currently called.  Often, an object needs to be fetched 
from the database


The scope check is an engineering decision:  Nova developers need to be 
able to say where to find the scope on the virtual machine, Cinder 
developers on the volume objects.
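
To make that split concrete, here is a minimal, self-contained sketch of the
two stages - a role check that needs only the service, the API resource and
the token's roles (so it could live in middleware), and a scope check that
runs later, once the object has been fetched. All names are illustrative;
this is not the oslo.policy API or any project's actual enforcement code.

    # Minimal sketch only: illustrative names, not oslo.policy or any
    # project's real enforcement code.

    # Stage 1: role check. Needs only the service type, API resource and the
    # roles carried in the token, so it could run in middleware before any
    # object is loaded from the database.
    ROLE_MAP = {
        ("identity", "user"): "identity_user_manager",
        ("compute", "server"): "compute_server_manager",
    }

    def role_check(service_type, api_resource, token_roles):
        required = ROLE_MAP.get((service_type, api_resource))
        # "admin" and "{service_type}_admin" are treated as supersets.
        allowed = {"admin", service_type + "_admin", required}
        return bool(allowed & set(token_roles))

    # Stage 2: scope check. Runs deep in the service code, once the object
    # has been fetched and its owning project is known.
    def scope_check(token_project_id, obj):
        return obj["project_id"] == token_project_id

    token = {"roles": ["compute_server_manager"], "project_id": "p1"}
    server = {"id": "s1", "project_id": "p1"}

    # A request proceeds only if both stages pass.
    allowed = (role_check("compute", "server", token["roles"])
               and scope_check(token["project_id"], server))
    print(allowed)  # True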


Ideally, the python-*clients, Horizon and other tools would be able to 
determine what capabilities a given token would provide based on the 
roles included in the validation response. If the role check is based on 
the URL as opposed to the current keys in the policy file, the client 
can determine based on the request and the policy file whether the user 
would have any chance of succeeding in a call. As an example, to create 
a user in Keystone, the API is:


POST https://hostname:port/v3/users

Assuming the client has access to the appropriate policy file, it can 
determine that a token with only the role "identity_observer" would not 
have the ability to execute that command.  Horizon could then modify the 
users view to remove the "add user" form.
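
The same idea gives the client-side check described above: with a mapping
keyed on HTTP method and URL pattern rather than on per-project policy keys,
a client holding only the token's role list can predict whether a call has
any chance of succeeding. The mapping format and function below are invented
for illustration; they are not an existing policy-file format.

    import re

    # Minimal sketch: a URL-keyed role mapping that a client such as Horizon
    # could consult locally. Names and patterns are illustrative only.
    URL_POLICY = [
        ("POST", r"^/v3/users$",
         {"admin", "identity_admin", "identity_user_manager"}),
        ("GET", r"^/v3/users$",
         {"admin", "identity_admin", "identity_observer", "observer"}),
    ]

    def may_succeed(method, path, token_roles):
        """True if any role in the token could pass the role check for this call."""
        for m, pattern, allowed_roles in URL_POLICY:
            if m == method and re.match(pattern, path):
                return bool(allowed_roles & set(token_roles))
        return False  # unknown API: assume the call is not available

    # A token carrying only "identity_observer" cannot create a user, so the
    # "add user" form can be hidden without ever calling the server.
    print(may_succeed("POST", "/v3/users", ["identity_observer"]))  # False
    print(may_succeed("GET", "/v3/users", ["identity_observer"]))   # True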


For user management, we want to make role assignments as simple as 
possible and no simpler.  An admin should not have to assign all of the 
individual roles that a user needs.  Instead, assigning the role 
"Member" should imply all of the subordinate roles that a user needs to 
perform the standard workflows.  Expanding out the implied roles can be 
done either when issuing a token, or when evaluating the policy file, or 
both.
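
Expanding implied roles is essentially a transitive closure over an
"implies" map, whichever of those two places does it; a minimal sketch (the
mapping below is made up purely for illustration):

    # Minimal sketch of implied-role expansion; the mapping is invented.
    IMPLIES = {
        "Member": {"compute_server_manager", "identity_observer"},
        "compute_server_manager": {"compute_observer"},
    }

    def expand_roles(assigned):
        """Return the assigned roles plus everything they transitively imply."""
        roles = set(assigned)
        todo = list(assigned)
        while todo:
            for implied in IMPLIES.get(todo.pop(), ()):
                if implied not in roles:
                    roles.add(implied)
                    todo.append(implied)
        return roles

    # Either the token-issuance path or the policy-evaluation path could call
    # this; the result is the same.
    print(sorted(expand_roles({"Member"})))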


I'd like to get the conversation on this started here on the mailing 
list, and lead in to a really productive set of talks at the Austin summit.






[openstack-dev] Upstream University: Signup for Mentors and Mentees

2016-01-30 Thread Mike Perez
Hello all!

I'm trying to gather how much interest there is in doing an Upstream University at
the Austin summit, as we've done at the summits since Paris. Here's an excerpt
from the training guides about the program [1]:

We've designed a training program to help professional developers negotiate
this hurdle. It shows them how to ensure their bug fix or feature is accepted
in the OpenStack project in a minimum amount of time. The educational program
requires students to work on real-life bug fixes or new features during two
days of real-life classes and online mentoring, until the work is accepted by
OpenStack. The live two-day class teaches them to navigate the intricacies of
the project's technical tools and social interactions. In a followup session,
the students benefit from individual online sessions to help them resolve any
remaining problems they might have.

In order for this program to be successful, we need mentors and
students/mentees. If you're available April 23-24, before the summit starts in
Austin, TX, and want to help others by mentoring, or just want to learn more
about contributing to OpenStack, please sign up and check the box that
indicates you're interested in the Upstream University.

https://openstackfoundation.formstack.com/forms/mentoring

Even if you're unable to participate, please consider spreading the word so
this event can be a success!

[1] - 
https://github.com/openstack/training-guides/blob/master/doc/upstream-training/upstream-details.rst#first-day

-- 
Mike Perez



Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-01-30 Thread Dave Walker
On 29 January 2016 at 19:34, gordon chung  wrote:

>
> hmm.. that's unfortunate... anything we need to update so this doesn't
> happen again? or just a matter of lesson learned, let's keep an eye out
> next time?
>
> i guess the question is can users wait (a month?) for next release? i'm
> willing to poll operator list (or any list) to query for demand if
> that's easier on your end? if there's very little interest we can defer
> -- i do have a few patches lined up for next kilo release window so i
> would expect another release.
>
> cheers,
>

I'd like to think that in the new world order of proposing tags through
Gerrit, rather than applying them directly, this could be avoided.

When I applied the tag locally, the current state of the branch did sdist
successfully, but when Jenkins tried to react to the pushed tag it was
non-buildable. This is yet another reason why directly applying tags
should burn.

Thanks

--
Kind Regards,
Dave Walker



Re: [openstack-dev] [oslo] nominating Alexis Lee for oslo-core

2016-01-30 Thread Davanum Srinivas
Sylvain,

Let's agree to disagree. This works for Oslo, so let's leave it at that.

Also, if you wish to continue, *please* switch the subject, as this is
not fair to Alexis's nomination.

-- Dims

On Sat, Jan 30, 2016 at 6:55 AM, Sylvain Bauza  wrote:
>
> Le 30 janv. 2016 09:32, "Julien Danjou"  a écrit :
>>
>> On Fri, Jan 29 2016, Sylvain Bauza wrote:
>>
>> > While my heart is about that, my brain thinks about some regressions
>> > could
>> > be happening because of a +W even for a small change.
>>
>> I suggest you read the git-revert manpage then, you might discover
>> something interesting there. :)
>>
>
> I suggest you look how to revert an RPC API change by thinking of our
> continuous deployers, you might discover something interesting there. :)
>
>> The "shit happened" (e.g. bad thing merged) rate difference between a
>> "permission" policy and a "forgiveness" policy is based on my very
>> precise guessed estimation probably close to +1% in disfavor of
>> "forgiveness". Right.
>>
>
> I would like to understand your 1% estimate. Do you think that only one
> merged change is bad vs. 100 others good ?
> If so, how can you be sure that having an expert could not avoid the problem
> ?
>
>> But at the same time, the velocity rate difference is close to +50% for
>> that same policy. So I've picked my side. :)
>>
>
> I disagree with you. Say that one change will raise an important gate issue
> if merged.
> Of course the change looks good. It's perfectly acceptable from a python
> perspective and Jenkins is happy.
> Unfortunately, merging that change would create lots of problems because it
> would wedge all the service projects CIs because that would be a behavioral
> change that wouldn't have a backwards compatibility.
>
> If we have your forgiveness policy, it could have this change merged
> earlier, sure. But wouldn't you think that all the respective service
> projects velocities would be impacted by far more than this single change ?
>
> -Sylvain
>
>> --
>> Julien Danjou
>> ;; Free Software hacker
>> ;; https://julien.danjou.info
>
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Neutron][LBaaS][Octavia] Using nova interface extension instead of networks extension

2016-01-30 Thread Brandon Logan
Yeah our public cloud does not support that call.  We actually have a
different endpoint that is almost just like the os-interfaces one!
Except the openstack nova client doesn't know about it, of course. If,
for the time being, we can temporarily support the os-networks way as a
fallback method when the os-interfaces one fails, then I think that'd be
best.

Thanks,
Brandon

On Fri, 2016-01-29 at 23:37 +, Eichberger, German wrote:
> All,
> 
> In a recent patch [1] Bharath and I proposed to replace the call to the nova 
> os-networks extension with a call to the nova-interface extension. Apparently 
> os-networks is geared towards nova networks and us being neutron I see no 
> reason to continue to support it. I have taken to the ML to gather feedback 
> if there are cloud operators which don’t have/won't  the nova interface 
> extension enabled and need us to support os-networks in Mitaka and beyond.
> 
> Thanks,
> German
> 
> [1] https://review.openstack.org/#/c/273733/4



Re: [openstack-dev] [Horizon] Recent integration tests failures

2016-01-30 Thread Timur Sufiev
Problematic Selenium versions have been successfully excluded from Horizon's
test-requirements. If you are still experiencing the error described above,
rebase your patch onto the latest master.
On Fri, 29 Jan 2016 at 12:36, Itxaka Serrano Garcia 
wrote:

> Can confirm, had the same issue locally, was fixed after a downgrade to
> selenium 2.48.
>
>
> Good catch!
>
> Itxaka
>
> On 01/28/2016 10:08 PM, Timur Sufiev wrote:
> > According to the results at
> > https://review.openstack.org/#/c/273697/1 capping Selenium to be not
> > greater than 2.49 fixes broken tests. Patch to global-requirements is
> > here: https://review.openstack.org/#/c/273750/
> >
> > On Thu, Jan 28, 2016 at 9:22 PM Timur Sufiev  > > wrote:
> >
> > Hello, Horizoneers
> >
> > You may have noticed recent integration tests failures seemingly
> > unrelated to you patches, with a stacktrace like:
> > http://paste2.org/2Hk9138U I've already filed a bug for that,
> > https://bugs.launchpad.net/horizon/+bug/1539197 Appears to be a
> > Selenium issue, currently investigating it.
> >
> >
> >
> >
> >
>


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/trove failed

2016-01-30 Thread Amrith Kumar
This (likely) relates to a change made on master. I'm looking into it.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140| GPG: 0x5e48849a9d21a29b

On 01/30/2016 01:12 AM, A mailing list for the OpenStack Stable Branch
test reports. wrote:
> Build failed.
> 
> - periodic-trove-docs-kilo 
> http://logs.openstack.org/periodic-stable/periodic-trove-docs-kilo/d1d4d9c/ : 
> SUCCESS in 2m 34s
> - periodic-trove-python27-kilo 
> http://logs.openstack.org/periodic-stable/periodic-trove-python27-kilo/32e356c/
>  : SUCCESS in 3m 45s
> - periodic-trove-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-trove-docs-liberty/421563f/
>  : SUCCESS in 2m 21s
> - periodic-trove-python27-liberty 
> http://logs.openstack.org/periodic-stable/periodic-trove-python27-liberty/5399888/
>  : FAILURE in 3m 55s
> 
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
> 





Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-01-30 Thread Dave Walker
On 29 January 2016 at 20:36, Jeremy Stanley  wrote:
> On 2016-01-29 19:34:01 + (+), gordon chung wrote:
>> hmm.. that's unfortunate... anything we need to update so this doesn't
>> happen again? or just a matter of lesson learned, let's keep an eye out
>> next time?
>
> Well, I backported the downloadcache removal to the stable/kilo
> branch after discovering this issue, and while that's too late to
> solve it for 2015.1.3 it will at least no longer prevent a 2015.1.4
> tarball from being built.
>
>> i guess the question is can users wait (a month?) for next release? i'm
>> willing to poll operator list (or any list) to query for demand if
>> that's easier on your end? if there's very little interest we can defer
>> -- i do have a few patches lined up for next kilo release window so i
>> would expect another release.
>
> I'm perfectly okay uploading a tarball I or someone else builds for
> this, as long as it's acceptable to leadership from stable branch
> management, Telemetry and the community at large. Our infrastructure
> exists to make things more consistent and convenient, but it's there
> to serve us and so we shouldn't be slaves to it.

Unless anyone else objects, I'd be really happy if you are willing to
scp a handrolled tarball.

I'm happy to help validate its pristine state locally here.

Thanks Jeremy!

--
Kind Regards,
Dave Walker



Re: [openstack-dev] [oslo] nominating Alexis Lee for oslo-core

2016-01-30 Thread Sylvain Bauza
Le 30 janv. 2016 09:32, "Julien Danjou"  a écrit :
>
> On Fri, Jan 29 2016, Sylvain Bauza wrote:
>
> > While my heart is about that, my brain thinks that some regressions
> > could happen because of a +W, even for a small change.
>
> I suggest you read the git-revert manpage then, you might discover
> something interesting there. :)
>

I suggest you look how to revert an RPC API change by thinking of our
continuous deployers, you might discover something interesting there. :)

> The "shit happened" (e.g. bad thing merged) rate difference between a
> "permission" policy and a "forgiveness" policy is based on my very
> precise guessed estimation probably close to +1% in disfavor of
> "forgiveness". Right.
>

I would like to understand your 1% estimate. Do you think that only one
merged change is bad vs. 100 other good ones?
If so, how can you be sure that having an expert could not have avoided the
problem?

> But at the same time, the velocity rate difference is close to +50% for
> that same policy. So I've picked my side. :)
>

I disagree with you. Say that one change will raise an important gate issue
if merged.
Of course the change looks good. It's perfectly acceptable from a Python
perspective and Jenkins is happy.
Unfortunately, merging that change would create lots of problems, because it
would wedge all the service projects' CIs, since it would be a behavioral
change without backwards compatibility.

If we had your forgiveness policy, this change could have been merged
earlier, sure. But wouldn't you think that all the respective service
projects' velocities would be impacted by far more than this single change?

-Sylvain

> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-01-30 Thread Duncan Thomas
+1

He's been doing great work, and is a pleasure to work with.
On 29 Jan 2016 19:05, "Sean McGinnis"  wrote:

> Patrick has been a strong contributor to Cinder over the last few
> releases, both with great code submissions and useful reviews. He also
> participates regularly on IRC helping answer questions and providing
> valuable feedback.
>
> I would like to add Patrick to the core reviewers for Cinder. Per our
> governance process [1], existing core reviewers please respond with any
> feedback within the next five days. If there are no objections, I will
> add Patrick to the group by February 3rd.
>
> Thanks!
>
> Sean (smcginnis)
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
>


Re: [openstack-dev] [cinder] Testing Cinder upgrades - c-bak upgrade

2016-01-30 Thread Duncan Thomas
On 29 Jan 2016 19:37, "Michał Dulko"  wrote:
>

> Resolution on this matter from the Cinder mid-cycle is that we're fine
> as long as we safely fail in case of upgrade conducted in an improper
> order. And it seems we can implement that in a simple way by raising an
> exception from volume.rpcapi when c-vol is pinned to a version too old.
> This means that scalable backup patches aren't blocked by this issue.

Agreed. As long as:
a) there is a correct order to upgrade, with no loss of service

And

b) incorrect ordering results in graceful failure (zero data loss, new
volumes / backups go to error, old backups are in a state where they can be
restored once the upgrade is complete, sensible user error messages where
possible)

If those two conditions are met (and it sounds like they are) then I'm happy
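
For what it's worth, the "fail safely on a wrong upgrade order" behaviour
described above can be as small as a guard on the pinned RPC version before
the new-style backup call is cast. A hedged sketch with invented names
follows; it is not the actual cinder.volume.rpcapi code.

    # Minimal sketch with invented names (ServiceTooOld, backup_volume).

    class ServiceTooOld(Exception):
        """Raised when c-vol is pinned to an RPC version that predates a feature."""

    def _as_tuple(version):
        return tuple(int(part) for part in version.split("."))

    def check_pinned_version(pinned, required):
        if _as_tuple(pinned) < _as_tuple(required):
            raise ServiceTooOld(
                "volume RPC API pinned to %s but %s is required; upgrade "
                "c-vol before using this feature" % (pinned, required))

    def backup_volume(pinned_rpc_version, volume_id):
        # Assume (for illustration) the new-style backup call needs RPC >= 2.1.
        check_pinned_version(pinned_rpc_version, "2.1")
        print("casting backup_volume(%s) to c-vol" % volume_id)

    backup_volume("2.1", "vol-1")   # succeeds
    # backup_volume("1.9", "vol-2") # would raise ServiceTooOld: upgrade c-vol first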


Re: [openstack-dev] [neutron] documenting configuration option segregation between services and agents

2016-01-30 Thread Kevin Benton
Propose it as a devref patch!

On Wed, Jan 27, 2016 at 12:30 PM, Dustin Lundquist 
wrote:

> We should expand services_and_agents devref to describe how and why
> configuration options should be segregated between services and agents. I
> stumbled into this recently while trying to remove a confusing duplicate
> configuration option [1][2][3]. The present separation appears to be
> 'tribal knowledge', and not consistently enforced. So I'll take a shot at
> explaining the status quo as I understand it and hopefully some seasoned
> contributors can fill in the gaps.
>
> =BEGIN PROPOSED DEVREF SECTION=
> Configuration Options
> -
>
> In addition to database access, configuration options are segregated
> between neutron-server and agents. Both services and agents may load the
> main neutron.conf since this file should contain the Oslo message
> configuration for internal Neutron RPCs and may contain host specific
> configuration such as file paths. In addition neutron.conf contains the
> database, keystone and nova credentials and endpoints strictly for use by
> neutron-server.
>
> In addition neutron-server may load a plugin specific configuration file,
> yet the agents should not. As the plugin configuration is primarily site
> wide options and the plugin provides the persistence layer for Neutron,
> agents should be instructed to act upon these values via RPC.
>
> Each individual agent may have its own configuration file. This file
> should be loaded after the main neutron.conf file, so the agent
> configuration takes precedence. The agent specific configuration may
> contain configurations which vary between hosts in a Neutron deployment
> such as the external_network_bridge for a L3 agent. If any agent requires
> access to additional external services beyond the Neutron RPC, those
> endpoints should be defined in the agent specific configuration file (e.g.
> nova metadata for metadata agent).
>
>
> ==END PROPOSED DEVREF SECTION==
>
> Disclaimers: this description is informed my by own experiences reading
> existing documentation and examining example configurations including
> various devstack deployments. I've tried to use RFC style wording: should,
> may, etc.. I'm relatively confused on this subject, and my goal in writing
> this is to obtain some clarity myself and share it with others in the form
> of documentation.
>
>
> [1] https://review.openstack.org/262621
> [2] https://bugs.launchpad.net/neutron/+bug/1523614
> [3] https://review.openstack.org/268153
>
>
>


-- 
Kevin Benton
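
As a small illustration of the precedence rule in the proposed devref above
(Neutron itself uses oslo.config rather than configparser, but the "later
file wins" behaviour is the same; the option values here are invented):

    import configparser

    # Minimal sketch of "the agent-specific file is loaded after neutron.conf
    # and takes precedence"; values are invented for illustration.
    NEUTRON_CONF = "[DEFAULT]\ndebug = False\n"
    L3_AGENT_INI = "[DEFAULT]\ndebug = True\nexternal_network_bridge = br-ex\n"

    cfg = configparser.ConfigParser()
    cfg.read_string(NEUTRON_CONF)   # shared file, loaded first
    cfg.read_string(L3_AGENT_INI)   # agent file, loaded last, so it wins

    print(cfg["DEFAULT"]["debug"])                     # True (agent value)
    print(cfg["DEFAULT"]["external_network_bridge"])   # br-ex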


Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-01-30 Thread Henry Nash

> On 30 Jan 2016, at 21:55, Adam Young  > wrote:
> 
> On 01/30/2016 04:14 PM, Henry Nash wrote:
>> Hi Adam,
>> 
>> Fully support this kind of approach.
>> 
>> I am still concerned over the scope check, since we do have examples of when 
>> there is more than one (target) scope check, e.g.: an API that might operate 
>> on an object that maybe global, domain or project specific - in which case 
>> you need to “match up with scope checks with the object in question”, for 
>> example for a given API:
>> 
>> If cloud admin, allow the API
>> If domain admin and the object is domain or project specific, then allow the 
>> API
>> If project admin and the object is project specific then allow the API
>> 
>> Today we can (and do with keystone) encode this in policy rules. I’m not 
>> clear how the “scope check in code” will work in this kind of situation.
> I originally favored an approach that a user would need to get a token scoped 
> to a resource in order to affect change on that resource, and admin users 
> could get tokens scoped to anything,  but I know that makes things harder for 
> Administrators trying to fix broken deployments. So I backed off on that 
> approach.
> 
> I think the right answer would be that the role check would set some value to 
> indicate it was an admin override.  So long as the check does not need the 
> actual object from the database, t can perform whatever logic we like.
> 
> The policy check deep in the code can be as strict or permissive as it 
> desires.  If there is a need to re-check the role for an admin check there, 
> policy can still do so.  A role check that passes at the Middleware level can 
> still be blocked at the in-code level.
> 
> "If domain admin and the object is domain or project specific, then allow the 
> API" is trh tricky one, but I don't think we even have a solution for that 
> now.  Domain1->p1->p2->p3 type hierarchies don't allow operations on p3 with 
> a token scoped to Domain1.

So we do actually support things like that, e.g. (from the domain specific role 
additions):

"identity:some_api": role:admin and project_domain_id:%(target.role.domain_id)s
   (which means I’m project admin and the domain specific role I am going to
manipulate is specific to my domain)

….and although we don’t have this in our standard policy, you could also write

"identity:some_api": role:admin and domain_id:%(target.project.domain_id)s
(which means I’m domain admin and I can do some operation on any project in my
domain)
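
Evaluated in code rather than in a policy file, the three-branch check
discussed earlier in the thread amounts to something like the sketch below;
the attribute names mirror the policy-rule examples and are not keystone's
actual data model.

    # Minimal sketch of the multi-target scope check; names are illustrative.
    def allow_some_api(token, target):
        if "admin" not in token["roles"]:
            return False
        if token.get("is_cloud_admin"):          # cloud admin: always allowed
            return True
        if token.get("domain_id"):               # domain-scoped admin
            return token["domain_id"] in (target.get("domain_id"),
                                          target.get("project_domain_id"))
        if token.get("project_id"):              # project-scoped admin
            return token["project_id"] == target.get("project_id")
        return False

    domain_admin = {"roles": ["admin"], "domain_id": "d1"}
    project_in_d1 = {"project_id": "p1", "project_domain_id": "d1"}
    print(allow_some_api(domain_admin, project_in_d1))  # True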

> 
> I think that in those cases, I would still favor the user getting a token 
> from Keystone scoped to p3, and use the inherited-role-assignment approach.
> 
> 
>> 
>> Henry
>> 
>>> On 30 Jan 2016, at 17:44, Adam Young >> > wrote:
>>> 
>>> I'd like to bring people's attention to a Cross Project spec that has the 
>>> potential to really strengthen the security story for OpenStack in a 
>>> scalable way.
>>> 
>>> "A common policy scenario across all projects" 
>>> https://review.openstack.org/#/c/245629/ 
>>> 
>>> 
>>> The summary version is:
>>> 
>>> Role name or pattern                   : Explanation or example
>>> ---------------------------------------:-----------------------------------------
>>> admin                                  : Overall cloud admin
>>> service                                : for service users only, not real humans
>>> {service_type}_admin                   : identity_admin, compute_admin, network_admin, etc.
>>> {service_type}_{api_resource}_manager  : identity_user_manager, compute_server_manager, network_subnet_manager
>>> observer                               : read only access
>>> {service_type}_observer                : identity_observer, image_observer
>>> 
>>> 
>>> Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
>>> Matthews just took it to the next level.  It is worth a read.
>>> 
>>> I think this is the way to go.  There might be details on how to get there, 
>>> but the granularity is about right.
>>> If we go with that approach, we might want to rethink about how we enforce 
>>> policy.  Specifically, I think we should split the policy enforcement up 
>>> into two stages:
>>> 
>>> 1.  Role check.  This only needs to know the service and the api resource.  
>>> As such, it could happen in middleware.
>>> 
>>> 2. Scope check:  for user or project ownership.  This happens in the code 
>>> where it is currently called.  Often, an object needs to be fetched from 
>>> the database
>>> 
>>> The scope check is an engineering decision:  Nova developers need to be 
>>> able to say where to find the scope on the virtual machine, Cinder 
>>> developers on the volume objects.
>>> 
>>> Ideally, the python-*clients, Horizon and other tools would be able to 
>>> determine what capabilities a given token would provide based on the roles 
>>> included in the validation response.

Re: [openstack-dev] [api] service type vs. project name for use in headers

2016-01-30 Thread Mike Perez
On 10:55 Jan 28, Kevin L. Mitchell wrote:
> On Thu, 2016-01-28 at 11:06 +, Chris Dent wrote:
> > I think it is high time we resolve the question of whether the
> > api-wg guidelines are evaluating existing behaviors in OpenStack and
> > blessing the best or providing aspirational guidelines of practices
> > which are considered best at a more universal level.
> 
> From my historical perspective, the API WG had essentially two phases,
> with only the first phase getting general support at the time: 1. trying
> to document existing practices and recommend best practices; 2.
> establishing rules that all OpenStack APIs must adhere to.  I think the
> first phase is essentially complete at this point, but I think Chris is
> right that it's high time to decide whether the guidelines are normative
> or informative…and my vote would be for normative, and with a focus on
> the API consumer.  After all, an API is useless if it's a pain to use :)

+1

So I see TC members commenting on this thread. I think it would be great to
have the TC members discuss this, but I know they're going to want consensus
from projects.

Projects under the big tent hopefully have some representation with the API
working group at this point [1].

It would be great if said group could actually begin this conversation with
their respective project teams and understand if any of these guidelines [2] are
not being followed, and collect that information to understand where we're at
today.

I'm sure what this thread is raising is just one of the things, but right now
it's a big unknown where we all currently stand, and that will continue to
block us from making a big decision like this.

[1] - http://specs.openstack.org/openstack/api-wg/liaisons.html
[2] - https://specs.openstack.org/openstack/api-wg/

-- 
Mike Perez



Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-30 Thread Fausto Marzi
Hi Preston,
No need to apologize. They are aspect of the same problem.
However, VMs backup is one of the many aspects that we are approaching
here, such as:

- VM backups
- Volumes backups
- Specific applications consistent data backup (i.e. MySQL, Mongo, file
system, etc)
- Provide capabilities to restore data even if keystone and swift are not
available
- Upload data during backup to multiple media storage in parallel
- Web Interface
- Provide capability to synchronize backups for sharded data on multiple
nodes
- Encryption
- File based incremental
- Block based incremental
- Tenant related data backup and restore
- Multi platform OS support (i.e. Linux, BSD, OSX, Windows, iOS, Android,
etc)
- Everything is upstreamed.

This looks like a list of features... and actually it is.

Block-based incremental and some multi-platform OS support aside, all the
mentioned features are provided to date. Most of them have been available since Kilo.

I agree about the common API, room for vendors, and providing different
approaches, but please let me say something (*not referring specifically to
you or Sam or anyone*).

People say all the time that you have to do this and that, but the fact is
that, at the end of the day, the same six engineers (not even full time) have
been working on it for two years, investing professional and personal time in it.

We try to be open, to accept everybody (even the most arrogant), to
implement features for whoever needs them, but the fact is that the only
companies that have invested in it are HP, and to a lesser extent Ericsson
and Orange (apologies if I forgot anyone). We never said no to anyone about
anything, never bowed only to a single company's influence, never blocked a
thing... and never will.

Wouldn't it be better to join efforts, if companies need a backup solution,
and have their own requirements implemented by a common public team, rather
than start creating many tools to solve the same set of problems? How can
competition ever benefit this? How can fragmenting projects ever help to
provide a better solution?

I'm sorry, but unless the TC or many people from the community tell us to
do something different (in which case we'll do it straight away), we'll keep
doing what we are doing, focusing on delivering what we think is the most
advanced solution, according to the resources and time we have.

We need to understand that the most important thing here is to work as a
team and to provide great tools to the community, rather than thinking about
being PTL, maintaining independence just for the sake of it, or focusing only
on what's best for a single company. If this vision is not shared then,
unfortunately, good luck competing; if the vision is shared... let's do
unprecedented things together.

Many thanks,
Fausto


On Sun, Jan 31, 2016 at 1:01 AM, Preston L. Bannister 
wrote:

> Seems to me there are three threads here.
>
> The Freezer folk were given a task, and did the best possible to support
> backup given what OpenStack allowed. To date, OpenStack is simply not very
> good at supporting backup as a service. (Apologies to the Freezer folk if I
> misinterpreted.)
>
> The patches (finally) landing in QEMU in support of incremental backup
> could be the basis for efficient backup services in OpenStack. This is all
> fairly high risk, in the short term. The bits that landed in QEMU 2.4 may
> not be sufficient (there are more QEMU patches trying to land). When put
> into production, we may find faults. For use in OpenStack, we may need
> changes in libvirt, and/or in Nova. (Or *maybe* not if usage for backup
> proves orthogonal.)  The only way to work out the prior is to start. The
> timeline could be months or years.
>
> There is a need for a common API for backup as a service in the cloud.
> Something more than imitating AWS. Allow some room for vendors with
> differing approach.
>
> I see the above as not competing, but aspects of the same problem.
>
>
> ​
>
>
>


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-30 Thread Preston L. Bannister
Seems to me there are three threads here.

The Freezer folk were given a task, and did the best possible to support
backup given what OpenStack allowed. To date, OpenStack is simply not very
good at supporting backup as a service. (Apologies to the Freezer folk if I
misinterpreted.)

The patches (finally) landing in QEMU in support of incremental backup
could be the basis for efficient backup services in OpenStack. This is all
fairly high risk, in the short term. The bits that landed in QEMU 2.4 may
not be sufficient (there are more QEMU patches trying to land). When put
into production, we may find faults. For use in OpenStack, we may need
changes in libvirt and/or in Nova. (Or *maybe* not, if usage for backup
proves orthogonal.) The only way to work that out is to start. The
timeline could be months or years.

There is a need for a common API for backup as a service in the cloud.
Something more than imitating AWS. Allow some room for vendors with
differing approaches.

I see the above as not competing, but aspects of the same problem.


​


[openstack-dev] [Neutron][Dragonflow] IRC Meeting tomorrow (1/2) - 0900 UTC (#openstack-meeting-4)

2016-01-30 Thread Gal Sagie
Hello All,

We will have an IRC meeting tomorrow (Monday, 1/2) at 0900 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Dragonflow

We are going to have a busy meeting with many new specs/designs. I have put
the links to all the specs in the agenda page above; please try to review them
before the meeting.

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/dragonflow/2016/dragonflow.2016-01-25-09.00.html

Please update the agenda if you have any subject you would like to discuss.


Thanks
Gal


[openstack-dev] [keystone][nova][cinder][horizon] Projects acting as a domain at the top of the project hierarchy

2016-01-30 Thread Henry Nash
Hi

One of the things the keystone team was planning to merge ahead of milestone-3 
of Mitaka was “projects acting as a domain”. Up until now, domains in keystone 
have been stored totally separately from projects, even though all projects 
must be owned by a domain (even tenants created via the keystone v2 APIs will 
be owned by a domain, in this case the ‘default’ domain). All projects in a 
project hierarchy are always owned by the same domain. Keystone supports a 
number of duplicate concepts (e.g. domain assignments, domain tokens) similar 
to their project equivalents.

The idea of  “projects acting as a domain” is:

- A domain is actually represented as a super-top-level project (with an 
attribute, “is_domain" set to True), and all previous top level projects in the 
domain specify this special project as their parent in their parent_id 
attribute. A project with is_domain=True is said to be a “project acting as a 
domain”. Such projects cannot have parents - i.e. they are at the top of the 
tree.
- The project_id of a project acting as a domain is the equivalent of the 
domain_id.
- The existing domain APIs are still supported, but behind the scenes actually 
reference the “project acting as a domain”, although in the long run they may be 
deprecated. On migration to Mitaka, the entries of the domain table are moved 
to be projects acting as domains in the project table
- The project API can now be used to create/update/delete a project acting as a 
domain (by setting is_domain=True) just like a regular project - doing the 
equivalent of the domain CRUD APIs.
- Although domain scoped tokens are still supported, you can get a project 
scoped token to the project acting as a domain (and the is_domain attribute 
will be part of the token), so you can write policy rules that can solely 
respond to project tokens. We can eventually deprecate domain tokens, if we 
choose.
- Domain assignments (which will still be supported) really just become project 
assignments placed on the project acting as a domain.
- In terms of the impact on the results of list projects:
— There is no change to listing projects within a domain (since you don’t see 
“the domain” in such a listing today)
— A filter is being added to the list projects API to allow filtering by the 
is_domain attribute - with a default of is_domain=False (i.e. so unless you ask 
for them when listing all projects, you won’t see the projects acting as a 
domain). Hence again, by default, no change to the collection returned today.

The above proposed changes have been integrated into the latest version of the 
Identity API spec: 
https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html

I’ve got a couple of questions about the impact of the above:

1) I already know that if we do exactly as described above, Cinder gets 
confused with how it does quotas today - since suddenly there is a new parent 
above what it thought was a top level project (and the permission rules it encodes 
require the caller to be cloud admin, or admin of the root project of a 
hierarchy); see the sketch after these questions.
2) I’m not sure of the state of nova quotas - and whether they would suffer a 
similar problem.
3) Will Horizon get confused by this at all?
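
To illustrate question 1 with invented names (this is not the actual keystone
or cinder code): quota logic that treats "parent_id is None" as "root of the
hierarchy" now walks one level further, onto the project acting as a domain.

    # Minimal sketch only; Project and find_quota_root are invented names.
    class Project:
        def __init__(self, id, parent_id=None, is_domain=False):
            self.id, self.parent_id, self.is_domain = id, parent_id, is_domain

    projects = {
        "default": Project("default", is_domain=True),    # project acting as a domain
        "top":     Project("top", parent_id="default"),   # was a top-level project
        "child":   Project("child", parent_id="top"),
    }

    def find_quota_root(project_id):
        """Old-style logic: walk up until parent_id is None."""
        p = projects[project_id]
        while p.parent_id is not None:
            p = projects[p.parent_id]
        return p.id

    # Previously this returned "top"; with projects acting as domains it now
    # returns the domain itself, which is what confuses quota code expecting
    # the root of the hierarchy to be a regular project.
    print(find_quota_root("child"))  # default

    def find_quota_root_fixed(project_id):
        """Stop at the first parent that is a project acting as a domain."""
        p = projects[project_id]
        while p.parent_id is not None and not projects[p.parent_id].is_domain:
            p = projects[p.parent_id]
        return p.id

    print(find_quota_root_fixed("child"))  # top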

Depending on the answers to the above, we can go in a couple of directions. The 
Cinder issue looks easy to fix (having had a quick look at the code) - and if 
that were the only issue, then that may be fine. If we think there may be 
problems in multiple services, we could, for Mitaka, still create the projects 
acting as domains, but not set the parent_id of the current top level projects 
to point at the new project acting as a domain - that way those projects acting 
as domains remain isolated from the hierarchy for now (and essentially 
invisible to any calling service). Then as part of Newton we can provide 
patches to those services that need changing, and then wire up the projects 
acting as a domain to their children.

I'm interested in feedback on the questions above.

Henry


Re: [openstack-dev] [oslo] nominating Alexis Lee for oslo-core

2016-01-30 Thread Joshua Harlow

On 01/30/2016 01:25 PM, Julien Danjou wrote:

On Sat, Jan 30 2016, Sylvain Bauza wrote:


I suggest you look how to revert an RPC API change by thinking of our
continuous deployers, you might discover something interesting there.
:)


This is an interesting thought indeed. If you consider every commit to
be releasable so then deploy-able, then there's somehow a blurry line on
what are bugs and not bugs as soon as you merge any code.


This is an interesting point, perhaps worthy of another thread... But 
I've always wondered how this works out for those who are continuously 
deploying from master/head. Now I know openstack is obviously bug-free 
(ha) so it likely never happens, but how is that handled (especially if 
it's a critical bug let *accidently* in by some *expert*) and what is 
the workflow...


-Josh
