[openstack-dev] Replacing Keystone Admin Accounts

2018-03-14 Thread Adam Young
As we attempt to close the gap on Bug 968696, we have to make sure we are
headed forward in a path that won't get us stuck.

It seems that many people use admin-everywhere accounts for many things
they are not really meant for, such as performing operations that should
be scoped to a project, like creating networks in Neutron or block
devices in Cinder.

With the service scoping of role assignments, we have both the opportunity
and responsibility to rework how these operations are authorized.

Back in the time when we were discussing and engineering Hierarchical
Multi-tenancy (HMT) the operators told us that they did not want to have to
rescope tokens in order to provide help for their users.  I remember
getting this both verbally and in writing, although I cannot find the
message now.

If we created basic policy rules that allowed a Nova service account to
list all servers (for example) but not to change those servers without
getting a token scoped to that specific project, would it break a lot of
tooling?
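In oslo.policy check-string syntax, such a split could be sketched roughly like this (illustrative only: the rule names and the `role:service` role are assumptions for the example, not existing nova defaults):

```json
{
    "service_read_only": "role:service",
    "project_member": "role:member and project_id:%(project_id)s",
    "os_compute_api:servers:index": "rule:service_read_only or rule:project_member",
    "os_compute_api:servers:update": "rule:project_member"
}
```

A token carrying the service role could list servers from any scope, but updating one would still demand a token scoped to the owning project.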

The other use case we've found is the need to clean up project-scoped
resources.  Once a project has been deleted in Keystone, it is impossible
to get a project-scoped token to delete the resources in Cinder, Glance,
and so on.  It seems like these operations need to be on a per-system
(service? endpoint?) basis for the foreseeable future.  Is this acceptable?
Are there any alternatives that people would rather see implemented?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Weirdness around domain/project scope in role assignments

2018-03-09 Thread Adam Young
On Fri, Mar 9, 2018 at 2:42 AM, Adrian Turjak 
wrote:

> Sooo to follow up from the discussion last night partly with Lance and
> Adam, I'm still not exactly sure what difference, if any, there is
> between a domain scoped role assignment, and a project scoped role
> assignment. And... It appears stuff breaks when you used both, or either
> actually (more on that further down).
>
> My problem/confusion was why the following exists or is possible:
> http://paste.openstack.org/show/695978/
> The amusing part, I now can't remove the above role assignments. They
> throw a 500:
> http://paste.openstack.org/show/696013/
> The error itself being:
> http://paste.openstack.org/show/695994/





This is a bug.  It looks like the one() call assumes there is only ever
one matching record, and this query matches multiple.

A 500 is never appropriate.
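For context, this is the classic failure mode of SQLAlchemy's `Query.one()`: it raises `MultipleResultsFound` when more than one row matches, and the unhandled exception surfaces as a 500. A pure-Python sketch of the semantics (illustrative only, not keystone's actual query code):

```python
class MultipleResultsFound(Exception):
    """Mirrors sqlalchemy.orm.exc.MultipleResultsFound."""

class NoResultFound(Exception):
    """Mirrors sqlalchemy.orm.exc.NoResultFound."""

def one(rows):
    """Emulate Query.one(): return exactly one row or raise."""
    if not rows:
        raise NoResultFound()
    if len(rows) > 1:
        raise MultipleResultsFound()
    return rows[0]

# A role assigned with both project and domain scope produces two
# matching assignment rows, so one() raises instead of returning.
assignments = [
    {"role_id": "r1", "scope": "project"},
    {"role_id": "r1", "scope": "domain"},
]
try:
    one(assignments)
except MultipleResultsFound:
    print("500: MultipleResultsFound")  # keystone should map this to a 4xx
```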


>
>
> Then lets look at just project scope:
> http://paste.openstack.org/show/696007/
> I can't seem to do 'include_names' on the project scoped role
> assignment, but effective works since it doesn't include the project. I
> have a feeling the error is because keystone isn't including projects
> with is_domain when doing the names mapping.
>

Probably correct. Bug on this, too.




>
> So... going a little further, does domain scope still act like project
> scope in regards to effective roles:
> http://paste.openstack.org/show/695992/
> The answer is yes. But again, this is domain scope, not project scope
> which still results in project scope down the tree. Although here
> 'include_names' works, this time because keystone internally is directly
> checking for is_domain I assume.
>

Interesting.  That might have been "works as designed," with the idea
that a role assigned on a domain as inherited applies to anything
underneath it.  It actually makes sense: since domains can't nest, this
may be intentional syntactic sugar on top of the format I used.




>
> Also worth mentioning that the following works (and maybe shouldn't?):
> http://paste.openstack.org/show/696006/
> Alice has a role on a 'project' that isn't part of her domain. I can't
> add her to a project that isn't in her domain... but I can add her to
> another domain? That surely isn't expected behavior...
>
That is a typo.  You added an additional character at the end of the ID:

86a8b3dc1b8844fd8c2af8dd50cc21386

86a8b3dc1b8844fd8c2af8dd50cc2138





>
> Weird broken stuff aside, I'm still not seeing a difference between
> domain/project role assignment scope on a project that is a domain. Is
> there a difference that I'm missing, and where is such a difference used?
>
> Looking at the blog post Adam linked
> (https://adam.younglogic.com/2018/02/openstack-hmt-cloudforms/), he
> isn't  really making use of domain scope, just project scope on a
> domain, and inheritance down the tree, which is indeed a valid and
> useful case, but again, not domain scope assignment. Although domain
> scope on the same project would probably (as we see above) achieve the
> same result.
>
> Then looking at the policy he linked:
> http://git.openstack.org/cgit/openstack/keystone/tree/etc/
> policy.v3cloudsample.json#n52
> "identity:list_projects": "rule:cloud_admin or
> rule:admin_and_matching_domain_id",
> - "cloud_admin": "role:admin and (is_admin_project:True or
> domain_id:admin_domain_id)",
> - "admin_and_matching_domain_id": "rule:admin_required and
> domain_id:%(domain_id)s",
> -  "admin_required": "role:admin",
>
> I can't exactly see how it also uses domain scope. It still seems to be
> project scope focused.
>
It is subtle.  domain_id:admin_domain_id means that the token has a
domain_id, which means it is a domain-scoped token.
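To make that concrete, here is a toy evaluator (illustrative only, not oslo.policy, and ignoring the is_admin_project half of the rule) showing why `domain_id:admin_domain_id` only ever matches a domain-scoped token: a project-scoped token carries a project_id but no top-level domain_id, so the check fails:

```python
def check_cloud_admin(token, admin_domain_id="default"):
    """Toy version of: role:admin and domain_id:admin_domain_id."""
    return "admin" in token.get("roles", []) and \
        token.get("domain_id") == admin_domain_id

project_token = {"roles": ["admin"], "project_id": "p1"}     # project scoped
domain_token = {"roles": ["admin"], "domain_id": "default"}  # domain scoped

print(check_cloud_admin(project_token))  # False: no domain_id in the token
print(check_cloud_admin(domain_token))   # True
```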






> So my question then is why on the role assignment object do we
> distinguish between a domain/project when it comes to scope when a
> domain IS a project, and clearly things break when you set both.
>
> Can we make it so the following works (a hypothetical example):
> http://paste.openstack.org/show/696010/
> At which point the whole idea of 'domain' scope on a role assignment
> goes away, since it is exactly the same thing as project scope, and also
> the potential database 500 issues go away since... there isn't more
> than one row. We can then start phasing out the domain scope stuff and
> hiding it away unless someone is explicitly still looking for it.
>
> Because in reality, right now I think we only have project scope, and
> system scope. Domain scope == project scope and we should probably make
> that clear because obviously the code base is confused on that matter. :P
>


I'd love it if domains went away and we only had projects.  We'd have to
find a way to implement that such that people using domains today don't
get broken.
We could also add a three-value toggle on the inheritance (none,
children_only, both) to get it down to a single entry.  That would be an
implementation detail that end users would not see.
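A sketch of how that single-row representation might behave (a hypothetical schema for illustration, not anything keystone implements today):

```python
import enum

class Inherited(enum.Enum):
    """Hypothetical three-value inheritance flag on one assignment row."""
    NONE = "none"                    # applies to the target project only
    CHILDREN_ONLY = "children_only"  # today's inherited assignment
    BOTH = "both"                    # target project and its whole subtree

def applies_to(flag, is_target, is_descendant):
    """Would this single assignment row grant the role here?"""
    if is_target:
        return flag in (Inherited.NONE, Inherited.BOTH)
    if is_descendant:
        return flag in (Inherited.CHILDREN_ONLY, Inherited.BOTH)
    return False

print(applies_to(Inherited.BOTH, True, False))           # True
print(applies_to(Inherited.CHILDREN_ONLY, True, False))  # False
```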



Re: [openstack-dev] [security] Security PTG Planning, x-project request for topics.

2018-01-29 Thread Adam Young
Bug 968696 and System Roles.   Needs to be addressed across the Service
catalog.

On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds  wrote:

> Just a reminder as we have not had many uptakes yet..
>
> Are there any projects (new and old) that would like to make use of the
> security SIG for either gaining another perspective on security challenges
> / blueprints etc or for help gaining some cross project collaboration?
>
> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds  wrote:
>
>> Hello All,
>>
>> I am seeking topics for the PTG from all projects, as this will be where
>> we try out our new form of being a SIG.
>>
>> For this PTG, we hope to facilitate more cross project collaboration
>> topics now that we are a SIG, so if your project has a security need /
>> problem / proposal then please do use the security SIG room where a larger
>> audience may be present to help solve problems and gain x-project consensus.
>>
>> Please see our PTG planning pad [0] where I encourage you to add to the
>> topics.
>>
>> [0] https://etherpad.openstack.org/p/security-ptg-rocky
>>
>> --
>> Luke Hinds
>> Security Project PTL
>>
>
>


[openstack-dev] [Keystone] Token Verify Role Check

2016-11-03 Thread Adam Young
There has been a lot of talk about Policy this past summit and release.  
Based on feedback, we've come up with the following spec to address it.



https://review.openstack.org/#/c/391624/


The idea is that we are going to split the role check off from the 
existing policy checks. The role check will happen at token validation 
time.  The existing policy checks will still be executed in the body of 
the code bases where they are now, as they confirm additional attributes 
about the operations and resources.
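A rough sketch of the idea (purely illustrative; the spec above defines the real mechanism): at token-validation time the middleware matches the request against a URL-pattern-to-role mapping, while the finer-grained policy checks stay in the service code:

```python
# Illustrative only: URL pattern -> required role, checked at token
# validation time, before the request body is ever examined.
ROLE_MAP = [
    ("GET", "/v2.1/servers", "reader"),
    ("DELETE", "/v2.1/servers/{id}", "member"),
]

def role_check(method, path_pattern, token_roles):
    """Pass if the token carries the role the URL pattern requires."""
    for m, pattern, required in ROLE_MAP:
        if m == method and pattern == path_pattern:
            return required in token_roles
    return False  # no mapping: deny by default

print(role_check("GET", "/v2.1/servers", ["reader"]))          # True
print(role_check("DELETE", "/v2.1/servers/{id}", ["reader"]))  # False
```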



It is the amalgamation of work by many people, and I've attempted to 
list them all at the bottom.



Comments highly appreciated, either in the review or in this thread.




Re: [openstack-dev] [Magnum][Kuryr][Keystone] Securing services in container orchestration

2016-10-20 Thread Adam Young

On 10/09/2016 10:57 PM, Ton Ngo wrote:


Hi Keystone team,
We have a scenario that involves securing services for container and 
this has
turned out to be rather difficult to solve, so we would like to bring 
to the larger team for

ideas.
Examples of this scenario:
1. Kubernetes cluster:
To support the load balancer and persistent storage features for 
containers, Kubernetes
needs to interface with Neutron and Cinder. This requires the user 
credential to establish
a session and request Openstack services. Currently this is done by 
requiring the
user to manually enter the credential in a Kubernetes config file and 
restarting some

of the Kubernetes services.
2. Swarm cluster:
To support the Swarm networking for container, the Kuryr libnetwork 
agent needs to
interface with the Kuryr driver, so the agent needs a service 
credential to establish

a session with the driver running on some controllers.

The problem is in handling and storing these credential on the user 
VMs in the cluster.


For #1, Magnum deploys the Kubernetes cluster but does not handle the
user credential, so the automation is not complete and the user needs 
to perform
some manual steps. Even this is not desirable since if the cluster is 
shared within
a tenant, the user credential can be exposed to other users. Tokens do
not work well since a token would expire while the service is required
for the life of the cluster.
For #2, storing a Kuryr service credential on the user VM is a 
security exposure

so we are still looking for a solution.

The Magnum and Kuryr teams have been discussing this topic for some time.
We would welcome any suggestion.

Ton Ngo,



Create a service user in a service domain, and grant a trust to the 
service user. This is what Heat does.












Re: [openstack-dev] [Keystone] Project name DB length

2016-10-20 Thread Adam Young

On 09/28/2016 11:06 PM, Adrian Turjak wrote:

Hello Keystone Devs,

Just curious as to the choice to have the project name be only 64
characters:
https://github.com/openstack/keystone/blob/master/keystone/resource/backends/sql.py#L241

Seems short, and an odd choice when the user.name field is 255 characters:
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql_model.py#L216

Is there a good reason for it only being 64 characters, or is this just
something that was done a long time ago and no one thought about it?

Not hugely important, just seemed odd and may prove limiting for
something I'm playing with.

Cheers,
Adrian Turjak




No. Keep them short.  We are working toward a scheme where you can nest
the names like this:



parent/child1/child2


But if you make them too long, that becomes a disaster.  There is a
strict option in the config file that prevents you from making names
with non-URL-safe characters.  Set that option.
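To see why 64 characters is already tight once names nest, a quick back-of-the-envelope check (assuming the nested path joins names with "/"):

```python
def nested_length(names):
    """Length of a 'parent/child1/child2' style nested name."""
    return len("/".join(names))

# Three levels of 20-character names still fit a 64-char column;
# a fourth level overflows it.
names = ["a" * 20, "b" * 20, "c" * 20]
print(nested_length(names))               # 62: fits
print(nested_length(names + ["d" * 20]))  # 83: would not fit
```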





Re: [openstack-dev] [all] indoor climbing break at summit?

2016-10-17 Thread Adam Young

On 10/17/2016 09:53 AM, Chris Dent wrote:


It turns out that summit this year will be just down the road from
Chris Sharma's relatively new indoor climbing gym in Barcelona:

http://www.sharmaclimbingbcn.com/

If the fun, frisson and frustration of summit sessions leaves you with
the energy or need to pull down, maybe we should arrange a visit if we
can find the time. Maybe we can scribble on an etherpad or something.
Most of the pictures are of bouldering but since they rent out
harnesses I guess there must be roped climbing too.

If you're thinking of going apparently they do pre-registration to
speed things up:

http://www.sharmaclimbingbcn.com/en/gym/






I'm in.  What night?



Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-09-23 Thread Adam Young

On 08/11/2016 06:25 AM, Steven Hardy wrote:

On Wed, Aug 10, 2016 at 11:31:29AM -0400, Zane Bitter wrote:

On 09/08/16 21:21, Adam Young wrote:

On 08/09/2016 06:00 PM, Zane Bitter wrote:

In either case a good mechanism might be to use a Heat Software
Deployment via the Heat API directly (i.e. not as part of a stack) to
push changes to the servers. (I say 'push' but it's more a case of
making the data available for os-collect-config to grab it.)

This is the part that interests me most.  The rest, I'll code in python
and we can call either from mistral or from Cron.  What would a stack
like this look like?  Are there comparable examples?

Basically use the "openstack software config create" command to upload a
script and the "openstack software deployment create" command to deploy it
to a server. I don't have an example I can point you at, but the data is in
essentially the same format as the properties of the corresponding Heat
resources.[1][2] Steve Baker would know if we have any more detailed docs.

Actually we wrapped a mistral workflow and CLI interface around this for
operator convenience, so you can just do:

[stack@instack ~]$ cat run_ls.sh
#!/bin/sh
ls /tmp

[stack@instack ~]$ openstack overcloud execute -s overcloud-controller-0 
run_ls.sh

This runs a mistral workflow that creates the heat software config and
software deployment, waits for the deployment to complete, then returns the
result.

Wiring in a periodic mistral workflow which does the same should be
possible, but tbh I've not yet looked into the deferred authentication
method in that case (e.g I assume it uses trusts but I've not tried it
yet).

This is the mistral workflow, it could pretty easily be reused or adapted
for the use-case described I think:

https://github.com/openstack/tripleo-common/blob/master/workbooks/deployment.yaml

Again, thanks for the stellar blogging, Steve.  POC was posted earlier 
this month.


http://adam.younglogic.com/2016/09/fernet-overcloud/

Packing up the tarball on the undercloud is the easy part.  I would like 
to come up with a general approach for securely distributing 
keys/secrets from undercloud to overcloud.  It might make sense to make 
use of Barbican for that in a future release.
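For reference, keystone-style rotation works on a directory of numbered keys: index 0 is the staged key, the highest index is the primary, and everything in between is secondary. A minimal sketch of that staging logic (illustrative, not keystone's actual implementation; keys here are the 32 urlsafe-base64 random bytes Fernet expects):

```python
import base64
import os

def new_key():
    """A Fernet key is 32 random bytes, urlsafe-base64 encoded."""
    return base64.urlsafe_b64encode(os.urandom(32)).decode()

def rotate(keys):
    """keys: dict index -> key. 0 is staged, max index is primary."""
    primary = max(keys)
    keys[primary + 1] = keys.pop(0)  # staged key is promoted to primary
    keys[0] = new_key()              # fresh staged key takes its place
    return keys

keys = {0: new_key(), 1: new_key()}  # 1 is primary, 0 is staged
keys = rotate(keys)
print(sorted(keys))  # [0, 1, 2]: old staged key is now primary at index 2
```

The point of staging is that the new key can be distributed to every node (here, every overcloud controller) before any node starts encrypting with it.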












[openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Adam Young

https://review.openstack.org/#/c/368530/

This change is for Python >2.7 only, as python2.7 already supports the 
latest version of these libraries.  Back in the "just get python3 to 
work" days we cut our losses on Kerberos support, but now it is 
working.  Getting this restriction removed means we don't have to edit 
away the tests for Kerberos in python3.


"The requests-kerberos package was marked as available for only python 
2.6 and python 2.7 because pykerberos did not support python 3. This has 
since been fixed, however we don't directly have a kerberos dependency 
we can increase so just leave this unbound."





Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-09-08 Thread Adam Young

On 09/01/2016 08:48 PM, Michael Still wrote:
On Thu, Sep 1, 2016 at 11:58 AM, Adam Young <ayo...@redhat.com> wrote:


On 08/31/2016 07:56 AM, Michael Still wrote:

There is a quick sketch of what a service account might look like
at https://review.openstack.org/#/c/363606/
<https://review.openstack.org/#/c/363606/> -- I need to do some
more fiddling to get the new option group working, but I could do
that if we wanted to try and get this into Newton.


So, I don't think we need it.  I think that doing an identity for
the new node *in order* to register it with an IdP is backwards: 
register it, and use the identity from the IdP via Federation.


Anything authenticated should be done from the metadata server or
from Nova itself, based on the token used to launch the workflow.


I'm not sure we're on the same page here. The flows would be something 
like this:


 - Instance boot request
   - Initiating user token is available, and is passed through to the 
vendordata REST service

   - Metadata _might_ be generated, if the instance is using config drive

 - Metadata request from within the instance (any use case not using 
config drive)
  - No user token, this is just cloud-init running on the instance, 
although it could be other client software too
  - We don't have a token to pass to the vendordata REST service, so 
we currently pass nothing, keystone middleware denies request


So, its those post-boot requests from inside the instance that have me 
concerned.


I guess I was not clear on what you were doing here.  Is this a service 
user for additional calls to the metadata service for post-boot only?  I 
would think that, by then, the instance should have an identity, and 
that identity would be used for calls to the outside world.  But I guess 
this is from nova-metadata to vendordata dynamic?  I thought 
nova-metadata already had a service account?


The chain of identity runs from the user requesting the new VM to Nova to 
the join service.  When the VM kicks off, all the vendordata dynamic 
join service cares about is that the instance is the expected instance, 
and then it will hand down whatever it needs.


Let's say that what we were doing here is provisioning a Keystone user 
for each instance (not suggesting, just showing) and assigning the user 
a role in the project that owns the VM.  All of that work would be done 
by the join service upon notification from Nova that there is a new VM.  
What would then go to the instance would be the new userid and the 
keystone credential (Password?) that the vm can use.  Yes, it would be 
good if the instance were then to change the password.


In general, the join service should run as a specific user, and 
operations performed are not necessarily those limited by the token of 
the user requesting the new VM.  Put another way, just because a user 
can create a VM and thus a host entry in the IdP does not mean that same 
user can directly create a host entry in the IdP.


A user should not be able to call the Join service directly, either.  It 
should only be call-able by Nova.






Michael



--
Rackspace Australia







Re: [openstack-dev] [keystone] new core reviewer (rderose)

2016-09-01 Thread Adam Young

On 09/01/2016 10:44 AM, Steve Martinelli wrote:
I want to welcome Ron De Rose (rderose) to the Keystone core team. In 
a short time Ron has shown a very positive impact. Ron has contributed 
feature work for shadowing LDAP and federated users, as well as 
enhancing password support for SQL users. Implementing these features 
and picking up various bugs along the way has helped Ron to understand 
the keystone code base. As a result he is able to contribute to the 
team with quality code reviews.


Thanks for all your hard work Ron, we sincerely appreciate it.

Steve




I want to state that I (jokingly) opposed Ron's inclusion as core.

Not because he didn't earn it. He very much earned it.

But because Ron is doing amazing work actually writing code for 
Keystone, and I don't know that we can spare him to now review code, too.


Fantastic work, Ron.  Keep it up.



Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/31/2016 07:56 AM, Michael Still wrote:
There is a quick sketch of what a service account might look like at 
https://review.openstack.org/#/c/363606/ -- I need to do some more 
fiddling to get the new option group working, but I could do that if 
we wanted to try and get this into Newton.


So, I don't think we need it.  I think that doing an identity for the 
new node *in order* to register it with an IdP is backwards: register 
it, and use the identity from the IdP via Federation.


Anything authenticated should be done from the metadata server or from 
Nova itself, based on the token used to launch the workflow.




Michael

On Wed, Aug 31, 2016 at 7:54 AM, Matt Riedemann 
<mrie...@linux.vnet.ibm.com> wrote:


On 8/30/2016 4:36 PM, Michael Still wrote:

Sorry for being slow on this one, I've been pulled into some
internal
things at work.

So... Talking to Matt Riedemann just now, it seems like we should
continue to pass through the user authentication details when
we have
them to the plugin. The problem is what to do in the case
where we do
not (which is mostly going to be when the instance itself makes a
metadata request).

I think what you're saying though is that the middleware won't let any
requests through if they have no auth details? Is that correct?

Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young <ayo...@redhat.com> wrote:

    On 08/22/2016 11:11 AM, Rob Crittenden wrote:

        Adam Young wrote:

            On 08/15/2016 05:10 PM, Rob Crittenden wrote:

                Review https://review.openstack.org/#/c/317739/ added a
                new dynamic metadata handler to nova. The basic gist is
                that rather than serving metadata statically, it can be
                done dynamically, so that certain values aren't provided
                until they are needed, mostly for security purposes
                (like credentials to enroll in an AD domain). The
                metadata is configured as URLs to a REST service.

                Very little is passed into the REST call, mostly UUIDs
                of the instance, image, etc. to ensure a stable API.
                What this means though is that the REST service may need
                to make calls into nova or glance to get information,
                like looking up the image metadata in glance.

                Currently the dynamic metadata handler _can_ generate
                auth headers if an authenticated request is made to it,
                but consider that a common use case is fetching metadata
                from within an instance using something like:

                % curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

                This will come into the nova metadata service
                unauthenticated.

                So a few questions:

                1. Is it possible to configure paste (I'm a relative
                newbie) so that both authenticated and unauthenticated
                requests are accepted, such that IF an authenticated
                request comes in, those credentials can be used,
                otherwise fall back to something else?

            Only if they are on different URLs, I think.  It's
            auth_token middleware for all services but Keystone.
            Keystone, the rules are similar, but the implementation is a
            little different.

        Ok. I'm fine with the unauthenticated path if we can just create
        a separate service user for it.

        2. If an unauthentica

Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/30/2016 05:36 PM, Michael Still wrote:
Sorry for being slow on this one, I've been pulled into some internal 
things at work.


So... Talking to Matt Riedemann just now, it seems like we should 
continue to pass through the user authentication details when we have 
them to the plugin. The problem is what to do in the case where we do 
not (which is mostly going to be when the instance itself makes a 
metadata request).


I think what you're saying though is that the middleware won't let any 
requests through if they have no auth details? Is that correct?



Yes, that is correct.


Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young <ayo...@redhat.com> wrote:


On 08/22/2016 11:11 AM, Rob Crittenden wrote:

    Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/ added a new
dynamic metadata handler to nova. The basic gist is that
rather than serving metadata statically, it can be done
dynamically, so
that certain values
aren't provided until they are needed, mostly for
security purposes
(like credentials to enroll in an AD domain). The
metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs
of the
instance, image, etc. to ensure a stable API. What
this means though
is that the REST service may need to make calls into
nova or glance to
get information, like looking up the image metadata in
glance.

Currently the dynamic metadata handler _can_ generate
auth headers if
an authenticated request is made to it, but consider
that a common use
case is fetching metadata from within an instance
using something like:

% curl
http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service
unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative
newbie) so that both authenticated and unauthenticated
requests are accepted, such that IF an authenticated
request comes in, those credentials can be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's
auth_token middleware for all services but Keystone.
Keystone, the rules are similar, but the
implementation is a little different.


Ok. I'm fine with the unauthenticated path if we can
just create a separate service user for it.

2. If an unauthenticated request comes in, how best to
obtain a token
to use? Is it best to create a service user for the
REST services
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to
Keystone, we
could use the X509 Tokenless approach, but if the call
comes from the
new server, you won't have a cert by the time you need to
make the call,
will you?


Not sure which cert you're referring to but yeah, the
metadata service is unauthenticated. The requests can come in
from the instance which has no credentials (via
http://169.254.169.254/).

Shared service users are probably your best bet.  We can
limit the roles
that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information
and nova for instance information. The REST call passes in
various UUIDs for these so they need to be dereferenced. There
is no guarantee that these would be called in all cases but it
is a possibility.

rob


I guess if config_drive is True then this isn't really
a problem as
the metadata will be there in the instance already.

thanks

rob



Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-25 Thread Adam Young

On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/ added a new dynamic
metadata handler to nova. The basic gist is that rather than serving
metadata statically, it can be done dynamically, so that certain values
aren't provided until they are needed, mostly for security purposes
(like credentials to enroll in an AD domain). The metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs of the
instance, image, etc. to ensure a stable API. What this means though
is that the REST service may need to make calls into nova or glance to
get information, like looking up the image metadata in glance.

Currently the dynamic metadata handler _can_ generate auth headers if
an authenticated request is made to it, but consider that a common use
case is fetching metadata from within an instance using something like:

% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) so that both
authenticated and unauthenticated requests are accepted, such that IF
an authenticated request comes in, those credentials can be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's auth_token middleware
for all services but Keystone.  For Keystone, the rules are similar, but the
implementation is a little different.


Ok. I'm fine with the unauthenticated path if, for the service, we can just 
create a separate service user for it.



2. If an unauthenticated request comes in, how best to obtain a token
to use? Is it best to create a service user for the REST services
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to Keystone, we
could use the X509 Tokenless approach, but if the call comes from the
new server, you won't have a cert by the time you need to make the call,
will you?


Not sure which cert you're referring to but yeah, the metadata 
service is unauthenticated. The requests can come in from the instance 
which has no credentials (via http://169.254.169.254/).



Shared service users are probably your best bet.  We can limit the roles
that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information and 
nova for instance information. The REST call passes in various UUIDs 
for these so they need to be dereferenced. There is no guarantee that 
these would be called in all cases but it is a possibility.


rob



I guess if config_drive is True then this isn't really a problem as
the metadata will be there in the instance already.

thanks

rob




__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Sounded like you had this sorted.  True?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cross-Project] [Cinder][Neutron][Cue]

2016-08-18 Thread Adam Young
These changes are necessary so policy files can include the check 
"is_admin_project:True", which allows us to scope what is meant by "Admin".



Use from_environ to load context

Use to_policy_values for enforcing policy

Use context from_environ to load contexts

Use from_dict to load context params


https://review.openstack.org/340206 

https://review.openstack.org/340205 

https://review.openstack.org/340195 

https://review.openstack.org/340194 


Let's please get them merged.  This will improve RBAC for all of the 
services.



It looks like there are a handful for Cue as well:

Use oslo.context's from_environ to create context

Use standard to_policy_values for policy enforcement

Explicitly load context attributes in from_dict


https://review.openstack.org/#/c/345694/

https://review.openstack.org/#/c/345695/

https://review.openstack.org/#/c/345693/


Try to go light on the testing requirements in review feedback, as Jamie 
is working to make this happen across a lot of projects.
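As an illustration of the kind of rule this enables, a policy file could then express "admin means admin of the admin project" — the rule and target names below are made up for the example, not taken from the patches above:

```json
{
    "context_is_admin": "role:admin and is_admin_project:True",
    "admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
    "volume:delete": "rule:admin_or_owner"
}
```

With to_policy_values exposing is_admin_project on the context, an "admin" token only passes context_is_admin when it is scoped to the designated admin project, rather than any project with the admin role.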





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-17 Thread Adam Young

On 08/15/2016 05:10 PM, Rob Crittenden wrote:
Review https://review.openstack.org/#/c/317739/ added a new dynamic 
metadata handler to nova. The basic gist is that rather than serving 
metadata statically, it can be done dynamically, so that certain values 
aren't provided until they are needed, mostly for security purposes 
(like credentials to enroll in an AD domain). The metadata is 
configured as URLs to a REST service.


Very little is passed into the REST call, mostly UUIDs of the 
instance, image, etc. to ensure a stable API. What this means though 
is that the REST service may need to make calls into nova or glance to 
get information, like looking up the image metadata in glance.


Currently the dynamic metadata handler _can_ generate auth headers if 
an authenticated request is made to it, but consider that a common use 
case is fetching metadata from within an instance using something like:


% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) so that both 
authenticated and unauthenticated requests are accepted, such that IF 
an authenticated request comes in, those credentials can be used, 
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's auth_token middleware 
for all services but Keystone.  For Keystone, the rules are similar, but the 
implementation is a little different.


2. If an unauthenticated request comes in, how best to obtain a token 
to use? Is it best to create a service user for the REST services 
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to Keystone, we 
could use the X509 Tokenless approach, but if the call comes from the 
new server, you won't have a cert by the time you need to make the call, 
will you?


Shared service users are probably your best bet.  We can limit the roles 
that they get.  What are these calls you need to make?




I guess if config_drive is True then this isn't really a problem as 
the metadata will be there in the instance already.


thanks

rob
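For concreteness, the kind of dynamic vendordata REST service being discussed can be sketched as a tiny HTTP endpoint: nova POSTs a small JSON document of identifiers and expects JSON metadata back. The field names ("instance-id", "ad-domain") below are illustrative assumptions on my part, not the exact wire format:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class VendordataHandler(BaseHTTPRequestHandler):
    """Toy dynamic-vendordata endpoint: the caller POSTs a JSON body of
    UUIDs; we hand back extra metadata for that instance."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # A real service might call glance/nova/keystone here to
        # dereference the UUIDs before deciding what to return.
        reply = {
            "ad-domain": "example.test",  # hypothetical payload field
            "for-instance": request.get("instance-id", "unknown"),
        }
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve exactly one request on an ephemeral port, then exercise it.
server = HTTPServer(("127.0.0.1", 0), VendordataHandler)
threading.Thread(target=server.handle_request, daemon=True).start()

payload = json.dumps({"instance-id": "abc-123", "project-id": "p1"}).encode()
resp = urlopen(Request("http://127.0.0.1:%d/" % server.server_port,
                       data=payload,
                       headers={"Content-Type": "application/json"}))
reply = json.loads(resp.read())
print(reply["for-instance"])  # -> abc-123
```

The auth question in the thread is about what this service should present when it turns around and queries glance or nova for the referenced UUIDs.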





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] Tripleo HA Federation Proof-of-Concept

2016-08-11 Thread Adam Young

 http://adam.younglogic.com/2016/08/ooo-ha-fed-poc/


It is painful, sloppy, and Mitaka-based.  Have at it, and let's make 
Federation a reality for Newton-based deployments.  Feedback eagerly sought.


Thanks to all the people that helped get me through this.  Won't list 
you all, as it would start to sound like an Oscars acceptance speech.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-10 Thread Adam Young

On 08/09/2016 05:11 PM, Adam Young wrote:
The Fernet token format uses a symmetric key to sign tokens.  In order 
to check the signature, these keys need to be synchronized across all 
of the Keystone servers.



I don't want to pass around naked symmetric keys.  The right way to do 
this is to put them into a PKCS 11 Envelope.  Roughly, this:



1.  Each server generates a keypair and sends the public key to the 
undercloud


2.  undercloud generates a Fernet key

3.  Undercloud puts the Fernet token into a PKCS11 document signed 
with the overcloud node's public key


4.  Undercloud posts the PKCS11 data to metadata

Sorry, PKCS12.  Not 11.



5.  os-*config Node downloads and stores the proper PKCS11 data

6.  Something unpacks the PKCS11 data and puts the key into the 
Fernet key store


That last step needs to make use of the keystone-manage fernet_rotate 
command.



How do we go about making this happen?  The key rotations should be 
scheduled infrequently; let me throw out monthly as a starting point 
for the discussion, although that is probably way too frequent.  How 
do we schedule this?  Is this a new stack that depends on the Keystone 
role?







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-10 Thread Adam Young

On 08/09/2016 09:21 PM, Adam Young wrote:

On 08/09/2016 06:00 PM, Zane Bitter wrote:


In either case a good mechanism might be to use a Heat Software 
Deployment via the Heat API directly (i.e. not as part of a stack) to 
push changes to the servers. (I say 'push' but it's more a case of 
making the data available for os-collect-config to grab it.)


This is the part that interests me most.  The rest, I'll code in 
python and we can call either from mistral or from Cron.  What would a 
stack like this look like?  Are there comparable examples?





So, another aspect to the problem is also that this needs to be done 
initially as part of the overcloud deployment.  If we go Fernet, the 
keys need to be in place when the Keystone servers boot.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Adam Young

On 08/09/2016 06:00 PM, Zane Bitter wrote:


In either case a good mechanism might be to use a Heat Software 
Deployment via the Heat API directly (i.e. not as part of a stack) to 
push changes to the servers. (I say 'push' but it's more a case of 
making the data available for os-collect-config to grab it.)


This is the part that interests me most.  The rest, I'll code in python 
and we can call either from mistral or from Cron.  What would a stack 
like this look like?  Are there comparable examples?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Adam Young
The Fernet token format uses a symmetric key to sign tokens.  In order 
to check the signature, these keys need to be synchronized across all of 
the Keystone servers.



I don't want to pass around naked symmetric keys.  The right way to do 
this is to put them into a PKCS 11 Envelope.  Roughly, this:



1.  Each server generates a keypair and sends the public key to the 
undercloud


2.  undercloud generates a Fernet key

3.  Undercloud puts the Fernet token into a PKCS11 document signed with 
the overcloud node's public key


4.  Undercloud posts the PKCS11 data to metadata

5.  os-*config Node downloads and stores the proper PKCS11 data

6.  Something unpacks the PKCS11 data and puts the key into the Fernet 
key store


That last step needs to make use of the keystone-manage fernet_rotate 
command.



How do we go about making this happen?  The key rotations should be 
scheduled infrequently; let me throw out monthly as a starting point for 
the discussion, although that is probably way too frequent.  How do we 
schedule this?  Is this a new stack that depends on the Keystone role?
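To make step 6 and the rotation mechanics concrete, here is a stdlib-only sketch of the staged/primary key-repository convention that keystone-manage fernet_rotate maintains. The file-naming convention (0 = staged, highest number = primary) is the real one; the code itself is my simplification, not keystone's:

```python
import base64
import os
import tempfile

def new_fernet_key():
    # A Fernet key is just 32 random bytes, urlsafe-base64 encoded (44 chars).
    return base64.urlsafe_b64encode(os.urandom(32))

def rotate(repo, max_active_keys=3):
    """Simplified picture of `keystone-manage fernet_rotate`: file "0"
    holds the staged key, the highest-numbered file is the primary
    (signing) key, and the rest are secondaries kept so that tokens
    signed by recent keys still validate."""
    keys = sorted(int(name) for name in os.listdir(repo))
    primary = max(keys) if keys else 0
    if 0 in keys:
        # Promote the staged key to become the new primary.
        os.rename(os.path.join(repo, "0"), os.path.join(repo, str(primary + 1)))
    # Stage a brand-new key as "0"; it starts signing on the *next* rotation.
    with open(os.path.join(repo, "0"), "wb") as f:
        f.write(new_fernet_key())
    # Trim the oldest secondary keys beyond the retention limit.
    keys = sorted(int(name) for name in os.listdir(repo))
    while len(keys) > max_active_keys:
        os.remove(os.path.join(repo, str(keys.pop(1))))  # never drop "0"

repo = tempfile.mkdtemp()
with open(os.path.join(repo, "0"), "wb") as f:
    f.write(new_fernet_key())
for _ in range(4):
    rotate(repo)
remaining = sorted(int(name) for name in os.listdir(repo))
print(remaining)  # -> [0, 3, 4]
```

Because every node must hold the same repository contents for signatures to validate cluster-wide, it is this directory (or the envelope-wrapped equivalent) that the undercloud needs to distribute on each rotation.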



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-07 Thread Adam Young

On 08/06/2016 08:44 AM, John Dennis wrote:

On 08/05/2016 06:06 PM, Adam Young wrote:

Ah...just noticed the redirect is to :5000, not port :13000 which is
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
  <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
      AllowCreate="true"
      />
</samlp:AuthnRequest>



My guess is HA proxy is not passing on the proper port, and
mod_auth_mellon does not know to rewrite it from 5000 to 13000


You can't change the contents of a SAML AuthnRequest, often they are 
signed. Also, the AssertionConsumerServiceURL's and other URL's in 
SAML messages are validated to assure they match the metadata 
associated with EntityID (issuer). The addresses used inbound and 
outbound have to be correctly handled by the proxy configuration 
without modifying the content of the message being passed on the 
transport.



Got a little further by tweaking HA proxy settings.  Added in

  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1

which tells HA proxy to translate Location headers (used in redirects) 
from http to https.
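For what it's worth, an untested variant of the rsprep line that also rewrites the port in Location headers (so redirects point at the HA Proxy listener instead of the backend) might look like this — offered as a sketch, not a verified fix:

```haproxy
# Sketch only: rewrite scheme *and* port in redirects,
# e.g. http://host:5000/... -> https://host:13000/...
rsprep ^Location:\ https?://([^/:]+)(:[0-9]+)?(.*) Location:\ https://\1:13000\3
```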



As of now, it looks good up until the response comes back from the IdP 
and mod_mellon rejects it.  I think this is due to Mellon issuing a 
request for http://<host>:<port>, but it gets translated through the 
proxy as https://<host>:<port>.



mod_auth_mellon is failing the following check in auth_mellon_handler.c


  url = am_reconstruct_url(r);
  ...
  if (response->parent.Destination) {
      if (strcmp(response->parent.Destination, url)) {
          ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                        "Invalid Destination on Response. Should be: %s",
                        url);
          lasso_login_destroy(login);
          return HTTP_BAD_REQUEST;
      }
  }

It does not spit out the parent.Destination value, but considering I am 
seeing http and not https in the error message, I assume that at least 
the protocol does not match.  Full error message at the bottom.


Assuming the problem is just that the URL is http and not https, I have 
an approach that should work.  I need to test it out, but want to record 
it here, and also get feedback:


I can clone the current 10-keystone_wsgi_main.conf which listens for 
straight http on port 5000.  If I make a file 
11-keystone_wsgi_main.conf  that listens on port 13000 (not on the 
external VIP)  but that enables SSL, I should be able to make HA proxy 
talk to that port and re-encrypt traffic, maintaining the 'https://' 
protocol.



However, I am not certain that Destination means the SP URL.  It seems 
like it should mean the IdP.  Further on in auth_mellon_handler.c


  destination_url = lasso_provider_get_metadata_one(
      provider, "SingleSignOnService HTTP-Redirect");
  if (destination_url == NULL) {
      /* HTTP-Redirect unsupported - try HTTP-POST. */
      http_method = LASSO_HTTP_METHOD_POST;
      destination_url = lasso_provider_get_metadata_one(
          provider, "SingleSignOnService HTTP-POST");
  }

Looking in the metadata, it seems that this value should be:

<SingleSignOnService ... Location="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"/>


So maybe something has rewritten the value used as the url ?


Here is the full error message


Invalid Destination on Response. Should be: 
http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse, 
referer: 
https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml?SAMLRequest=nZJba%2BMwEEb%2FitG7I%2BXi1Igk4OYCge5S0m4f%2BlKEM2lFLcmrGWc3%2F35HDu22D22hIDCMZ%2FTpHGmGxjWtrjp68jv43QFS9tc1HnX%2FYy666HUwaFF74wA11fqm%2BnGlRwOl2xgo1KERb0Y%2BnzCIEMkGL7Ltai4e1LoYq%2FFoXapJWU2GhSouN5vhelpNyqIcX2xEdgcRuX8ueJyHEDvYeiTjiUtqOM1VmavprRppXkVxL7IVM1hvqJ96ImpRS2n34MnSaWBOofOP%2BR6aJqfhhVID4n5pWICMYBqHMrSQEupn%2BQIoE5nIlsEjpODPEOtzk667GPmbW9c2trYksk2INfSm5%2BJgGoTEc81K7BFeK9WLoRTWOYg3EI%2B2hl%2B7q%2F80ryf8AEcXSil5HEvH9eBlG5B2gG06mljMEo3uVcbFd7d0QGZvyMzk291m5%2Bf0k61sV9eBwU8J25kvpKWK3eeHvlVTNB4ty2MdHPZnyRdDrIhiB0IuzpHvH%2B3iHw%3D%3D=http%3A%2F%2Fopenstack.ayoung-dell-t1700.test%3A5000%2Fv3%2Fauth%2FOS-FEDERATION%2Fwebsso%2Fsaml2%3Forigin%3Dhttp%3A%2F%2Fopenstack.ayoung-dell-t1700.test%2Fdashboard%2Fauth%2Fwebsso%2F=http%3A%2F%2Fwww.w3.org%2F2000%2F09%2Fxmldsig%23rsa-sha1=oJzAwE7ma3m0gZtO%2FvPQKCnk18u4OsjKcRQ3wiDu7txUGiPr4Cc9XIzKIGwzSGPSaWi8j1qbN76XwdNICOk! 

HI5RsTdeS2Yeufw5Q5Ahol5cJHGEQO

Re: [openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Adam Young

On 08/06/2016 03:20 PM, Dan Prince wrote:

On Sat, 2016-08-06 at 13:21 -0400, Adam Young wrote:

As I try to debug Federation problems, I am often finding I have to
check three nodes to see where the actual request was processed.
However, if I shut down two of the controller nodes in Nova, the whole
thing just fails.


So, while that in itself is a problem, what I would like to be able
to do in development is have HA running, but with only a single
controller node answering requests.  How do I do that?

I have a $HOME/custom.yaml environment file which contains this:

parameters:
   ControllerCount: 1

If you do something similar and then include that environment in your
--environments list you should end up with just a single controller.

Do this in addition to using environments/puppet-pacemaker.yaml and you
should have "single node HA" (aka pacemaker on a single controller).

Cool, will try it.

I kind of am still doing trial and error on a cluster where I've made 
changes on the node.  Not ready to tear them down.  But the fact that 
killing a node means that web calls fail means that HA Proxy is not 
sufficient to give us HA.  Is there something I can do with the load 
balancer if I shut down two of the nodes to keep things 
running?





Dan







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Adam Young
As I try to debug Federation problems, I am often finding I have to check 
three nodes to see where the actual request was processed.  However, if 
I shut down two of the controller nodes in Nova, the whole thing just fails.



So, while that in itself is a problem, what I would like to be able to 
do in development is have HA running, but with only a single controller 
node answering requests.  How do I do that?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 06:40 PM, Fox, Kevin M wrote:


*From:* Adam Young [ayo...@redhat.com]
*Sent:* Friday, August 05, 2016 3:06 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [keystone][tripleo] Federation, 
mod_mellon, and HA Proxy


On 08/05/2016 04:54 PM, Adam Young wrote:

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to 
tell it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt 
/etc/pki/tls/private/overcloud_endpoint.pem

  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ 
ssl_fc }

  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 
2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is 
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
  <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
      AllowCreate="true"
      />
</samlp:AuthnRequest>


My guess is HA proxy is not passing on the proper port, and 
mod_auth_mellon does not know to rewrite it from 5000 to 13000




"rewriting is more expensive than getting the web server to return the 
right prefix. Is that an option? Usually it's just a bug that needs a 
minor patch to fix.


Thanks,
Kevin"


Well, I think in this case, the expense is not something to worry 
about:  SAML is way more chatty than normal traffic, and the rewrite 
won't be more than a drop in the bucket.


I think the right thing to do is to get HA proxy to pass on the correct 
URL, including the port, to the backend, but I don't think it is done in 
the rsprep directive.  As John Dennis pointed out to me, the 
mod_auth_mellon code uses the Apache ap_construct_url(r->pool, 
cfg->endpoint_path, r) call, where r is the current request record.  And 
that has to be passed from HA proxy to Apache.


HA proxy is terminating SSL, and then calling Apache via


server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
and two others.  Everything appears to be properly translated except the 
port.
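One avenue that may be worth trying (an untested sketch on my part, not a verified fix for this deployment): ap_construct_url() builds self-referential URLs from the server record, and with UseCanonicalName on it honours ServerName, so Apache can be told the external scheme and port directly:

```apache
# Hypothetical vhost fragment: advertise the canonical external URL so
# mod_auth_mellon reconstructs https://...:13000 instead of http://...:5000.
ServerName https://openstack.ayoung-dell-t1700.test:13000
UseCanonicalName On
```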











__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 04:54 PM, Adam Young wrote:

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to tell 
it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt 
/etc/pki/tls/private/overcloud_endpoint.pem

  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ 
ssl_fc }

  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 
2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is 
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
    >
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
  <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
      AllowCreate="true"
      />
</samlp:AuthnRequest>




My guess is HA proxy is not passing on the proper port, and 
mod_auth_mellon does not know to rewrite it from 5000 to 13000









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young

On 08/05/2016 04:52 PM, Adam Young wrote:
Today I discovered that we need to modify the HA proxy config to tell 
it to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone 
section looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt 
/etc/pki/tls/private/overcloud_endpoint.pem

  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 
2000 rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 
2000 rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was 
interrupted while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, 
and Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to 
get things working this way first, but am willing to make whatever 
changes are needed to get SAML and Federation working soonest.





Ah...just noticed the redirect is to :5000, not port :13000 which is the 
HA Proxy port.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-05 Thread Adam Young
Today I discovered that we need to modify the HA proxy config to tell it 
to rewrite redirects.  Otherwise, I get a link to


http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse


Which should be https, not http.


I mimicked the lines in the horizon config so that the keystone section 
looks like this:



listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt 
/etc/pki/tls/private/overcloud_endpoint.pem

  bind 172.16.2.5:5000 transparent
  mode http
  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 
rise 2
  server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 
rise 2
  server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 
rise 2


And.. it seemed to work the first time, but not the second.  Now I get

"Secure Connection Failed

The connection to openstack.ayoung-dell-t1700.test:5000 was interrupted 
while the page was loading."


Guessing the first success was actually a transient error.

So it looks like my change was necessary but not sufficient.

This is needed to make mod_auth_mellon work when loaded into Apache, and 
Apache is running behind  HA proxy (Tripleo setup).



There is no SSL setup inside the Keystone server, it is just doing 
straight HTTP.  While I'd like to change this long term, I'd like to get 
things working this way first, but am willing to make whatever changes 
are needed to get SAML and Federation working soonest.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-29 Thread Adam Young

On 07/28/2016 10:05 PM, Tim Hinrichs wrote:


I've never worked on the authentication details, so this may be off 
track, but that error message indicates the failure is happening 
inside Congress's oslo_policy.


Error message shows up here as a Python exception class.
https://github.com/openstack/congress/blob/master/congress/exception.py#L135

That exception class is instantiated only here
https://github.com/openstack/congress/blob/master/congress/common/policy.py#L93 



The code that uses the instantiated exception class (which actually 
does the enforcement):

https://github.com/openstack/congress/blob/7c2f4132b9693e7969e704cb9914963274c2c4a1/congress/api/webservice.py#L373

I don't remember off the top of my head how the default policy.json 
gets created, but I'm sure the admin credentials will work.  You might 
want to ensure you're logged in as the admin with...


$ source openrc admin admin



In most projects, policy is enforced against an oslo-context object.  
That should abstract away the differences between v2 and v3 Keystone 
token formats.


Make sure that the policy is not dying on something specific to one 
version or the other.  Post the actual rule executed, please.





Tim

On Thu, Jul 28, 2016 at 1:56 PM, Aimee Ukasick wrote:


I've gotten a little farther, which leads me to my next question -
does the API support v3 token auth?
or am I making mistakes in my manual testing?

using the CLI on local devstack
1) did not modify openrc
2) source openrc
3) openstack token issue
4)  openstack congress datasource list --os-auth-type v3token
--os-token ad74073300e244768e08e0d4cd73fbbd --os-auth-url
http://192.168.56.101:5000/v3
--os-project-id da9a9ba573c34c18a037fd04812d81bc   --debug --verbose

When the python-congressclient calls the API, this is the response:
RESP BODY: Policy doesn't allow get_v1 to be performed.
Request returned failure status: 403

Log: http://paste.openstack.org/show/543445/

So then I called the API directly:
curl -X POST -H "Content-Type: application/json" -H
"Cache-Control: no-cache"
-d '{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "demo",
  "domain": { "id": "default" },
  "password": "secret"
}
  }
}
  }
}' "http://192.168.56.101:5000/v3/auth/tokens"

Response:
{
  "token": {
"issued_at": "2016-07-28T20:43:44.258137Z",
"audit_ids": [
  "N6tnfbI5QvyRT4xEB7pGCA"
],
"methods": [
  "password"
],
"expires_at": "2016-07-28T21:43:44.258112Z",
"user": {
  "domain": {
"id": "default",
"name": "Default"
  },
  "id": "f2bf5189bbd7466cbecc1b1315cff3b5",
  "name": "demo"
}
  }
}

Then:
curl -X GET -H "X-Auth-Token: f2bf5189bbd7466cbecc1b1315cff3b5" -H
"Cache-Control: no-cache" "http://192.168.56.101:1789/v1/data-sources"

Response:
{
  "error": {
"message": "The request you have made requires authentication.",
"code": 401,
"title": "Unauthorized"
  }
}
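One thing worth double-checking in the curl exchange above: the value passed as X-Auth-Token (f2bf5189bbd7466cbecc1b1315cff3b5) is the *user id* from the JSON body, not a token. In the Keystone v3 API the token itself is returned in the X-Subject-Token response header, which curl only shows with -i (or -D -). A minimal sketch of pulling it out of a captured response (the header text here is canned for illustration):

```python
# Extract the v3 token from the X-Subject-Token response header.
# With curl you would capture headers via:  curl -si -X POST .../v3/auth/tokens ...
from email.parser import Parser

raw_headers = """X-Subject-Token: gAAAAAB-example-token
Vary: X-Auth-Token
Content-Type: application/json
"""

headers = Parser().parsestr(raw_headers)   # parse the header block
token = headers["X-Subject-Token"]
print("X-Auth-Token to use:", token)
```

Against a real deployment, that header value is what goes into X-Auth-Token on subsequent requests such as the data-sources call above.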

I'm feeling pretty stupid at the moment, like I've missed
something obvious.
Any ideas?

Thanks!

aimee

On Fri, Jul 22, 2016 at 9:21 PM, Anusha Ramineni wrote:
> Hi Aimee,
>
> Thanks for the investigation.
>
> I remember testing congress client with v3 password-based authentication,
> which worked fine, but never tested with token-based.
>
> Please go ahead and fix it, if you think there is any issue.
>
>
> On 22-Jul-2016 9:38 PM, "Aimee Ukasick" wrote:
>>
>> All - I made the change to the auth_url that Anusha suggested.
>> Same problem as before " Cannot authorize API client"
>> 2016-07-22 14:13:50.835861 * calling policies_list =
>> client.list_policy()*
>> 2016-07-22 14:13:50.836062 Unable to get policies list: Cannot
>> authorize API client.
>>
>> I used the token from the log output to query the Congress API with
>> the keystone v3 token - no issues.
>> curl -X GET -H "X-Auth-Token: 18ec54ac811b49aa8265c3d535ba0095" -H
>> "Cache-Control: no-cache" "http://192.168.56.103:1789/v1/policies"
>>
>> So I really think the problem is that the python-congressclient
>> doesn't support identity v3.
>> I thought it did, but then I came across this:
>> "support keystone v3 api and session based authentication "
>> https://bugs.launchpad.net/python-congressclient/+bug/1564361
>> This is currently assigned to Anusha.
>> I'd like to start 

Re: [openstack-dev] [tripleo] Modifying just a few values on overcloud redeploy

2016-07-27 Thread Adam Young

On 07/27/2016 06:04 AM, Steven Hardy wrote:

On Tue, Jul 26, 2016 at 05:23:21PM -0400, Adam Young wrote:

I worked through how to do a complete clone of the templates to do a
deploy and change a couple values here:

http://adam.younglogic.com/2016/06/custom-overcloud-deploys/

However, all I want to do is to set two config options in Keystone.  Is
there a simple way to just modify the two values below?  Ideally, just
making a single env file and passing it via openstack overcloud deploy -e
somehow.

'identity/domain_specific_drivers_enabled': value => 'True';

'identity/domain_configurations_from_database': value => 'True';

Yes, the best way to do this is to pass a hieradata override, as documented
here:

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html

First step is to look at the puppet module that manages that configuration,
in this case I assume it's puppet-keystone:

https://github.com/openstack/puppet-keystone/tree/master/manifests

Some grepping shows that domain_specific_drivers_enabled is configured
here:

https://github.com/openstack/puppet-keystone/blob/master/manifests/init.pp#L1124..L1155

So working back from those variables, "using_domain_config" and
"domain_config_directory", you'd create a yaml file that looks like:

parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true
    keystone::domain_config_directory: /path/to/config

However, it seems that you want to configure domain_specific_drivers_enabled
*without* configuring domain_config_directory, so that it comes from the
database?

In that case, puppet has a "passthrough" interface you can use (this is the
same for all openstack puppet modules AFAIK):

https://github.com/openstack/puppet-keystone/blob/master/manifests/config.pp

Environment (referred to as controller_extra.yaml below) file looks like:

parameter_defaults:
  ControllerExtraConfig:
    keystone::config::keystone_config:
      identity/domain_specific_drivers_enabled:
        value: true
      identity/domain_configurations_from_database:
        value: true

I'm assuming I can mix these two approaches, so that, if I need

keystone::using_domain_config: true



as well it would look like this:

parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_specific_drivers_enabled:
        value: true
      identity/domain_configurations_from_database:
        value: true


And over time, if support for these values is added to the templates and 
we start seeing errors, we can just change from the latter approach to 
the one you posted earlier?

Note the somewhat idiosyncratic syntax: you pass the value via a
"value: foo" map, not directly to the configuration key (don't ask me why!)

Then do openstack overcloud deploy --templates /path/to/templates -e 
controller_extra.yaml

The one gotcha here is if puppet keystone later adds an explicit interface
which conflicts with this, e.g a domain_specific_drivers_enabled variable
in the above referenced init.pp, you will get a duplicate definition error
(because you can't define the same thing twice in the puppet catalog).

This means that long-term use of the generic keystone::config::keystone_config
interface can be fragile, so it's best to add an explicit e.g
keystone::domain_specific_drivers_enabled interface if this is a long-term
requirement.

This is probably something we should add to our docs, I'll look at doing
that.

Hope that helps,

Steve







[openstack-dev] [tripleo] Modifying just a few values on overcloud redeploy

2016-07-26 Thread Adam Young
I worked through how to do a complete clone of the templates to do a 
deploy and change a couple values here:


http://adam.younglogic.com/2016/06/custom-overcloud-deploys/

However, all I want to do is to set two config options in Keystone.  Is 
there a simple way to just modify the two values below?  Ideally, just 
making a single env file and passing it via openstack overcloud deploy 
-e somehow.



'identity/domain_specific_drivers_enabled': value => 'True';

'identity/domain_configurations_from_database': value => 'True';
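For reference, a hedged sketch of what these two settings look like once rendered into keystone.conf (both are real options in Keystone's [identity] section):

```ini
[identity]
domain_specific_drivers_enabled = True
domain_configurations_from_database = True
```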



Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Adam Young

On 06/28/2016 08:33 PM, Steve Martinelli wrote:
I'm cool with the existing keystone repo and adding to docs. If we hit 
a huge amount of content then we can migrate to a new repo. I think 
Adam's main concern with this approach is that we reduce the 
contributors down to folks that know the gerrit workflow.


We don't want a static troubleshooting guide.  We want people to be able 
to ask questions and link them to answers, and have community members add 
their own answers... in short, what we have in "ask.openstack" now, but 
done well and maintained.


It's often not a Keystone problem, but a Nova, Glance, etc. problem. We 
can't stick the answer in a Keystone repo.








On Tue, Jun 28, 2016 at 8:19 PM, Jamie Lennox <jamielen...@gmail.com> wrote:




On 29 June 2016 at 09:49, Steve Martinelli <s.martine...@gmail.com> wrote:

I think we want something a bit more organized.

Morgan tossed the idea of a keystone-docs repo, which could have:

- The FAQ Adam is asking about
- Install guides (moved over from openstack-manuals)
- A spot for all those neat and unofficial blog posts we do
- How-to guides
- etc...

I think it's a neat idea and warrants some discussion. Of
course, we don't want to be the odd project out.


What would be the advantage of a new repo rather than just using
the keystone/docs folder? My concern is that docs/ already gets
stagnant, but a new repo would end up being largely ignored, and at
least theoretically you can update docs/ when the relevant code
changes.


On Tue, Jun 28, 2016 at 6:00 PM, Ian Cordasco <sigmaviru...@gmail.com> wrote:

-----Original Message-----
From: Adam Young <ayo...@redhat.com>
Reply: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Date: June 28, 2016 at 16:47:26
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] Troubleshooting and ask.openstack.org

> Recently, the Keystone team started brainstorming a troubleshooting
> document. While we could eventually put this into the Keystone repo,
> it makes sense to also be gathering troubleshooting ideas from the
> community at large. How do we do this?
>
> I think we've had a long enough run with the ask.openstack.org website
> to determine if it is really useful, and if it needs an update.
>
>
> I know we are getting nuked on the Wiki. What I would like to be able
> to generate is a Frequently Asked Questions (FAQ) page, but as a living
> document.
>
> I think that ask.openstack.org is the right forum for this, but we need
> some more help:
>
> It seems to me that keystone Core should be able to moderate Keystone
> questions on the site. That means that they should be able to remove
> old dead ones, remove things tagged as Keystone that do not apply and so
> on. I would assume the same is true for Nova, Glance, Trove, Mistral
> and all the rest.
>
> We need some better top level interface than just the tags, though.
> Ideally we would have a page where someone lands when troubleshooting
> keystone with a series of questions and links to the discussion pages
> for that question. Like:
>
> I get an error that says "cannot authenticate" what do I do?
>
> What is the engine behind "ask.openstack.org"? Does it have other tools
> we could use?

The engine is linked in the footer: https://askbot.com/

I'm not sure how much of it is reusable, but it claims to be able to do
some of the things I think you're asking for, except it doesn't
explicitly mention deleting comments/questions/etc.

   

Re: [openstack-dev] Troubleshooting and ask.openstack.org

2016-06-30 Thread Adam Young

On 06/28/2016 11:13 PM, Tom Fifield wrote:

Quick answers in-line

On 29/06/16 05:44, Adam Young wrote:

It seems to me that keystone Core should be able to moderate Keystone
questions on the site.  That means that they should be able to remove
old dead ones, remove things tagged as Keystone that do not apply and so
on.  I would assume the same is true for Nova, Glance, Trove, Mistral
and all the rest.


If you send a list of Ask OpenStack usernames to 
community...@openstack.org, happy to give them moderator rights. 
Anyone with karma beyond 200 already has them.


The email bounced.






We need some better top level interface than just the tags, though.
Ideally we would have a page where someone lands when troubleshooting
keystone with a series of questions and links to the discussion pages
for that question.  Like:


I get an error that says "cannot authenticate" what do I do?


Example - something like this link for "Common Upstream Development 
Questions"


https://ask.openstack.org/en/questions/tags:common-upstream

?


What is the engine behind "ask.openstack.org"?  Does it have other tools
we could use?


Askbot - https://github.com/ASKBOT/askbot-devel








Re: [openstack-dev] [Heat][tripleo] Tripleo holding on to old, bad data

2016-06-28 Thread Adam Young

On 06/28/2016 02:58 AM, Pavlo Shchelokovskyy wrote:

Adam,

Not only "available": Nova would also not schedule to Ironic nodes 
which have maintenance==True, regardless of their provisioning state.

That was not set.



Also, you might have orphaned Ironic nodes, where a node is available 
but still has an instance_uuid assigned without an actual instance in 
Nova. These, AFAIK, would also not be scheduled to. To fix it, update 
the node to reset this field:


ironic node-update <node-uuid> remove instance_uuid


Did that as well, but since the system has been rebuilt, it is hard to 
confirm.  If we do it again, I'll double check all these.  Thanks
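The schedulability conditions discussed in this thread can be collected into one predicate. This is a sketch over a plain dict of the Ironic node fields mentioned above (provision_state, power_state, maintenance, instance_uuid), not an actual client call:

```python
def schedulable(node):
    """Return True if an Ironic node (dict of the fields discussed above)
    should be a candidate for Nova scheduling."""
    return (node.get("provision_state") == "available"
            and node.get("power_state") == "power off"
            and not node.get("maintenance", False)
            and node.get("instance_uuid") is None)

# An orphaned node: "available", but still holding an instance_uuid
# for an instance Nova no longer knows about.
orphan = {"provision_state": "available", "power_state": "power off",
          "maintenance": False,
          "instance_uuid": "8f90c961-4609-4c9b-9d62-360a40f88eed"}
clean = dict(orphan, instance_uuid=None)
print(schedulable(orphan), schedulable(clean))  # False True
```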




Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Tue, Jun 28, 2016 at 1:29 AM, Adam Young <ayo...@redhat.com> wrote:


On 06/26/2016 07:00 PM, Steve Baker wrote:

Assuming the stack is deleted and nova is showing no servers,
you likely have ironic nodes which are not in a state which
can be scheduled.

Do an ironic node-list, you want Power State: Off,
Provisioning State: available, Maintenance: False


Yes, we have that.  First thing we checked.  I assume "available"
is the most important part of that?




On 25/06/16 09:27, Adam Young wrote:

A coworker and I have both had trouble recovering from
failed overcloud deploys.  I've wiped out whatever data I
can, but, even with nothing in the Heat Database, doing an

openstack overcloud deploy

seems to be looking for a specific Nova server by UUID:


heat resource-show 93afc25e-1ab2-4773-9949-6906e2f7c115 0

| resource_status_reason | ResourceInError: resources[0].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
| resource_type          | OS::TripleO::Controller


Inside the Nova log I see:


2016-06-24 21:05:06.973 15551 DEBUG nova.api.openstack.wsgi [req-c8a5179c-2adf-45a6-b186-7d7b29cd8f39 bcdfefb36f3ca9a8f3cfa445ab40 ec662f250a85453cb40054f3aff49b58 - - -] Returning 404 to user: Instance 8f90c961-4609-4c9b-9d62-360a40f88eed could not be found. __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070


How can I get the undercloud back to a clean state?





















[openstack-dev] Troubleshooting and ask.openstack.org

2016-06-28 Thread Adam Young
Recently, the Keystone team started brainstorming a troubleshooting 
document.  While we could eventually put this into the Keystone repo, 
it makes sense to also be gathering troubleshooting ideas from the 
community at large.  How do we do this?


I think we've had a long enough run with the ask.openstack.org website 
to determine if it is really useful, and if it needs an update.



I know we are getting nuked on the Wiki.  What I would like to be able 
to generate is a Frequently Asked Questions (FAQ) page, but as a living 
document.


I think that ask.openstack.org is the right forum for this, but we need 
some more help:


It seems to me that keystone Core should be able to moderate Keystone 
questions on the site.  That means that they should be able to remove 
old dead ones, remove things tagged as Keystone that do not apply and so 
on.  I would assume the same is true for Nova, Glance, Trove, Mistral 
and all the rest.


We need some better top level interface than just the tags, though. 
Ideally we would have a page where someone lands when troubleshooting 
keystone with a series of questions and links to the discussion pages 
for that question.  Like:



I get an error that says "cannot authenticate" what do I do?

What is the engine behind "ask.openstack.org"?  Does it have other tools 
we could use?







Re: [openstack-dev] [Heat][tripleo] Tripleo holding on to old, bad data

2016-06-27 Thread Adam Young

On 06/26/2016 07:00 PM, Steve Baker wrote:
Assuming the stack is deleted and nova is showing no servers, you 
likely have ironic nodes which are not in a state which can be scheduled.


Do an ironic node-list, you want Power State: Off, Provisioning State: 
available, Maintenance: False


Yes, we have that.  First thing we checked.  I assume "available" is the 
most important part of that?





On 25/06/16 09:27, Adam Young wrote:
A coworker and I have both had trouble recovering from failed 
overcloud deploys.  I've wiped out whatever data I can, but, even 
with nothing in the Heat Database, doing an


openstack overcloud deploy

seems to be looking for a specific Nova server by UUID:


heat resource-show 93afc25e-1ab2-4773-9949-6906e2f7c115 0

| resource_status_reason | ResourceInError: resources[0].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
| resource_type          | OS::TripleO::Controller


Inside the Nova log I see:


2016-06-24 21:05:06.973 15551 DEBUG nova.api.openstack.wsgi [req-c8a5179c-2adf-45a6-b186-7d7b29cd8f39 bcdfefb36f3ca9a8f3cfa445ab40 ec662f250a85453cb40054f3aff49b58 - - -] Returning 404 to user: Instance 8f90c961-4609-4c9b-9d62-360a40f88eed could not be found. __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070


How can I get the undercloud back to a clean state?












[openstack-dev] [Heat] Tripleo holding on to old, bad data

2016-06-24 Thread Adam Young
A coworker and I have both had trouble recovering from failed overcloud 
deploys.  I've wiped out whatever data I can, but, even with nothing in 
the Heat Database, doing an


openstack overcloud deploy

seems to be looking for a specific Nova server by UUID:


heat resource-show 93afc25e-1ab2-4773-9949-6906e2f7c115 0

| resource_status_reason | ResourceInError: resources[0].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" |
| resource_type          | OS::TripleO::Controller


Inside the Nova log I see:


2016-06-24 21:05:06.973 15551 DEBUG nova.api.openstack.wsgi [req-c8a5179c-2adf-45a6-b186-7d7b29cd8f39 bcdfefb36f3ca9a8f3cfa445ab40 ec662f250a85453cb40054f3aff49b58 - - -] Returning 404 to user: Instance 8f90c961-4609-4c9b-9d62-360a40f88eed could not be found. __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070


How can I get the undercloud back to a clean state?




Re: [openstack-dev] [Tripleo] X509 Management

2016-06-21 Thread Adam Young

On 06/21/2016 02:42 PM, Juan Antonio Osorio wrote:
Adam, this is pretty much the proposal for TLS for the internal 
services (for which you were already added as a reviewer): 
https://review.openstack.org/#/c/282307/
The services in the overcloud fetching their certificates via 
certmonger is actually work in progress, which you could review here: 
https://review.openstack.org/#/q/status:open+topic:bp/tls-via-certmonger
Only thing is that currently this approach assumes FreeIPA, so the 
nodes need to be registered beforehand (and then trigger 
self-enrollment via some hooks).
Also, the spec that I'm pointing to doesn't require the CA to be the 
in undercloud and it could be another node. But hey, if we could 
deploy Anchor on the Undercloud, then that could be used, so we 
wouldn't need another node for this means.



Yes, I was basing my proposal her around your work.  I want there to be 
something guaranteed for Certmonger to talk to, and I realize that Heat 
can probably play that role.


Anchor might also be viable here; I just don't want to force it as a 
dependency.  We are talking about the self-signed use case here, so it 
should be minimal overhead.


Anyway, in each of the service's profiles (the puppet manifests) I'm 
setting up the tracking of the certificates with the certmonger's 
puppet manifest.


BR

On Tue, Jun 21, 2016 at 5:39 PM, Adam Young <ayo...@redhat.com> wrote:


    When deploying the overcloud with TLS, the current "no additional
    technology" approach is to use openssl and self-signed certificates.
    While this works for a proof of concept, it does not make sense if the
    users need to access the resources from remote systems.

It seems to me that the undercloud, as the system of record for
deploying the overcloud, should be responsible for centralizing
the signing of certificates.

    When deploying a service, the puppet module should trigger a getcert
    call, which registers the cert with Certmonger. Certmonger is
    responsible for making sure the CSR gets to the signing authority,
    and fetching the cert.

Certmonger works via helper apps.  While there is currently a
"self signed" helper, this does not do much if two or more systems
need to have the same CA sign their certs.

It would be fairly simple to write a certmonger helper program
that sends a CSR from a controller or compute node to the
undercloud, has the Heat instance on the undercloud validate the
request, and then pass it on to the signing application.

    I'm not really too clear on how callbacks are done from the
    os-collect-config processes to Heat, but I am guessing it is some
    form of REST API that could be reused for this workflow?


I would see this as the lowest level of deployment.  We can make
use of Anchor or Dogtag helper apps already.  This might also
prove a decent middleground for people that need an automated
approach to tie in with a third party CA, where they need some
confirmation from the deployment process that the data in the CSR
is valid and should be signed.





--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com








Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-06-21 Thread Adam Young

On 06/21/2016 08:43 AM, Markus Zoeller wrote:

A reminder that this will happen in ~2 weeks.

Please note that you can spare bug reports if you leave a comment there
which says one of these (case-sensitive flags):
* CONFIRMED FOR: NEWTON
* CONFIRMED FOR: MITAKA
* CONFIRMED FOR: LIBERTY

On 23.05.2016 13:02, Markus Zoeller wrote:

TL;DR: Automatic closing of 185 bug reports which are older than 18
months in the week R-13. Skipping specific bug reports is possible. A
bug report comment explains the reasons.


I'd like to get rid of more clutter in our bug list to make it more
comprehensible by a human being. For this, I'm targeting our ~185 bug
reports which were reported 18 months ago and still aren't in progress.
That's around 37% of open bug reports which aren't in progress. This
post is about *how* and *when* I do it. If you have very strong reasons
to *not* do it, let me hear them.

When

I plan to do it in the week after the non-priority feature freeze.
That's week R-13, at the beginning of July. Until this date you can
comment on bug reports so they get spared from this cleanup (see below).
Beginning from R-13 until R-5 (Newton-3 milestone), we should have
enough time to gain some overview of the rest.

I also think it makes sense to make this a repeated effort, maybe after
each milestone/release or monthly or daily.

How
---
The bug reports which will be affected are:
* in status: [new, confirmed, triaged]
* AND without assignee
* AND created at: > 18 months
A preview of them can be found at [1].

You can spare bug reports if you leave a comment there which says
one of these (case-sensitive flags):
* CONFIRMED FOR: NEWTON
* CONFIRMED FOR: MITAKA
* CONFIRMED FOR: LIBERTY

The expired bug report will have:
* status: won't fix
* assignee: none
* importance: undecided
* a new comment which explains *why* this was done

The comment the expired bug reports will get:
 This is an automated cleanup. This bug report got closed because
 it is older than 18 months and there is no open code change to
 fix this. After this time it is unlikely that the circumstances
 which lead to the observed issue can be reproduced.
 If you can reproduce it, please:
 * reopen the bug report
 * AND leave a comment "CONFIRMED FOR: "
   Only still supported release names are valid.
   valid example: CONFIRMED FOR: LIBERTY
   invalid example: CONFIRMED FOR: KILO
 * AND add the steps to reproduce the issue (if applicable)


Let me know if you think this comment gives enough information how to
handle this situation.
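The selection criteria and spare-flag check described above can be sketched as one predicate (the field names are illustrative, not the Launchpad API; the spare flags are matched case-sensitively, as stated):

```python
from datetime import datetime, timedelta

EXPIRABLE_STATUSES = {"new", "confirmed", "triaged"}
SPARE_FLAGS = ("CONFIRMED FOR: NEWTON",
               "CONFIRMED FOR: MITAKA",
               "CONFIRMED FOR: LIBERTY")

def should_expire(status, assignee, created_at, comments, now=None):
    """True if a bug report matches the automatic-expiry criteria above."""
    now = now or datetime.utcnow()
    if status.lower() not in EXPIRABLE_STATUSES:
        return False
    if assignee is not None:            # assigned reports are kept
        return False
    if now - created_at < timedelta(days=18 * 30):   # ~18 months
        return False
    # Any comment carrying a case-sensitive spare flag keeps the report.
    if any(flag in c for c in comments for flag in SPARE_FLAGS):
        return False
    return True

print(should_expire("New", None, datetime(2014, 1, 1), [],
                    now=datetime(2016, 7, 1)))   # True
```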


References:
[1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired




DON'T TOUCH BUG 968696!




Re: [openstack-dev] [Tripleo] X509 Management

2016-06-21 Thread Adam Young

On 06/21/2016 11:26 AM, John Dennis wrote:

On 06/21/2016 10:55 AM, Ian Cordasco wrote:

-----Original Message-----
From: Adam Young <ayo...@redhat.com>
Reply: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Date: June 21, 2016 at 09:40:39
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>

Subject: [openstack-dev] [Tripleo] X509 Management


When deploying the overcloud with TLS, the current "no additional
technology" approach is to use openssl and self-signed certificates. While
this works for a proof of concept, it does not make sense if the users need
to access the resources from remote systems.

It seems to me that the undercloud, as the system of record for
deploying the overcloud, should be responsible for centralizing the
signing of certificates.

When deploying a service, the puppet module should trigger a getcert call,
which registers the cert with Certmonger. Certmonger is responsible
for making sure the CSR gets to the signing authority, and fetching the
cert.

Certmonger works via helper apps. While there is currently a "self
signed" helper, this does not do much if two or more systems need to
have the same CA sign their certs.

It would be fairly simple to write a certmonger helper program that
sends a CSR from a controller or compute node to the undercloud, has the
Heat instance on the undercloud validate the request, and then pass it
on to the signing application.

I'm not really too clear on how callbacks are done from the
os-collect-config processes to Heat, but I am guessing it is some form
of Rest API that could be reused for this work flow?


I would see this as the lowest level of deployment. We can make use of
Anchor or Dogtag helper apps already. This might also prove a decent
middleground for people that need an automated approach to tie in with a
third party CA, where they need some confirmation from the deployment
process that the data in the CSR is valid and should be signed.


I'm not familiar with TripleO or it's use of puppet, but I would
strongly advocate for Anchor (or DogTag) to be the recommended
solution. OpenStack Ansible has found it a little bit of an annoyance
to generate and distribute self-signed certificates.


Ah, but the idea is that certmonger is a front end to whatever CA you 
choose to use; it provides a consistent interface to a range of CAs, as 
well as providing functionality not present in most CAs, for instance 
the ability to detect when certs need renewal. So the idea would be 
certmonger+Dogtag, certmonger+Anchor, or certmonger+XXX.


Exactly.  This allows the interface from TripleO to be the same 
regardless of the CA.



I am looking for guidance from the TripleO/Heat team on how to structure 
the certmonger helper app.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] X509 Management

2016-06-21 Thread Adam Young
When deploying the overcloud with TLS, the current "no additional 
technology" approach is to use openssl and self-signed certificates.  
While this works for a proof of concept, it does not make sense if the 
users need to access the resources from remote systems.


It seems to me that the undercloud, as the system of record for 
deploying the overcloud, should be responsible for centralizing the 
signing of certificates.


When deploying a service, the puppet module should trigger a getcert 
call, which registers the cert with Certmonger.  Certmonger is 
responsible for making sure the CSR gets to the signing authority, and 
for fetching the cert.


Certmonger works via helper apps.  While there is currently a "self 
signed" helper, this does not do much if two or more systems need to 
have the same CA sign their certs.


It would be fairly simple to write a certmonger helper program that 
sends a CSR from a controller or compute node to the undercloud, has the 
Heat instance on the undercloud validate the request, and then passes it 
on to the signing application.


I'm not really too clear on how callbacks are done from the 
os-collect-config processes to Heat, but I am guessing it is some form 
of REST API that could be reused for this workflow?



I would see this as the lowest level of deployment.  We can make use of 
Anchor or Dogtag helper apps already.  This might also prove a decent 
middle ground for people who need an automated approach to tie in with a 
third-party CA, where they need some confirmation from the deployment 
process that the data in the CSR is valid and should be signed.




Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-19 Thread Adam Young

On 06/16/2016 02:19 AM, Jamie Lennox wrote:

Thanks everyone for your input.

I generally agree that there is something that doesn't quite feel 
right about purely trusting this information to be passed from service 
to service; this is why I was keen for outside input, and I have been 
rethinking the approach.


They really feel like a variation on Trust tokens.

From the service perspective, they are tokens, just not the one the 
user originally requested.


The "reservation" as I see it is an implicit trust created by the user 
requesting the operation on the initial service.


When the service validates the token, it can get back what we might call 
a "reserved token", in keeping with the term reservation above.  That 
token will have a longer lifespan than the one the user originally 
requested, but (likely) fewer roles.


When nova calls glance, and then glance calls Swift, we can again 
transition to different reserved tokens if needs be.
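That chain could be sketched as follows; reservations are only a proposal at this point (https://review.openstack.org/#/c/330329/), so every name and field here is invented for illustration:

```python
import time
from dataclasses import dataclass

# Illustrative sketch of the "reserved token" idea: validating a user's
# token hands the service a longer-lived token restricted to the roles
# the call chain actually needs.  Not an implemented Keystone API.

@dataclass
class Token:
    user_id: str
    project_id: str
    roles: list
    expires_at: float

def reserve(user_token, roles_needed, extra_lifetime=3600.0):
    """Return a longer-lived token carrying only the needed roles."""
    kept = [r for r in user_token.roles if r in roles_needed]
    return Token(user_token.user_id, user_token.project_id, kept,
                 user_token.expires_at + extra_lifetime)

# nova -> glance -> swift could each re-reserve with a narrower role set
user = Token("u1", "p1", ["admin", "member"], time.time() + 300)
for_glance = reserve(user, {"member"})
```

The key property is that the reserved token outlives the user's short-lived token but can never gain roles the user did not present.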







To this end I've proposed reservations (a name that doesn't feel 
right): https://review.openstack.org/#/c/330329/


At a gut-feeling level I'm much happier with the concept. I think it 
will allow us to handle the distinction between user->service and 
service->service communication much better, and it has the added bonus 
of potentially opening up some policy options in future.


Please let me know of any concerns/thoughts on the new approach.

Once again I've only written the proposal part of the spec, as there 
will be a lot of details to figure out if we go forward. It is also 
fairly rough, but it should convey the point.



Thanks

Jamie

On 3 June 2016 at 03:06, Shawn McKinney <smckin...@symas.com 
<mailto:smckin...@symas.com>> wrote:



    > On Jun 2, 2016, at 10:58 AM, Adam Young <ayo...@redhat.com
<mailto:ayo...@redhat.com>> wrote:
>
> Any sensible RBAC setup would support this, but we are not
> using a sensible one, we are using a hand-rolled one. Replacing
> everything with Fortress implies a complete rewrite of what we do
> now.  Nuke it from orbit type stuff.
>
> What I would rather focus on is the splitting of the current
policy into two parts:
>
> 1. Scope check done in code
> 2. Role check done in middleware
>
> Role check should be done based on URL, not on the policy key
> like identity:create_user
>
>
> Then, yes, a Fortress style query could be done, or it could be
done by asking the service itself.

Mostly in agreement.  I prefer to focus on the model (RBAC) rather
than a specific impl like Fortress. That is to say support the
model and allow the impl to remain pluggable.  That way you enable
many vendors to participate in your ecosystem and more important,
one isn’t tied to a specific backend (ldapv3, sql, …)









Re: [openstack-dev] [keystone]trusts with federated users

2016-06-07 Thread Adam Young

On 06/07/2016 10:28 AM, Gyorgy Szombathelyi wrote:

Hi!

As an OIDC user, I tried to play with Heat and Murano recently. They usually fail 
with a trust creation error, noting that keystone cannot find the _member_ 
role while creating the trust.
Hmmm...that should not be the case.  The user in question should have a 
role on the project, but getting it via a group is OK.


I suspect the problem is the Ephemeral nature of Federated users. With 
the Shadow user construct (under construction) there would be something 
to use.


Please file a bug on this and assign it to me (or notify me if you can't 
assign).




Since a federated user does not really have a role in a project, but is a 
member of a group which has the appropriate role(s), I suspect that this will 
never work with Federation?
Or is it a known/general problem with trusts and groups? I cannot really decide 
whether it is a problem on the Heat or the Keystone side; can you give me some 
advice?
If it is not an error in the code, but in my setup, then please forgive me this 
stupid question.

Br,
György







Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-02 Thread Adam Young

On 06/02/2016 07:22 PM, Henry Nash wrote:

Hi

As you know, I have been working on specs that change the way we 
handle the uniqueness of project names in Newton. The goal of this is 
to better support project hierarchies, which as they stand today are 
restrictive in that all project names within a domain must be unique, 
irrespective of where in the hierarchy the project sits (unlike, 
say, the unix directory structure, where a node name only has to be 
unique within its parent). Such a restriction is particularly 
problematic when enterprises start modelling things like test, QA and 
production as branches of a project hierarchy, e.g.:


/mydivision/projectA/dev
/mydivision/projectA/QA
/mydivision/projectA/prod
/mydivision/projectB/dev
/mydivision/projectB/QA
/mydivision/projectB/prod

Obviously the idea of a project name (née tenant) being unique has 
been around since near the beginning of (OpenStack) time, so we must 
be cautious. There are two alternative specs proposed:


1) Relax project name constraints: 
https://review.openstack.org/#/c/310048/

2) Hierarchical project naming: https://review.openstack.org/#/c/318605/

First, here’s what they have in common:

a) They both solve the above problem
b) They both allow an authorization scope to use a path rather than 
just a simple name, hence allowing you to address a project anywhere 
in the hierarchy
c) Neither has any impact if you are NOT using a hierarchy - i.e. if 
you just have a flat layer of projects in a domain, then they have no 
API or semantic impact (since both ensure that a project’s name must 
still be unique within a parent)


Here’s how the differ:

- Relax project name constraints (1), keeps the meaning of the ‘name’ 
attribute of a project to be its node-name in the hierarchy, but 
formally relaxes the uniqueness constraint to say that it only has to 
be unique within its parent. In other words, let’s really model this a 
bit like a unix directory tree.
- Hierarchical project naming (2), formally changes the meaning of the 
‘name’ attribute to include the path to the node as well as the node 
name, and hence ensures that the (new) value of the name attribute 
remains unique.


Whichever approach we choose would only be included in a new 
microversion (3.7) of the Identity API; although some relevant APIs 
can remain unaffected for a client talking 3.6 to a Newton server, not 
all can be. As pointed out by jamielennox, this is a data modelling 
problem - if a Newton server has created multiple projects called 
“dev” in the hierarchy, a 3.6 client trying to scope a token simply to 
“dev” cannot be answered correctly (and it is proposed we would have 
to return an HTTP 409 Conflict error if multiple nodes with the same 
name were detected). This is true for both approaches.


Other comments on the approaches:

- Having a full path as the name seems duplicative with the current 
project entity - since we already return the parent_id (hence 
parent_id + name is, today, sufficient to place a project in the 
hierarchy).


The one thing I like is the ability to specify just the full path for 
the OS_PROJECT_NAME env var, but we could make that a separate 
variable.  Just as DOMAIN_ID + PROJECT_NAME is unique today, 
OS_PROJECT_PATH should be able to fully specify a project 
unambiguously.  I'm not sure which would have a larger impact on users.
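Under approach (1), a path-scoped token request might look like the sketch below. Note that a path-valued project name is the proposed behaviour, not something a 3.6 server accepts today, and the user and domain names are placeholders:

```python
# Sketch of a v3 token request scoped to a project by its full path,
# as it might look if the "relax name constraints" approach landed.
def scoped_auth_body(user, password, user_domain,
                     project_path, project_domain):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": user,
                    "domain": {"name": user_domain},
                    "password": password,
                }},
            },
            "scope": {"project": {
                # the whole path, unique within the domain
                "name": project_path,
                "domain": {"name": project_domain},
            }},
        }
    }

body = scoped_auth_body("alice", "secret", "Default",
                        "mydivision/projectA/dev", "Default")
```

This is also the shape an OS_PROJECT_PATH environment variable would feed into.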



- In the past, we have been concerned about the issue of what we do if 
there is a project further up the tree that we do not have any roles 
on. In such cases, APIs like list project parents will not display 
anything other than the project ID for such projects. In the case of 
making the name the full path, we would be effectively exposing the 
name of all projects above us, irrespective of whether we had roles on 
them. Maybe this is OK, maybe it isn’t.


I think it is OK.  If this info needs to be hidden from a user, the 
project should probably be in a different domain.


- While making the name the path keeps it unique, this is fine if 
clients blindly use this attribute to plug back into another API to 
call. However if, for example, you are Horizon and are displaying them 
in a UI then you need to start breaking down the path into its 
components, where you don’t today.
- One area where names as the hierarchical path DOES look right is 
calling the /auth/projects API - where what the caller wants is a list 
of projects they can scope to - so you WANT this to be the path you 
can put in an auth request.


Given that neither can fully protect a 3.6 client, my personal 
preference is to go with the cleaner logical approach which I believe 
is the Relax project name constraints (1), with the addition of 
changing GET /auth/projects to return the path (since this is a 
specialised API that happens before authentication) - but I am open to 
persuasion (as the song goes).


There are those that might say that perhaps we just can’t change this. 
I would argue that since this ONLY affects 

Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Adam Young

On 06/02/2016 11:36 AM, Shawn McKinney wrote:

On Jun 2, 2016, at 10:03 AM, Adam Young <ayo...@redhat.com> wrote:

To do all of this right, however, requires a degree of introspection that we do not have 
in OpenStack.  Trove needs to ask Nova "I want to do X, what role do I need?"  
and there is nowhere in the system today that this information lives.

So, while we could make something that works for service users as the problem 
is defined by Nova today, that would be, in a word, bad.  We need something 
that works for the larger OpenStack ecosystem, to include less trusted third 
party services, and still deal with the long running tasks.

Hello,

If openstack supported RBAC (ANSI INCITS 359) you would be able to call 
(something like) this API:

List permissionRoles(Permission  perm) throws SecurityException

Return a list of type String of all roles that have granted a particular 
permission.

RBAC Review APIs:
http://directory.apache.org/fortress/gen-docs/latest/apidocs/org/apache/directory/fortress/core/ReviewMgr.html

One of the advantages of pursuing published standards, you enjoy support for 
requirements across a broad spectrum of requirements, and perhaps for things 
you didn’t know was needed (at design time).
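The kind of review query described above can be sketched in Python over a toy in-memory grant table; this is not Fortress itself, and the permission names are invented for illustration:

```python
# A toy version of the ANSI RBAC "permissionRoles" review query:
# given a permission, return every role that has been granted it.
GRANTS = {
    "admin":  {"identity:create_user", "identity:delete_user",
               "compute:list_servers"},
    "member": {"compute:list_servers"},
    "auditor": {"identity:list_users"},
}

def permission_roles(perm):
    """Return, sorted, every role granted a particular permission."""
    return sorted(role for role, perms in GRANTS.items() if perm in perms)
```

This is exactly the introspection ("I want to do X, what role do I need?") that the thread notes OpenStack policy cannot answer today.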


Any sensible RBAC setup would support this, but we are not using a 
sensible one; we are using a hand-rolled one.  Replacing everything with 
Fortress implies a complete rewrite of what we do now.  Nuke it from 
orbit type stuff.


What I would rather focus on is the splitting of the current policy into 
two parts:


1. Scope check done in code
2. Role check done in middleware

Role check should be done based on URL, not on the policy key like 
identity:create_user
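A URL-keyed role check in middleware might look like the following sketch; the rule table and paths are invented for illustration:

```python
import re

# Sketch of the proposed split: the role check keyed on method and URL
# in middleware, leaving the scope check to the service code.
ROLE_RULES = [
    ("POST",   r"^/v3/users/?$",     {"admin"}),
    ("GET",    r"^/v3/users/[^/]+$", {"admin", "reader"}),
    ("DELETE", r"^/v3/users/[^/]+$", {"admin"}),
]

def role_check(method, path, token_roles):
    """Return True if any of the token's roles may issue this request."""
    for rule_method, pattern, allowed in ROLE_RULES:
        if method == rule_method and re.match(pattern, path):
            return bool(set(token_roles) & allowed)
    return False  # no rule matched: deny by default
```

Keying on URL rather than on keys like identity:create_user means the middleware needs no knowledge of each service's internal policy vocabulary.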



Then, yes, a Fortress style query could be done, or it could be done by 
asking the service itself.






Hope this helps,

Shawn






Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Adam Young

On 06/02/2016 01:23 AM, Jamie Lennox wrote:

Hi All,

I'd like to bring to the attention of the wider security groups and 
OpenStack users the Service Users Permissions [1] spec currently 
proposed against keystonemiddleware.


To summarize quickly: OpenStack has long had the problem of token 
expiry happening in the middle of a long-running operation and failing 
service-to-service requests, and there have been a number of ways 
proposed around this, including trusts and using the service users to 
perform operations.


Ideally in a big system like this we only want to validate a token and 
policy once, on a user's first entry to the system; however, all 
services communicate only via the public interfaces, so we cannot tell 
at validation time whether this is the first, second, or twentieth 
time we are validating a token. (If we ever do OpenStack 2.0 we should 
change this.)


Validating the "token" happens only once.

Validating the "user" permissions can happen multiple times, assuming 
that nothing changes, the operation goes through.



The part I have trouble with is not Validating the delegation from the 
end user to the service user.  This is a CVE waiting to happen.


A user's token should be short (5 minutes) and just kick off the 
workflow.  But that should then be used to create a delegation for the 
remote service.  That delegation can last longer than the duration of 
the token, to cover the long-running tasks, but should not last forever.


While the usual discussion centers around Nova-based tasks, think about 
all the *aaS endpoints that are going to have this same need.  If I 
kick off a workflow via Trove or Sahara, that endpoint should only be 
able to do what I ask it to do: spin up the appropriate number of VMs 
in the corresponding projects, and so on.


The delegation mechanism needs to be lighter weight than trusts, but 
should have the same constraints (redelegation and so on).



To do all of this right, however, requires a degree of introspection 
that we do not have in OpenStack.  Trove needs to ask Nova "I want to do 
X, what role do I need?" and there is nowhere in the system today that 
this information lives.


So, while we could make something that works for service users as the 
problem is defined by Nova today, that would be, in a word, bad.  We 
need something that works for the larger OpenStack ecosystem, to include 
less trusted third party services, and still deal with the long running 
tasks.


S4U2Proxy from the Kerberos world is a decent approximation of what we 
need.  A user with a service ticket goes to a remote service and asks 
for an operation.  That service then gets its own proxy service ticket, 
based on its own identity and the service ticket of the requesting 
user.  This proxy service ticket is then used for operations on behalf 
of the real user.  The proxy ticket can have a reduced degree of 
authorization, but does not require a deliberate delegation agreement 
between each user and the service.






The proposed spec provides a way to simulate the at-edge validation 
for service-to-service communication. If a request has an 
X-Service-Token header (an existing concept), then instead of 
validating the user's token we trust all the headers sent with 
that request (X_USER_ID, X_PROJECT_ID etc). We would still validate 
the X-Service-Token header itself. The effect is that one service 
asserts to another that it has already validated this token, so the 
receiving service shouldn't validate it again, bypassing the expiry 
problem.


The glaring security issue here is that a user with the service role 
can now emulate any request on behalf of any user by sending the 
expected authenticated headers. This will place an extreme level of 
trust on accounts that up to now have generally only been able to 
validate a token. There is both the concern here that a malicious 
service could craft new requests with bogus credentials as well as 
services deciding that this provides them the ability to do 
non-expiring trusts from a user where it can simply replay the headers 
it received on previous requests to perform future operations on 
behalf of a user. This is _absolutely not_ the intended use case but 
something I expect to come up.
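The proposed behaviour, and why it concentrates so much trust in the service account, can be sketched as follows; the validator callables stand in for real keystonemiddleware calls, and nothing here is actual keystonemiddleware code:

```python
# Sketch of the spec's proposal: if a valid X-Service-Token accompanies
# the request, the user headers are trusted as-is rather than the
# user's token being re-validated.
def resolve_user(headers, validate_token, validate_service_token):
    if "X-Service-Token" in headers:
        if not validate_service_token(headers["X-Service-Token"]):
            return None
        # the receiving service now trusts whatever the caller asserted;
        # a service account could replay or fabricate these headers,
        # which is exactly the risk described above
        return {"user_id": headers.get("X-User-Id"),
                "project_id": headers.get("X-Project-Id")}
    token = headers.get("X-Auth-Token")
    return validate_token(token) if token else None
```

Nothing in this flow binds the asserted user headers to a token the user ever issued, which is the core of the objection.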


There is a variation of this mentioned in the spec where we pass only 
the user-id, project-id and audit information from service to service 
and then middleware can recreate the token from this information 
similar to how fernet tokens work today. There is additional 
processing here which in the standard case will simply reproduce the 
same headers that the last service already knew and it still allows a 
large amount of emulation from the service.


There are possibly ways we can secure this header bundle via signing 
however the practical result is essentially a secondary expiry time 
and an operational complexity that will make PKI tokens and rotating 
fernet keys appear trivial for the benefit of securing a 

Re: [openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-27 Thread Adam Young

On 05/27/2016 02:30 PM, Raildo Mascena wrote:
In addition, I'm one of the folks who are working on the v3-only 
gates. The main case that we are looking for is when the functional 
job is working and the v3-only one is not, so for everything related 
to these jobs, you can just ping me on IRC. :)




Thanks, will do.  Just want to make sure that we treat failing tests 
like a fire alarm: do not get used to seeing red on the tests, even if 
they are non-voting. It's a sign of a deeper problem.


Yeah...a bit of a hot button topic for me.




Cheers,

Raildo

On Thu, May 26, 2016 at 6:27 PM Rodrigo Duarte 
<rodrigodso...@gmail.com <mailto:rodrigodso...@gmail.com>> wrote:


The function-nv job was depending on a first test being merged =)

The v3 one depends directly on it; the difference is that it passes
a flag to deactivate v2.0 in devstack.

On Thu, May 26, 2016 at 5:48 PM, Steve Martinelli
<s.martine...@gmail.com <mailto:s.martine...@gmail.com>> wrote:

On Thu, May 26, 2016 at 12:59 PM, Adam Young
<ayo...@redhat.com <mailto:ayo...@redhat.com>> wrote:

On 05/26/2016 11:36 AM, Morgan Fainberg wrote:



    On Thu, May 26, 2016 at 7:55 AM, Adam Young
<ayo...@redhat.com <mailto:ayo...@redhat.com>> wrote:

Some mix of these three tests is almost always failing:

gate-keystone-dsvm-functional-nv FAILURE in 20m 04s
(non-voting)
gate-keystone-dsvm-functional-v3-only-nv FAILURE in
32m 45s (non-voting)
gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in
1h 07m 53s (non-voting)


Are we going to keep them running and failing, or
boot them?  If we are going to keep them, who is
going to commit to fixing them?

We should not live with broken windows.



The uwsgi check should be moved to a proper run utilizing
mod_proxy_uwsgi.

Who wants to own this?  I am not fielding demands for
uwsgi support myself, and kind of think it is just a
novelty, thus would not mind seeing it go away.  If
someone really cares, please make yourself known.


Brant has a patch (https://review.openstack.org/#/c/291817/)
that adds support in devstack to use uwsgi and mod_proxy_http.
This is blocked until infra moves to Ubuntu Xenial. Once this
merges we can propose a patch that swaps out the uwsgi job for
uwsgi + mod_proxy_http.





The v3 only one is a WIP that a few folks are working on

Fair enough.


The function-nv one was passing somewhere. I think that
one is close.


Yeah, it seems to be intermittent.


These two are actively being worked on.


















-- 
Rodrigo Duarte Sousa

Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com <http://rodrigods.com>




Re: [openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-26 Thread Adam Young

On 05/26/2016 11:36 AM, Morgan Fainberg wrote:



On Thu, May 26, 2016 at 7:55 AM, Adam Young <ayo...@redhat.com 
<mailto:ayo...@redhat.com>> wrote:


Some mix of these three tests is almost always failing:

gate-keystone-dsvm-functional-nv FAILURE in 20m 04s (non-voting)
gate-keystone-dsvm-functional-v3-only-nv FAILURE in 32m 45s
(non-voting)
gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in 1h 07m 53s
(non-voting)


Are we going to keep them running and failing, or boot them?  If
we are going to keep them, who is going to commit to fixing them?

We should not live with broken windows.



The uwsgi check should be moved to a proper run utilizing mod_proxy_uwsgi.
Who wants to own this?  I am not fielding demands for uwsgi support 
myself, and kind of think it is just a novelty, thus would not mind 
seeing it go away.  If someone really cares, please make yourself known.




The v3 only one is a WIP that a few folks are working on

Fair enough.


The function-nv one was passing somewhere. I think that one is close.


Yeah, it seems to be intermittent.









Re: [openstack-dev] [keystone] integrating keystone with oauth2 (keycloak)

2016-05-26 Thread Adam Young

On 05/26/2016 11:20 AM, Shtilman, Tomer (Nokia - IL) wrote:


Hi

Does Keystone have any plugin/extension for OAuth2 authentication 
(Keycloak in our case)?


We would like to integrate Keystone with an external OAuth2 system in 
this way:


1/ Credentials being sent to Keystone

2/ Keystone will interact with the external OAuth2 server to validate and 
fetch user details, tenant (project), roles etc. (no endpoints) and will 
generate a token


Keycloak supports SAML2, which I've confirmed works using 
mod_auth_mellon and Federation on the Keystone side. We are working on 
confirming ECP.  I think ECP is the only viable Federation CLI approach 
for Keycloak right now, but we might be pleasantly surprised.
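For reference, a minimal Keystone federation mapping of the kind used with mod_auth_mellon and SAML2 might look like the sketch below; the remote attribute name depends on how the Apache module exposes the assertion, and the group/domain names are placeholders to be checked against the actual deployment:

```python
import json

# Sketch of a Keystone federation mapping: map the SAML NameID (exposed
# by mod_auth_mellon as MELLON_NAME_ID) to a local user name, and drop
# all federated users into one placeholder group.
mapping_rules = [
    {
        "local": [
            {"user": {"name": "{0}"}},
            {"group": {"name": "federated_users",
                       "domain": {"name": "Default"}}},
        ],
        "remote": [
            {"type": "MELLON_NAME_ID"},
        ],
    }
]

# this is roughly what would be uploaded via
#   openstack mapping create --rules rules.json <mapping_name>
rules_json = json.dumps(mapping_rules, indent=2)
```

The group assignment is what carries the role; the federated user never needs a direct role assignment.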


3/ The token will be used from this point on; it will need to be 
validated against OAuth2 through Keystone until expiry


Any thought/insights will be highly appreciated

Thanks







[openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-26 Thread Adam Young

Some mix of these three tests is almost always failing:

gate-keystone-dsvm-functional-nv FAILURE in 20m 04s (non-voting)
gate-keystone-dsvm-functional-v3-only-nv FAILURE in 32m 45s (non-voting)
gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in 1h 07m 53s (non-voting)


Are we going to keep them running and failing, or boot them?  If we are 
going to keep them, who is going to commit to fixing them?


We should not live with broken windows.



Re: [openstack-dev] How to single sign on with windows authentication with Keystone

2016-05-25 Thread Adam Young

On 05/25/2016 07:26 AM, OpenStack Mailing List Archive wrote:

Link: https://openstack.nimeyo.com/85057/?show=85707#c85707
From: imocha 

I am trying to follow the steps. I am able to install ADFS and would 
like to proceed further.


However, I am having issues with setting up SSL endpoints for Keystone 
V3. I am using Mitaka. Is there any step that I can use.


I am using packstack to install the Mitaka and wanted to enable SSL 
for the identity endpoints to work with ADFS for SAML2 flow.




We went through a proof of concept for this last summer (FreeIPA and 
Ipsilon, not ADFS)



https://github.com/admiyo/rippowam

Right now I'm working on updating for Keycloak instead of Ipsilon.

For the SSL stuff, I would recommend using Certmonger for management, 
but I don't know how to tie that in with the ADFS CA; we do it using 
IPA's CA.  You can set up a trust between IPA and AD, which might be 
your easiest path forward.


With a trust, the Keystone server would be registered as a host on the 
FreeIPA server, but would accept Kerberos tickets from ADFS.  If you 
want to completely federate the two, you can do so as well, and then you 
do not  need the trust, you just let ADFS issue SAML.


Re: [openstack-dev] Fwd: keystone federation user story

2016-05-24 Thread Adam Young

On 05/24/2016 10:30 PM, Adam Young wrote:

On 05/24/2016 01:55 PM, Alexander Makarov wrote:

Colleagues,

here is an actual use case for shadow user assignments; let's 
discuss possible solutions. All suggestions are appreciated.


-- Forwarded message --
From: *Andrey Grebennikov* <agrebenni...@mirantis.com 
<mailto:agrebenni...@mirantis.com>>

Date: Tue, May 24, 2016 at 9:43 AM
Subject: keystone federation user story
To: Alexander Makarov <amaka...@mirantis.com 
<mailto:amaka...@mirantis.com>>



Main production usecase:
As a system administrator I need to create assignments for federated 
users into the projects when the user has not authenticated for the 
first time.


Two different approaches.
1. A user has to be assigned directly into the project with the role 
Role1. Since shadow users were implemented, the Keystone database has a 
record of the user once the federated user authenticates for the 
first time. When that happens, the user gets an unscoped token and 
Keystone registers the user in the database with a generated ID (the 
result of hashing the name and the domain). At this point the user 
cannot get a scoped token yet, since the user has not been assigned to 
any project.
Nonetheless there was a bug 
https://bugs.launchpad.net/keystone/+bug/1313956 which was abandoned, 
and the reporter says that currently it is possible to assign role in 
the project to non-existing user (API only, no CLI). It doesn't help 
much though since it is barely possible to predict the ID of the user 
if it doesn't exist yet.


Potential solution - allow per-user project auto-creation. This will 
allow the user to get scoped token with a pre-defined role (should be 
either mentioned in config or in mapping) and execute operations 
right away.


What we discussed at the summit was the ability for some power user to 
pre-create the shadow user record by passing in the data that would be 
mapped:


Kerberos example
{
Realm: YOUNGLOGIC.NET
Principal: ayo...@younglogic.net
REMOTE_GROUPS: ipausers,demo,bookworms
}


Another API will allow for query if a user exists.




Spec is here

https://review.openstack.org/#/c/313604/
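A sketch of what a request body for that pre-create API might look like; the spec is still under review, so every field name below is a placeholder modelled on the Kerberos example above, not the final resource shape:

```python
# Hypothetical payload for the proposed "pre-create a shadow user" API
# (spec: https://review.openstack.org/#/c/313604/).  All field names
# are illustrative placeholders.
def shadow_user_payload(idp_id, protocol, unique_id, groups):
    return {
        "user": {
            "federated": [{
                "idp_id": idp_id,
                "protocols": [{"protocol_id": protocol,
                               "unique_id": unique_id}],
            }],
            "remote_groups": list(groups),
        }
    }

payload = shadow_user_payload("example_idp", "kerberos",
                              "user@EXAMPLE.COM",
                              ["ipausers", "demo", "bookworms"])
```

A companion query API could then answer whether a given (idp_id, protocol, unique_id) shadow record already exists, as the message suggests.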






Disadvantages: less control and order (will potentially end up with 
infinite empty projects).

Benefits: user is authorized right away.

Another potential solution: clearly describe a possibility to assign a 
shadow user to a project (the client should generate the ID correctly), 
even though the user has not been authenticated for the first time yet.


Disadvantages: high risk of administrator's mistake when typing 
user's ID.
Benefits: user doesn't have to execute first dummy authentication in 
order to be registered.


2. Operate on groups. The user is a member of a remote group, and we 
propose assigning groups, rather than individual users, to projects.
There is no concept of shadow groups yet, so it still has to be 
implemented.


Same problem: in order to assign the group to the project, it 
currently has to exist in the Keystone database.


Either it should be allowed to pre-create the project for a group 
(based on specific flags in the mappings), or it should be possible 
to assign non-existing groups to projects.


I'd personally prefer to allow some special attribute, specified in 
either the config or the mapping, which would allow project 
auto-creation.
For example, a user is added to the group "openstack" in the backend. 
This group is then part of the SAML assertions (when SAML2 is used as 
the protocol), and Keystone recognizes the group through the mapping. 
When the user attempts to log in, Keystone pre-creates the project 
and assigns a pre-defined role in it. The user gets access right 
away.



--
Andrey Grebennikov
Deployment Engineer
Mirantis Inc, Mountain View, CA



--
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: keystone federation user story

2016-05-24 Thread Adam Young

On 05/24/2016 01:55 PM, Alexander Makarov wrote:

Colleagues,

here is an actual use case for shadow users assignments, let's discuss 
possible solutions: all suggestions are appreciated.


-- Forwarded message --
From: *Andrey Grebennikov* >

Date: Tue, May 24, 2016 at 9:43 AM
Subject: keystone federation user story
To: Alexander Makarov >



Main production use case:
As a system administrator, I need to create assignments for federated 
users into projects before the users have authenticated for the 
first time.


Two different approaches.
1. A user has to be assigned directly into the project with the role 
Role1. Since shadow users were implemented, the Keystone database has 
a record of the user once the federated user authenticates for the 
first time. When that happens, the user gets an unscoped token and 
Keystone registers the user in the database with a generated ID (the 
result of hashing the name and the domain). At this point the user 
cannot get a scoped token yet, since the user has not been assigned 
to any project.
Nonetheless, there was a bug 
(https://bugs.launchpad.net/keystone/+bug/1313956, since abandoned) 
whose reporter says that it is currently possible to assign a role in 
a project to a non-existing user (API only, no CLI). It doesn't help 
much, though, since it is barely possible to predict the ID of a user 
that does not exist yet.
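The difficulty of predicting that generated ID can be seen in a small sketch. This is illustrative only: the exact hashing scheme below (SHA-256 over name and domain joined with a colon) is an assumption, not Keystone's actual implementation.

```python
import hashlib

def shadow_user_id(name, domain_id):
    # Illustrative: derive a deterministic user ID by hashing the
    # federated user's name and domain, as described above.  The real
    # Keystone algorithm may combine these fields differently.
    material = "%s:%s" % (name, domain_id)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

# The ID is deterministic, so it *could* be precomputed -- but only by
# an admin who reproduces the exact scheme, which is why assigning
# roles to not-yet-seen federated users is impractical today.
print(shadow_user_id("ayoung", "federated_domain"))
```

Any change in how the fields are combined yields a completely different ID, so guessing is hopeless without the exact algorithm.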


Potential solution - allow per-user project auto-creation. This would 
allow the user to get a scoped token with a pre-defined role 
(specified either in the config or in the mapping) and execute 
operations right away.


What we discussed at the summit was the ability for some power user 
to pre-create the shadow user record by passing in the data that 
would be mapped:


Kerberos example
{
    "Realm": "YOUNGLOGIC.NET",
    "Principal": "ayo...@younglogic.net",
    "REMOTE_GROUPS": ["ipausers", "demo", "bookworms"]
}


Another API would allow querying whether a user exists.







Disadvantages: less control and order (could end up with an 
ever-growing number of empty projects).

Benefits: user is authorized right away.

Another potential solution - explicitly support assigning a shadow 
user to a project (the client must generate the ID correctly), even 
though the user has not yet authenticated for the first time.


Disadvantages: a high risk of administrator error when typing the 
user's ID.
Benefits: the user doesn't have to perform a first dummy 
authentication in order to be registered.


2. Operate on groups. The user is a member of a remote group, and we 
propose assigning groups, rather than individual users, to projects.
There is no concept of shadow groups yet, so it still has to be 
implemented.


Same problem: in order to assign the group to the project, it 
currently has to exist in the Keystone database.


Either it should be allowed to pre-create the project for a group 
(based on specific flags in the mappings), or it should be possible 
to assign non-existing groups to projects.


I'd personally prefer to allow some special attribute, specified in 
either the config or the mapping, which would allow project 
auto-creation.
For example, a user is added to the group "openstack" in the backend. 
This group is then part of the SAML assertions (when SAML2 is used as 
the protocol), and Keystone recognizes the group through the mapping. 
When the user attempts to log in, Keystone pre-creates the project 
and assigns a pre-defined role in it. The user gets access right 
away.
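A mapping along these lines might look as follows. This is a hedged sketch of a Keystone federation mapping rule; the group name "openstack" and the domain "Default" are placeholders, and any auto-provisioning flag would be an extension beyond what mappings support here.

```json
{
    "rules": [
        {
            "remote": [
                {"type": "REMOTE_USER"},
                {"type": "REMOTE_GROUPS", "any_one_of": ["openstack"]}
            ],
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"name": "openstack",
                           "domain": {"name": "Default"}}}
            ]
        }
    ]
}
```

An auto-creation attribute, if implemented, would presumably hang off the "local" section so Keystone knows to create a project and a role assignment on first login.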



--
Andrey Grebennikov
Deployment Engineer
Mirantis Inc, Mountain View, CA



--
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Adam Young

On 05/20/2016 08:48 AM, Dean Troyer wrote:
On Fri, May 20, 2016 at 5:42 AM, Thomas Goirand > wrote:


I am *NOT* buying that doing static linking is a progress. We're
back 30
years in the past, before the .so format. It is amazing that some
of us
think it's better. It simply isn't. It's a huge regression, for
package
maintainers, system admins, production/ops, and our final users. The
only group of people who like it are developers, because they just
don't
need to care about shared library API/ABI incompatibilities and
regressions anymore.


I disagree, there are certainly places static linking is appropriate, 
however, I didn't mention that at all. Much of the burden with Python 
dependency at install/run time is due to NO linking.  Even with C, you 
make choices at build time WRT what you link against, either 
statically or dynamically.  Even with shared libs, when the interface 
changes you have to re-link everything that uses that interface.  It 
is not as black and white as you suggest.


And I say that as a user, who so desperately wants an install process 
for OSC to match PuTTY on Windows: 1) copy an .exe; 2) run it.


dt

[Thomas, I have done _EVERY_ one of the jobs above that you listed, as 
a $DAY_JOB, and know exactly what it takes to run production-scale 
services built from everything from vendor packages to house-built 
source.  It would be nice if you refined your argument to stop leaning 
on static linking as the biggest problem since stack overflows.  There 
are other reasons this might be a bad idea, but I sense that you are 
losing traction fixating on only this one.]


Static linking Bad.  We can debate why elsewhere.

Go with dynamic linking is possible, and should be what the 
distributions target. This is a solvable problem.


/me burns bikeshed and installs a Hubcycle/Citibike kiosk.




--

Dean Troyer
dtro...@gmail.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to single sign on with windows authentication with Keystone

2016-05-19 Thread Adam Young

On 05/19/2016 07:40 AM, Rodrigo Duarte wrote:

Hi,

So you are trying to use keystone to authorize your users, but want to 
avoid having to authenticate via keystone, right?


Check if the Federated Identity feature [1] covers your use case.

[1] 
http://docs.openstack.org/security-guide/identity/federated-keystone.html


On Thu, May 19, 2016 at 8:27 AM, OpenStack Mailing List Archive 
> wrote:


Link: https://openstack.nimeyo.com/85057/?show=85057#q85057
From: imocha >

I have to call the keystone APIs and want to use the windows
authentication using Active Directory. Keystone provides
integration with AD at the back end. To get the initial token to
use OpenStack APIs, I need to pass user name and password in the
keystone token creation api.

Since I am already logged on to my windows domain, is there any
way that I can get the token without passing the password in the api.


Yes, use SSSD and Mod_Lookup_Identity:

https://adam.younglogic.com/2014/05/keystone-federation-via-mod_lookup_identity/








--
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread Adam Young

On 05/16/2016 05:23 AM, Dmitry Tantsur wrote:

On 05/14/2016 03:00 AM, Adam Young wrote:

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based
languages.

Nope.  Don't get me wrong, I've written more than my fair share of Java
in my career, and I like it, and I miss automated refactoring and real
threads.  I have nothing against Java (I know a lot of you do).

Java fills the same niche as Python.  We already have one of those, and
it's very nice (according to John Cleese).


A couple of folks in this thread already stated that the primary 
reason to switch from Python-based languages is the concurrency story. 
JVM solves it and does it in the same manner as Go (at least that's my 
assumption).


(not advocating for JVM, just trying to understand the objection)



So, what I think we are really saying here is "what is our Native
extension story going to be? Is it the traditional native languages, or
is it something new that has learned from them?"

Go is a complement to Python to fill in the native stuff.  The
alternative is C or C++.  Ok Flapper, or Rust.


C, C++, Rust, yes, I'd call them "native".

A language with a GC and green threads does not fall into "native" 
category for me, rather the same as JVM.


More complex than just that.  Go does not have a VM; it has simply 
put a lot of effort into co-routines that avoid context switches, 
which is different from green threads.


http://programmers.stackexchange.com/questions/222642/are-go-langs-goroutine-pools-just-green-threads


You can do userland level co-routines in C, C++ and Erlang, probably 
even Rust (even if it is not written yet, no idea).  Which is different 
than putting in a code translation layer.


We are not talking about a replacement for Python here.  We are talking 
about a language to use for native optimizations when Python's 
concurrency model or other overhead gets in the way.


In scientific computing, using a language like R or Python to then call 
into a linear algebra or messaging library is the norm for just this 
reason.  Since we are going to be writing the native code here, the 
question is what language to use for it.
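That division of labor is easy to see with a toy example: Python orchestrating while a compiled library does the work. This is a generic ctypes sketch against the C math library, not OpenStack code.

```python
import ctypes
import ctypes.util

# Locate and load the platform's C math library; fall back to the
# current process, which links libm on most Linux systems.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# The call crosses into native code; Python only marshals arguments.
print(libm.cos(0.0))  # -> 1.0
```

The same pattern scales from a single libm call to NumPy-style linear algebra: the hot loop runs native, and the scripting language only dispatches.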


We are not getting rid of python, and we are not bringing in Java. Those 
are different questions.



The question is "How do we take performance sensitive sections in 
OpenStack and optimize to native?"


The list of answers that map to the question here as I see it include:

1.  We are not doing native code, stop asking.
2.  Stick with C
3.  C or C++ is Ok
4.  Fortran (OK I just put this in to see if you are paying attention).
5.  Go
6.  Rust (Only put in to keep Flapper off my back)


We have two teams asking for Go, and Flapper asking for Rust.  No one 
has suggested new native code in C or C++, instead those types of 
projects seem to be kept out of OpenStack proper.







This is coming from someone who has done kernel work.  I did C++ in 
both the Windows and Linux worlds.  I've written inversion-of-control 
stuff in C++ template metaprogramming.  I am not personally afraid of 
writing code in either language, but I don't want to inflict that on 
OpenStack.  It's a question of reducing complexity, not increasing it.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-13 Thread Adam Young

On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:

If we allow Go, then we should also consider allowing JVM based languages.
Nope.  Don't get me wrong, I've written more than my fair share of Java 
in my career, and I like it, and I miss automated refactoring and real 
threads.  I have nothing against Java (I know a lot of you do).


Java fills the same niche as Python.  We already have one of those, and 
it's very nice (according to John Cleese).


So, what I think we are really saying here is "what is our Native 
extension story going to be? Is it the traditional native languages, or 
is it something new that has learned from them?"


Go is a complement to Python to fill in the native stuff.  The 
alternative is C or C++.  Ok Flapper, or Rust.


This is coming from someone who has done kernel work.  I did C++ in 
both the Windows and Linux worlds.  I've written inversion-of-control 
stuff in C++ template metaprogramming.  I am not personally afraid of 
writing code in either language, but I don't want to inflict that on 
OpenStack.  It's a question of reducing complexity, not increasing it.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Adam Young

On 05/13/2016 12:52 PM, Monty Taylor wrote:

On 05/13/2016 11:38 AM, Eric Larson wrote:

Monty Taylor writes:


On 05/13/2016 08:23 AM, Mehdi Abaakouk wrote:

On Fri, May 13, 2016 at 02:58:08PM +0200, Julien Danjou wrote:

What's wrong with pymemcache, that we picked for tooz and are using
for 2 years now?

  https://github.com/pinterest/pymemcache

Looks like a good alternative.

Honestly, nobody should be using pymemcache or python-memcached or
pylibmc for anything caching related in OpenStack. People should be
using oslo.cache - however, if that needs work before it's usable,
people should be using dogpile.cache, which is what oslo.cache uses on
the backend.

dogpile is pluggable, so it means that the backend used for caching
can be chosen in a much broader manner. As morgan mentions elsewhere,
that means that people who want to use a different memcache library
just need to write a dogpile driver.

Please don't anybody directly use memcache libraries for caching in
OpenStack. Please.


Using dogpile doesn't remove the decision of what caching backend is
used. Dogpile has support (I think) for all the libraries mentioned here:

https://bitbucket.org/zzzeek/dogpile.cache/src/87965ada186f9b3a4eb7ff033a2e31437d5e9bc6/dogpile/cache/backends/memcached.py


Oslo cache would need to be the one making decision as to what backend
is used if we need to have something consistent.

I do not understand why oslo.cache would make a backend decision. It's a
config-driven thing. I could see oslo.cache having a _default_ ... but
having oslo.cache use dogpile.cache and then remove the ability for a
deployer to chose which caching backend dogpile uses seems more than
passing strange to me.


With oslo.cache, you say "I want memcache" and Oslo picks the driver, 
standardizing the implementation within OpenStack.
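The idea here, backend selection driven purely by a configuration string, can be sketched in a few lines. This is a pure-Python toy, not the real oslo.cache or dogpile.cache API.

```python
class MemoryBackend:
    """Trivial in-process cache standing in for a real driver."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

# A memcached driver (pymemcache, pylibmc, ...) would register here;
# callers never import a memcache library directly.
BACKENDS = {"memory": MemoryBackend}

def make_region(backend_name):
    # In OpenStack the backend name would come from oslo.config,
    # so deployers pick the driver without code changes.
    return BACKENDS[backend_name]()

cache = make_region("memory")
cache.set("token", "abc123")
print(cache.get("token"))  # -> abc123
```

Swapping memcache libraries then means writing one driver, not touching every project that caches.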





With that said, it is important that we understand what projects have
specific requirements or have experienced issues, otherwise there is a
good chance teams will hit an issue down the line and have to work
around it.

Yup. Totally agree. I certainly don't want to imply that there aren't
issues with memcache libs nor that they shouldn't be fixed. Merely
trying to point out that individual projects programming to the
interface of any of the libs is a thing that should be fixed.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-13 Thread Adam Young
Can we just up and support Go, please?  I'm a C++ and C buff, but I 
would not inflict either of those on other people, nor would I want to 
support their code. Go is designed to be native but readable/writable.



There is nothing perfect in this world.

Python for most things.
Javascript for web out of necessity
Go for native tuning.

Yes, Flapper, I like Rust, too, but we have to pick something, and I am 
not the one trying to code this.


It makes sense. Go is already packaged for Fedora and Debian.  We can 
deal with it.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Adam Young

On 05/12/2016 06:39 PM, gordon chung wrote:


On 12/05/2016 1:47 PM, Morgan Fainberg wrote:


On Thu, May 12, 2016 at 10:42 AM, Sean Dague > wrote:

 We just had to revert another v3 "fix" because it wasn't verified to
 work correctly in the gate - https://review.openstack.org/#/c/315631/

 While I realize project-config patches are harder to test, you can do so
 with a bogus devstack-gate change that has the same impact in some cases
 (like the case above).

 I think the important bit on moving forward is that every patch here
 which might be disruptive has some manual verification about it working
 posted in review by v3 team members before we approve them.

 I also think we need to largely stay non voting on the v3 only job until
 we're quite confident that the vast majority of things are flipped over
 (for instance there remains an issue in nova <=> ironic communication
 with v3 last time I looked). That allows us to fix things faster because
 we don't wedge some slice of the projects in a gate failure.

  -Sean

 On 05/12/2016 11:08 AM, Raildo Mascena wrote:
  > Hi folks,
  >
  > Although the Identity v2 API is deprecated as of Mitaka [1], some
  > services haven't implemented proper support to v3 yet. For instance,
  > we implemented a patch that made DevStack v3 by default that, when
  > merged, broke a lot of project gates in a few hours [2]. This
  > happened due to specific services incompatibility issues with
 Keystone
  > v3 API, such as hardcoded v2 usage, usage of removed
 keystoneclient CLI,
  > requesting v2 service tokens and the lack of keystoneauth session
 usage.
  >
  > To discuss those points, we did a cross-project work
  > session in the Newton Summit[3]. One point we are working on at this
  > moment is creating gates to ensure the main OpenStack services
  > can live without the Keystone v2 API. Those gates setup devstack with
  > only Identity v3 enabled and run the Tempest suite on this
 environment.
  >
  > We already did that for a few services, like Nova, Cinder, Glance,
  > Neutron, Swift. We are doing the same job for other services such
  > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
  >
  > In addition, we are creating jobs to run functional tests for the
  > services on this identity v3-only environment[5]. Also, we have a
 couple
  > of other fronts that we are doing like removing some hardcoded v2
 usage
  > [6], implementing keystoneauth sessions support in clients and
 APIs [7].
  >
  > Our plan is to keep tackling as many items from the cross-project
  > session etherpad as we can, so we can achieve more confidence in
 moving
  > to a DevStack working v3-only, making sure everyone is prepared
 to work
  > with Keystone v3 API.
  >
  > Feedbacks and reviews are very appreciated.
  >
  > [1] https://review.openstack.org/#/c/251530/
  > [2] https://etherpad.openstack.org/p/v3-only-devstack
  > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
  > [4]
 
https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
  > [5]
 https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
  > [6]
 https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
  > [7] https://review.openstack.org/#/q/topic:use-ksa
  >
  > Cheers,
  >
  > Raildo
  >
  >
  >


This  also comes back to the conversation at the summit. We need to
propose the timeline to turn over for V3 (regardless of
voting/non-voting today) so that it is possible to set the timeline that
is expected for everything to get fixed (and where we are
expecting/planning to stop reverting while focusing on fixing the
v3-only changes).

I am going to ask the Keystone team to set forth the timeline and commit
to getting the pieces in order so that we can make v3-only voting rather
than playing the propose/revert game we're currently doing. A proposed
timeline and gameplan will only help at this point.


can anyone confirm when we deprecated keystonev2? i see a bp[1] related
to deprecation that was 'implemented' in 2013.

i realise switching to v3 breaks many gates but it'd be good to at some
point say it's not 'keystonev3 breaking the gate' but rather 'projectx
is breaking the gate because they are using keystonev2 which was
deprecated 4 cycles ago'. given the deprecation period allowed already,
can we say "here's some help, fix/merge this by
, or your gate will be broken until then"?
(assuming all the above items by Raildo doesn't fix everything).


I'd like to say Ocata.


[1] https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api

cheers,





Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread Adam Young

On 05/12/2016 01:47 PM, Morgan Fainberg wrote:



On Thu, May 12, 2016 at 10:42 AM, Sean Dague > wrote:


We just had to revert another v3 "fix" because it wasn't verified to
work correctly in the gate - https://review.openstack.org/#/c/315631/

While I realize project-config patches are harder to test, you can
do so
with a bogus devstack-gate change that has the same impact in some
cases
(like the case above).

I think the important bit on moving forward is that every patch here
which might be disruptive has some manual verification about it
working
posted in review by v3 team members before we approve them.

I also think we need to largely stay non voting on the v3 only job
until
we're quite confident that the vast majority of things are flipped
over
(for instance there remains an issue in nova <=> ironic communication
with v3 last time I looked). That allows us to fix things faster
because
we don't wedge some slice of the projects in a gate failure.

-Sean

On 05/12/2016 11:08 AM, Raildo Mascena wrote:
> Hi folks,
>
> Although the Identity v2 API is deprecated as of Mitaka [1], some
> services haven't implemented proper support to v3 yet. For instance,
> we implemented a patch that made DevStack v3 by default that, when
> merged, broke a lot of project gates in a few hours [2]. This
> happened due to specific services incompatibility issues with
Keystone
> v3 API, such as hardcoded v2 usage, usage of removed
keystoneclient CLI,
> requesting v2 service tokens and the lack of keystoneauth
session usage.
>
> To discuss those points, we did a cross-project work
> session in the Newton Summit[3]. One point we are working on at this
> moment is creating gates to ensure the main OpenStack services
> can live without the Keystone v2 API. Those gates setup devstack
with
> only Identity v3 enabled and run the Tempest suite on this
environment.
>
> We already did that for a few services, like Nova, Cinder, Glance,
> Neutron, Swift. We are doing the same job for other services such
> as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
>
> In addition, we are creating jobs to run functional tests for the
> services on this identity v3-only environment[5]. Also, we have
a couple
> of other fronts that we are doing like removing some hardcoded
v2 usage
> [6], implementing keystoneauth sessions support in clients and
APIs [7].
>
> Our plan is to keep tackling as many items from the cross-project
> session etherpad as we can, so we can achieve more confidence in
moving
> to a DevStack working v3-only, making sure everyone is prepared
to work
> with Keystone v3 API.
>
> Feedbacks and reviews are very appreciated.
>
> [1] https://review.openstack.org/#/c/251530/
> [2] https://etherpad.openstack.org/p/v3-only-devstack
> [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
> [4]

https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
> [5]
https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
> [6]
https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
> [7] https://review.openstack.org/#/q/topic:use-ksa
>
> Cheers,
>
> Raildo
>
>
>


This  also comes back to the conversation at the summit. We need to 
propose the timeline to turn over for V3 (regardless of 
voting/non-voting today) so that it is possible to set the timeline 
that is expected for everything to get fixed (and where we are 
expecting/planning to stop reverting while focusing on fixing the 
v3-only changes).


I am going to ask the Keystone team to set forth the timeline and 
commit to getting the pieces in order so that we can make v3-only 
voting rather than playing the propose/revert game we're currently 
doing. A proposed timeline and gameplan will only help at this point.


I would like to draw a line in the sand and say that it has to be there 
for Ocata. We should be working through the issues during the Newton 
release, and have a firm "Ocata should expect to run V3-only, with 
V2.0 an optional feature that can be enabled for backwards 
compatibility if required."



To me, Ocata is the finish line: there have been a lot of features that 
Keystone has needed for a long time, and they are finally starting to 
come together.  V3 support, and all that it enables, is one of the big 
ones, and we need to give people enough information to plan, and enough 
time to adjust.



V3 support is essential for the way most people are deploying: the user 
database coming from an external source like LDAP, with service users 
stored locally.  We need to treat this as the baseline, and make sure 
that the deployment 

Re: [openstack-dev] [tripleo] sharing bits between nodes during deployment

2016-05-12 Thread Adam Young

On 05/12/2016 02:20 PM, Emilien Macchi wrote:

Hi,

During the recent weeks, we've noticed that some features would have a
common challenge to solve:
How to share informations or files between nodes, during a multi-node
deployment.

A few use-cases:

* Deploying Keystone using Fernet tokens

Adam Young started this topic a few weeks ago, we are investigating
how to integrate Fernet in TripleO.
The main challenge is that we want to generate keys periodically for
security purposes.
In a multi-node environment behind HAProxy, you need to make sure all 
Fernet keys are the same, otherwise you run the risk that a user 
connects to a Keystone server that cannot validate the token.
We need a way to:
1) generate keys periodically. It could be in puppet-keystone; we
already have a crontab example:
https://github.com/openstack/puppet-keystone/tree/master/manifests/cron
2) distribute the keys from one node to the others <-- that is the challenge.
note: I confirmed with ayoung, and there is no need to restart
Keystone when we rotate a key.
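The rotation scheme can be sketched as a small simulation. This is a rough approximation of the staged/primary/secondary key layout used by keystone-manage's Fernet rotation; the details below are assumptions for illustration, not the exact implementation.

```python
def rotate(keys, new_key):
    # `keys` maps integer index -> key bytes.  Index 0 is the staged
    # key (distributed ahead of time, not yet used for encryption),
    # the highest index is the primary (encrypts new tokens), and the
    # rest are secondaries (still able to validate old tokens).
    next_primary = max(keys) + 1
    keys[next_primary] = keys.pop(0)  # staged key is promoted to primary
    keys[0] = new_key                 # fresh staged key for the next cycle
    return keys

repo = {0: b"staged-key", 1: b"primary-key"}
rotate(repo, b"fresh-key")
print(sorted(repo))  # -> [0, 1, 2]; index 2 is now the primary

# Because the staged key was already on every node before promotion,
# all HAProxy-balanced Keystone servers can validate tokens encrypted
# with the new primary -- provided the key repository is synchronized
# between rotations, which is exactly the distribution challenge above.
```

The staging step is what buys the distribution window: as long as the repository syncs faster than the rotation period, no node ever sees a token it cannot decrypt.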

* Scaling down/up Swift cluster

It's currently impossible to scale down/up a Swift cluster in TripleO
because the ring is built during deployment and never updated
afterwards. It makes Swift not really usable in production without
manual intervention.
Since we don't use a set of classes in puppet-swift that performs this
action because they require PuppetDB, we need to find a way to
redistribute the ring when we add or remove Swift nodes in a TripleO
Cloud.
Maybe we can investigate some Mistral actions or Heat, that would run
the swift commands to re-distribute the ring.

* Dynamic discovery

An example of use-case is: https://review.openstack.org/#/c/304125/
We want to manage Keystone resources for services (ie: nova endpoints,
etc) from the services roles (ie: nova-api), so we stop creating all
endpoints from the keystone profile role.
The current issue with composable services is that, until now (tell me 
if I'm wrong), the keystone role is not aware of whether or not we run 
gnocchi-api in our cloud, so we don't know if we need to create the 
endpoints, etc.
On the review, please see my (long) comment on Patch Set 12, where we 
expose our current challenges.



Policy file distribution also ties in here, especially if we want to be 
able to make different policies for different endpoints of the same 
service.





I hope this thread can bootstrap some discussion around these 
challenges, because we'll keep hitting them as OpenStack deployments 
grow in complexity.
Feel free to comment, correct me, or give any feedback on this initial 
e-mail; thanks for reading this far.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-05-12 Thread Adam Young

On 05/12/2016 09:07 AM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
+1 on desiring OAuth-style tokens in Keystone. The use cases that come 
up here are people wanting to be able to execute jobs that use the 
APIs (Jenkins, Terraform, Vagrant, etc.) without having to save their 
personal credentials in plaintext somewhere, and also wanting to be 
able to associate credentials with a project instead of a specific 
person, so that if a person leaves or rotates their password it 
doesn't blow up their team's carefully crafted automation.
We can sort of work around it with LDAP service accounts as mentioned 
previously, but the concern around those is the lack of speedy 
revocability in the event of a compromise, and the service accounts 
could possibly be used to get to non-OpenStack places until they get 
shut down. One thought I had to try to keep the auth domain 
constrained to only OpenStack was using the EC2 API because at least 
that means you're not saving LDAP passwords on disk and the access 
keys are useless beyond that particular Keystone installation, but you 
run into impedance mismatches between the Nova API and AWS EC2 API, 
and we'd like people to use the native OpenStack APIs. (Turns out the 
notion of using AWS's EC2 API to talk to a private cloud is strange to 
people not steeped in cloudy things.)
So service accounts and OAuth consumers are two different names for the 
same abstract construct. In both cases, the important thing is limiting 
the access each one has.



Horizon is for the interactive use case, though, and should not be using 
either, except as a front to define workflows, and in that case, the 
same work should be possible from the command line.


ECP should make that possible, assuming your IdP supports ECP (EIEIO!).




From: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon][keystone] Getting Auth Token 
from Horizon when using Federation


Hi Dolph,

On Mon, 2016-04-18 at 17:50 -0500, Dolph Mathews wrote:
> On Mon, Apr 18, 2016 at 11:34 AM, Martin Millnert wrote:
> > Hi,
> >
> > we're deploying Liberty (soon Mitaka) with heavy reliance on the
> > SAML2 Federation system by Keystone where we're a Service Provider
> > (SP).
> >
> > The problem in this situation is getting a token for direct API
> > access.(*)
> >
> > There are conceptually two methods to use the CLI:
> > 1) Modify ones (each customer -- in our case O(100)) IdP to add
> > support for a feature called ECP(**), and then use keystoneauth
> > with SAML2 plugin,
> > 2) Go to (for example) "Access & Security / API Access / View
> > Credentials" in Horizon, and check out a token from there.
> >
> > With a default configuration, this token would only last a short
> > period of time, so this would be incredibly repetitive (and thus
> > tedious).

Assuming all that is set up, the user should be unaware of the re-init to 
the SAML IdP to get a new assertion for a new token. Why is this a problem?




Indeed.

> So, I assume you mean some sort of long-lived API tokens?

Right.

> API tokens, including keystone's UUID, PKI, PKIZ, and Fernet tokens
> are all bearer tokens, so we force a short lifetime by default,
> because there are always multiple parties capable of compromising the
> integrity of a token. OAuth would be a counter example, where OAuth
> access tokens can (theoretically) live forever.



Still think that is a security violation.



This does sound very interesting. As long as the end user gets
something useful to plug into the openstack auth libraries/APIs,
we're home free (modulo security considerations, etc).

> > 2) isn't implemented. 1) is a complete blocker for many customers.
> >
> > Are there any principal and fundamental reasons why 2 is not doable?
> > What I imagine needs to happen:
> > A) User is authenticated (see *) in Horizon,
> > B) User uses said authentication (token) to request another token
> > from Keystone, which is displayed under the "API Access" tab on
> > "Access & Security".
>
> The (token) here could be an OAuth access token.

Will look into this (also as per our discussion in Austin). The one
issue that has appeared in our continued discussions at home is the
contrast against "service user accounts", which seem relatively
prevalent/common among deployers today, and which basically use
username/password as the api key credentials, e.g. the authZ of the
issued token: if AdminNameless is Domain Admin in their domain, won't
their OAuth access token yield keystone tokens with the same authZ as
they otherwise have? My presumptive answer being 'yes' brought me to
the realization that, if one wants to avoid going the way of "service
user accounts" but still reduce authZ, one would like to be able to
get OAuth access tokens for a specific project, with a specific role
(e.g. "user", 

Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Adam Young

On 05/10/2016 07:08 PM, Flavio Percoco wrote:

On 10/05/16 13:52 -0400, Adam Young wrote:
Forget package management for a moment; we can figure it out if we 
need to.  The question is "Why Go" which I've pondered for a while.


If you need to write a multithreaded app, Python's GIL makes it very 
hard to do.  It is one reason why I pushed for HTTPD as the Keystone 
front end.


Python has long held the option to optimize to native as the way to 
deal with performance sensitive segments.


The question, then, is what language are you going to use to write 
that perf sensitive native code?


To date, there have been two realistic options, Straight C and C++. 
For numeric algorithms, there is a large body written in Fortran that 
are often pulled over for scientific operations. The rest have been 
largely one-offs.


Go and Rust are interesting in that they are both native, as opposed 
to runtime compiled languages like Java and Python. That makes them 
candidates for writing this kind of performance code.


Rust is not there yet.  I like it, but it is tricky to learn, and the 
packaging and distribution is still getting in place.


I disagree ;)
OK, you know I am a RUSTifarian.  But packaging is a real issue in 
Fedora, as the Rust compiler is not packaged yet, and thus neither is 
anything downstream of it.



But really I was attempting to curb my enthusiasm for Rust, as it is not 
the language under discussion here.





Go has been more aggressively integrated into the larger community. 
Probably the most notable and relevant for our world is the 
Kubernetes push toward Go.


In the cosmic scheme of things, I see Go taking on C++ as the "native 
but organized" language, as contrasted with C which is native but 
purely procedural, and thus requires a lot more work to avoid 
security and project scale issues.


I disagree! I don't believe Go will take on C++, but for the sake of 
keeping this thread a bit sane, I won't go into details here.
This is the core of the matter:  they want to build something native, 
performance, and threadable.  The options thus far are C or C++, and I 
would expect people that got to C to continue to do so; their rationale 
still holds.  In the past, I would have suggested people use C++ for an 
Object oriented and structured code base, but Go is a viable native 
alternative unlike ones we've had in the past.


Yes, yes, Rust would also work, but they are not asking about Rust. Curb 
your enthusiasm, too, Flavio!







So, I can see the desire to not start supporting C++, and to jump 
right to Go.  I think it is a reasonable language to investigate for 
this type of coding, but committing to it is less obvious than 
Javascript was:  with Javascript, there is no alternative for dynamic 
web apps, and for native, there are several.


I honestly don't think the above is a good enough motivation to start 
such a huge change in the community. I mean, really, I'm not opposed to 
Go and I'm less opposed to Rust.

As I've mentioned in other replies on this thread, there's more to it 
than just the service specific needs. Infra, CI, Packagers, OPs, 
*community*.

Flavio









Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Adam Young
Forget package management for a moment;  we can figure it out if we need 
to.  The question is "Why Go" which I've pondered for a while.



If you need to write a multithreaded app, Python's GIL makes it very 
hard to do.  It is one reason why I pushed for HTTPD as the Keystone 
front end.


Python has long held the option to optimize to native as the way to deal 
with performance sensitive segments.


The question, then, is what language are you going to use to write that 
perf sensitive native code?


To date, there have been two realistic options, Straight C and C++. For 
numeric algorithms, there is a large body written in Fortran that are 
often pulled over for scientific operations.  The rest have been largely 
one-offs.


Go and Rust are interesting in that they are both native, as opposed to 
runtime compiled languages like Java and Python.  That makes them 
candidates for writing this kind of performance code.


Rust is not there yet.  I like it, but it is tricky to learn, and the 
packaging and distribution is still getting in place.


Go has been more aggressively integrated into the larger community. 
Probably the most notable and relevant for our world is the Kubernetes 
push toward Go.


In the cosmic scheme of things, I see Go taking on C++ as the "native 
but organized" language, as contrasted with C which is native but purely 
procedural, and thus requires a lot more work to avoid security and 
project scale issues.


So, I can see the desire to not start supporting C++, and to jump right 
to Go.  I think it is a reasonable language to investigate for this type 
of coding, but committing to it is less obvious than Javascript was:  
with Javascript, there is no alternative for dynamic web apps, and for 
native, there are several.






Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread Adam Young

On 05/09/2016 02:14 PM, Hayes, Graham wrote:

On 09/05/2016 19:09, Fox, Kevin M wrote:

I think you'll find that being able to embed a higher performance language 
inside python will be much easier to do for optimizing a function or two rather 
then deal with having a separate server have to be created, authentication be 
added between the two, and marshalling/unmarshalling the data to and from it to 
optimize one little piece. Last I heard, you couldn't just embed go in python. 
C/C++ is pretty easy to do. Maybe I'm wrong and it's possible to embed go now. 
Someone, please chime in if you know of a good way.

Thanks,
Kevin

We won't be replacing any particular function, we will be replacing a
whole service.

There is no auth (or inter-service communications) from this component;
all it does is query the DB and spit out DNS packets.

I can't talk for what swift are doing, but we have a very targeted scope
for our Go work.

- Graham
I'm assuming you have a whole body of work discussing Bind and why it is 
not viable for these cases.  Is there a concise version of the discussion?







From: Hayes, Graham [graham.ha...@hpe.com]
Sent: Monday, May 09, 2016 4:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] supporting Go

On 08/05/2016 10:21, Thomas Goirand wrote:

On 05/04/2016 01:29 AM, Hayes, Graham wrote:

On 03/05/2016 17:03, John Dickinson wrote:

TC,

In reference to 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
Thierry's reply, I'm currently drafting a TC resolution to update 
http://governance.openstack.org/resolutions/20150901-programming-languages.html 
to include Go as a supported language in OpenStack projects.

As a starting point, what would you like to see addressed in the document I'm 
drafting?

--John




Great - I was about to write a thread like this :)

Designate is looking to move a single component of ours to Go - and we
were wondering what was the best way to do it.

We discussed this during the summit. You told me that the issue
was a piece of code that needed optimization, to which I replied that
a C++ .so extension in a Python module is probably what you are
looking for (with the advice not to use CFFI, which sometimes breaks
things in distros).

Did you think about this other possibility, and did you discuss it with
your team?

We had a brief discussion about it, and we were going to try a new POC in
C/C++ to validate it, but then this thread (and the related TC policy) were
proposed.

If Golang is going to be a supported language, we would much rather
stick with one of the official OpenStack languages that suits our
use case instead of getting an exemption for another similar language.

When we spoke at the summit, I was under the impression that the feature
branch in swift was not going to be merged to master, and we would have
to get an exemption from the TC anyway - which we could have used to get
C / C++.

The team also much preferred the idea of Golang - we do not have much
C++ expertise in the Designate dev team, which would slow down the
development cycle for us.

-- Graham


At the Linux distribution level, the main issue that there is with Go,
is that it (still) doesn't support the concept of shared library. We see
this as a bug, rather than a feature. As a consequence, when a library
upgrades, the release team has to trigger rebuilds for each and every
reverse dependencies. As the number of Go stuff increases over time, it
becomes less and less manageable this way (and it may potentially be a
security patching disaster in Stable). I've heard that upstream for
Golang was working on implementing shared libs, but I have no idea what
the status is. Does anyone know?

Cheers,

Thomas Goirand (zigo)






Re: [openstack-dev] [Keystone][Nova] Any Code Examples of Other Services Using Keystone Policy?

2016-05-05 Thread Adam Young

On 05/05/2016 05:54 PM, Dolph Mathews wrote:
My understanding from the summit session was that we should have a 
specific role defined in keystone's policy.json here:


https://github.com/openstack/keystone/blob/a16287af5b7761c8453b2a8e278d78652497377c/etc/policy.json#L37

Which grants access to nothing in keystone beyond that check. So, the 
new rule could be revised to something as generic as:


  "identity:get_project": "rule:admin_required or 
project_id:%(target.project.id )s or 
role:identity_get_project",


Where the new role name I appended at the end exactly matches the 
policy rule name.
Would we expect to have the implied rule that Member implies 
identity_get_project?





However, unlike the summit discussion, which specified only providing 
access to HEAD /v3/projects/{project_id}, keystone's usage of policy 
unfortunately wraps both HEAD and GET with the same policy check.
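
To make the rule semantics concrete, here is a toy evaluator for just the
subset of syntax used in the rule above (`or`-joined terms of the form
`rule:`, `role:`, and attribute matches). Real deployments should use
oslo.policy, which implements the full grammar; the role names and IDs
below are purely illustrative:

```python
def check(rule, creds, target):
    # Evaluate a tiny subset of oslo.policy syntax: "or"-joined terms of
    # the form rule:<name>, role:<name>, or <attr>:%(path)s.
    for term in rule.split(" or "):
        kind, _, match = term.partition(":")
        if kind == "rule":
            if check(policies[match], creds, target):
                return True
        elif kind == "role":
            if match in creds["roles"]:
                return True
        else:
            # Attribute match: substitute the target value, compare to creds.
            if creds.get(kind) == match % target:
                return True
    return False

policies = {
    "admin_required": "role:admin",
    "identity:get_project":
        "rule:admin_required or project_id:%(target.project.id)s"
        " or role:identity_get_project",
}

# A user with only the new role can read a project they are not scoped to.
creds = {"roles": ["identity_get_project"], "project_id": "abc"}
print(check(policies["identity:get_project"], creds,
            {"target.project.id": "xyz"}))  # True
```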


On Thu, May 5, 2016 at 3:05 PM, Augustina Ragwitz wrote:


I'm currently working on the spec for Project ID Validation in Nova
using Keystone. The outcome of the Design Summit Session was that the
Nova service user would use the Keystone policy to establish
whether the
requester had access to the project at all to verify the id. I was
wondering if there were any code examples of a non-Keystone service
using the Keystone policy in this way?

Also if I misunderstood something, please feel free to correct me
or to
clarify!

Here is the etherpad from the session:
https://etherpad.openstack.org/p/newton-nova-keystone
And here is the current spec: https://review.openstack.org/#/c/294337


--
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
---
irc: auggy


--
-Dolph




Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-03 Thread Adam Young

On 05/03/2016 09:55 AM, Clint Byrum wrote:

Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer  wrote:


On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:


Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.


This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?


With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it'll switches to Fernet in Newton).


Assume all users are using defaults.


I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking then config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315



Right, I responded there, but just to be clear, this is not about
_operators_ being inconvenienced, it is about _users_.


For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.


Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).




I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self._fernet_keys:
   return self._issue_fernet_token()
else:
   return self._issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self._fernet_keys:
   try:
 self._validate_fernet_token()
   except InvalidFernetFormatting:
 self._validate_uuid_token()
else:
   self._validate_uuid_token()


This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.


You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
   self._validate_fernet_token()
except NotAFernetToken:
   self._validate_uuid_token()


I was actually thinking of a different migration strategy, exactly the 
opposite:  for a while, run with the uuid tokens, but store the Fernet 
body.  After a while, switch from validating the uuid token body to the 
stored Fernet.  Finally, 
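
Clint's fallback pseudocode above can be sketched as a runnable toy, with
tagged strings standing in for real token handling (real Fernet tokens do
happen to share the base64 prefix "gAAAA", which is what makes cheap format
detection possible; everything else here is illustrative):

```python
class NotAFernetToken(Exception):
    pass

def validate_fernet(token):
    # Fernet's version byte (0x80) base64-encodes to a "gAAAA" prefix.
    if not token.startswith("gAAAA"):
        raise NotAFernetToken(token)
    return {"format": "fernet", "body": token}

def validate_uuid(token):
    return {"format": "uuid", "body": token}

def validate(token):
    # Transition strategy: try the new format first, fall back to the old
    # one so tokens issued before the switch stay valid until they expire.
    try:
        return validate_fernet(token)
    except NotAFernetToken:
        return validate_uuid(token)

print(validate("gAAAAdeadbeef")["format"])  # fernet
print(validate("0123abcd")["format"])       # uuid
```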

Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Adam Young

On 05/02/2016 08:07 PM, Rochelle Grober wrote:

But, the original spelling of the landing site is Plimoth Rock.  There were still highway 
signs up in the 70's directing folks to "Plimoth Rock"

--Rocky
Who should know about rocks ;-)

-Original Message-
From: Brian Haley [mailto:brian.ha...@hpe.com]
Sent: Monday, May 02, 2016 3:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Timeframe for naming the P release?
There seems to be some vagueness about which aligns with which.  If the 
Q naming aligns with Boston, I think it could be fun.  The obvious one 
is Quincy, but Quabbin (Reservoir) and lots of other Algonquin names are 
in the running, too.


If it is P, there are many other options:

 * Palmer
 * Paxton
 * Peabody
 * Pembroke
 * Pepperell
 * Peru
 * Phillipston
 * Pittsfield
 * Plainville
 * Plymouth
 * Plympton
 * Princeton
 * Provincetown

And Providence is, I think, close enough for inclusion as well.  And 
that is just the towns.



Plymouth is the only county in Mass with a P name, but Penobscot, ME 
used to be part of MA and should probably be in the running as well.








On 05/02/2016 02:53 PM, Shamail Tahir wrote:

Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This gave us
the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata) releases.

If we were to vote for the name of the P release soon (since the location is now
known) we would be able to have names associated with the current release cycle
(Newton), N+1 (Ocata), and N+2 (P).  This would also allow us to get back to
only voting for one name per release cycle but consistently have names for N,
N+1, and N+2.

Is there really going to be an option besides Plymouth?  I remember something
important happened there in 1620 ;-)

https://en.wikipedia.org/wiki/Plymouth,_Massachusetts



Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Adam Young

On 05/01/2016 05:03 PM, Steven Dake (stdake) wrote:
Ryan had rightly pointed out that when we made the original proposal 
at the 9am morning session, we had asked folks if they wanted to 
participate in a separate repository.


In Keystone, we are going to more and more repositories all the time.  
We started with everything in Keystone server, then split out the 
python-keystoneclient repo, then keystonemiddleware, and now 
keystoneauth.  Kerberos requires a separate auth repo, too, just due to 
package dependencies.  Multiple repos are not a bad thing.  The Policy 
store, and  the drivers behind identity are all candidates for future 
refactoring.


Splitting a repo is not a big deal, but it is easier to do up front than 
to retool.




I think starting with a separate, but supported repository makes things 
much easier.


Kolla is 2 things:

1.  Creation of Containers for deploying the base openstack services.
2.  Actual deployment of the same

I would argue that the architecture for this should be something like five 
repos:


1.  Container production.  Assuming a single toolchain here.
2.  Kubernetes deploy
3.  Ansible deploy
4.  Mesos deploy
5.  kolla-deploy-common.

Python in general makes it hard to have more than one upstream library 
from a repo, so you really want to think about what it looks like from 
PyPI first and organize based on that.


If anything can be pulled out into its own repo, it should.

Yeah, it makes development a bit more of a pain, but there are ways to 
mitigate that.  git subprojects might be a painful one, but it is not 
the only approach.


Over time, I would expect both the Ansible and Kubernetes repos 
themselves to be split into finer repos, with Ansible plugins and 
Kubernetes modules being separately managed.






I don't think a separate repository is the correct approach based upon 
one off private conversations with folks at summit.  Many people from 
that list approached me and indicated they would like to see the work 
integrated in one repository as outlined in my vote proposal email. 
 The reasons I heard were:


  * Better integration of the community
  * Better integration of the code base
  * Doesn't present an us vs them mentality that one could argue
happened during kolla-mesos
  * A second repository makes k8s a second class citizen deployment
architecture without a voice in the full deployment methodology
  * Two gating methods versus one
  * No going back to a unified repository while preserving git history

I favor of the separate repositories I heard

  * It presents a unified workspace for kubernetes alone
  * Packaging without ansible is simpler as the ansible directory need
not be deleted

There were other complaints but not many pros.  Unfortunately I failed 
to communicate these complaints to the core team prior to the vote, so 
now is the time for fixing that.


I'll leave it open to the new folks that want to do the work if they 
want to work on an offshoot repository and open us up to the possible 
problems above.


If you are on this list:

  * Ryan Hallisey
  * Britt Houser

  * mark casey

  * Steven Dake (delta-alpha-kilo-echo)

  * Michael Schmidt

  * Marian Schwarz

  * Andrew Battye

  * Kevin Fox (kfox)

  * Sidharth Surana (ssurana)

  *  Michal Rostecki (mrostecki)

  *   Swapnil Kulkarni (coolsvap)

  *   MD NADEEM (mail2nadeem92)

  *   Vikram Hosakote (vhosakot)

  *   Jeff Peeler (jpeeler)

  *   Martin Andre (mandre)

  *   Ian Main (Slower)

  * Hui Kang (huikang)

  * Serguei Bezverkhi (sbezverk)

  * Alex Polvi (polvi)

  * Rob Mason

  * Alicja Kwasniewska

  * sean mooney (sean-k-mooney)

  * Keith Byrne (kbyrne)

  * Zdenek Janda (xdeu)

  * Brandon Jozsa (v1k0d3n)

  * Rajath Agasthya (rajathagasthya)
  * Jinay Vora
  * Hui Kang
  * Davanum Srinivas



Please speak up if you are in favor of a separate repository or a 
unified repository.


The core reviewers will still take responsibility for determining if 
we proceed on the action of implementing kubernetes in general.


Thank you
-steve




Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Adam Young

On 04/26/2016 08:28 AM, Guangyu Suo wrote:

Hello, oslo team

For now, some sensitive options like passwords or tokens are configured
as plaintext; anyone who has the privilege to read the config file can
get the real password. This may be a security problem that is
unacceptable for some people.

So the first solution that comes to mind is to encrypt these options
when configuring them and decrypt them when reading them in
oslo.config. This is a bit like what apache/openldap do, but the
difference is that those programs apply a salted hash to the password,
a one-way transformation that can't be reversed, and compare against
the stored hash. If we did that in oslo.config, for example for the
admin_password in the keystone_middleware section, we would still have
to feed keystone the plaintext password, which keystone hashes and
compares with the stored hashed password; so the encrypted value in
oslo.config must be decryptable back to plaintext. Therefore we should
encrypt these options using a symmetric or asymmetric method with a
key, put the key in a well-secured place, and decrypt them using the
same key when reading them.


Of course, this feature should be disabled by default. Any ideas?
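
A minimal sketch of where such a decrypt hook would sit when options are
read. Note that base64 here is only a stand-in for a real cipher, used to
keep the example self-contained and runnable; the "{ENC}" marker and the
function names are made up for illustration:

```python
import base64

PREFIX = "{ENC}"   # hypothetical marker distinguishing encrypted values

def encrypt(plaintext):
    # Placeholder transform: base64 is an ENCODING, not encryption.
    # A real implementation would use an actual cipher and an external key.
    return PREFIX + base64.b64encode(plaintext.encode()).decode()

def read_option(raw):
    """What a config loader would do transparently on read."""
    if raw.startswith(PREFIX):
        return base64.b64decode(raw[len(PREFIX):]).decode()
    return raw  # feature off, or a legacy plaintext value

stored = encrypt("s3cret")
print(stored.startswith("{ENC}"))    # True
print(read_option(stored))           # s3cret
print(read_option("plaintext-pw"))   # plaintext-pw
```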


PKI.  Each service gets a client certificate, signed by a 
self-signed CA on the controller node, and uses the Tokenless X.509 
mapping in Keystone to identify itself.


Do not try to build a crypto system around passwords.  None of us are 
qualified to do that.


We should be able to kill explicit service users and use X509 anyway.

Kerberos would work, too, for deployments that prefer that form of 
Authentication.  We can document this, but do not need to implement.


Certmonger can manage the certificates for us.

Anchor can act as the CA for deployments that want something more than 
self-signed certificates but don't want to run a full CA.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [keystone] Liberty - problem with assignment LDAP backend - Groups

2016-04-20 Thread Adam Young

On 04/20/2016 09:10 PM, Dmitry Sutyagin wrote:
Another correction - the issue is observed in Kilo, not Liberty, sorry 
for messing this up. (though this part of the code is identical in L)


On Wed, Apr 20, 2016 at 5:50 PM, Dmitry Sutyagin 
> wrote:


Correction:

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
ra.user_dn.upper() = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'

So this could work if only:
- string in group_dns was str, not unicode
- text was uppercase

Now the question is: should it be so?

On Wed, Apr 20, 2016 at 5:41 PM, Dmitry Sutyagin
> wrote:

Hi everybody,

I am observing the following issue:

LDAP backend is enabled for identity and assignment, domain
specific configs disabled.
LDAP section configured - users, groups, projects and roles
are mapped.
I am able to use the identity v3 API to list users and groups, to
verify that a user is in a group, and also to view role
assignments; everything looks correct so far.
I am able to create a role for a user in LDAP, and if I put a
user directly into a role, everything works.
But when I put a group (which contains that user) into a role,
the user gets a 401.

I have found a spot in the code which causes the issue:


https://github.com/openstack/keystone/blob/stable/liberty/keystone/assignment/backends/ldap.py#L67

This check returns False, here is why:
===
group_dns = ['cn=GroupX,ou=Groups,ou=YYY,dc=...']
role_assignment.user_dn = 'cn=UserX,ou=Users,ou=YYY,dc=...'
===

Therefore the check:

if role_assignment.user_dn.upper() in group_dns

will return False. I do not understand how this is supposed to work:
why should user_dn match group_dn?
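A robust membership test would at least normalize both DNs for case and string type before comparing. This is only a sketch of the comparison problem reported above (normalize_dn is an illustrative helper, not the actual Keystone fix), and real DN equality has further RFC 4514 subtleties such as whitespace around RDN separators:

```python
def normalize_dn(dn):
    # Fold case and coerce to a single string type so that
    # u'CN=GroupX,...' and 'cn=groupx,...' compare equal.
    return str(dn).upper()

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
normalized = {normalize_dn(dn) for dn in group_dns}

# The same group DN in a different case is now found.
assert normalize_dn('cn=groupx,ou=groups,ou=someou,dc=zzz') in normalized

# A user DN, as in the reported check, is (correctly) not a member.
assert normalize_dn('cn=UserX,ou=Users,ou=SomeOU,dc=zzz') not in normalized
```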



I would not advise using the LDAP assignment backend; rather, use 
LDAP for identity and put assignments in SQL.  The LDAP assignment 
backend was deprecated a few releases ago and has since been removed.





-- 
Yours sincerely,

Dmitry Sutyagin
OpenStack Escalations Engineer
Mirantis, Inc.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [TripleO]: landing code faster

2016-04-20 Thread Adam Young

On 04/20/2016 11:44 AM, Dan Prince wrote:

We've had a run of really spotty CI in TripleO. This is making it
really hard to land patches if reviewers aren't online. Specifically we
seem to get better CI results when the queue is less full (nights and
weekends)... often when core reviewers aren't around.

One thing that would help is if core reviews would +2 instead of +1'ing
a patches. If you buy the approach of a gerrit review, the code looks
good, etc. then go on and +2 it. Don't wait for CI to pass before
coming back around to add your final stamp of approval. We all agree
that the tripleo-check jobs should be passing (or have passed once
collectively) before making any final +A to the patch.


Agreed. The rationale for +1 is only if you are a contributor to the 
patch and someone else has made changes; +1 indicates that you are 
happy with the other person's changes.




The case for a core reviewer to +1 a patch is rare I think. If you have
some comments to add but don't want to +2 it then perhaps add those
comments with a +0 (or -1 if you think it needs fixed). Sure there are
some edge cases where +1's are helpful. But if our goal is to land good
code faster I think it would be more helpful to go ahead and +2 and let
the CI results fall where they may.

Dan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Adam Young

On 04/19/2016 11:03 PM, Dean Troyer wrote:



On Tue, Apr 19, 2016 at 8:17 PM, Adam Young <ayo...@redhat.com 
<mailto:ayo...@redhat.com>> wrote:


Maybe it is time to revamp Devstack.  Is there some way that,
without a major rewrite, it could take better advantage of the
CLI? Could we group commands, or migrate sections to python
scripts that really all need to be done together? For example,
most of the early prep of the Keystone server moved to
keystone-manage bootstrap.  Is there more bootstrap-type behavior
we can and should consolidate?


This is what I was talking about: trying to take advantage of the 
interactive mode that also reads from stdin to do a series of commands 
with a single load/auth cycle.  It lacks a LOT of things for a 
resilient use case such as DevStack (error abort or error ignore?, 
branching, everything a DSL would bring).

Right, so let's get those as feature requests into the client, I think.

Could the same DSL be used server-side for batch commands?




And if you'd like to replace stack.sh with stack.py, I'll not stop 
you, just don't call it DevStack.  Now you are building yet another 
deployment tool.  We've also been down that road before. It may well 
be time to retire DevStack, be sure to let us know when those willing 
to sponsor that work show up so they can attempt to learn from some of 
our mistakes and not repeat them the hard way.
Nope.  Not gonna do it. I have no desire to do that.  Nope nope nopey 
nope nope.






dt

--

Dean Troyer
dtro...@gmail.com <mailto:dtro...@gmail.com>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Adam Young

On 04/19/2016 07:24 PM, Jamie Lennox wrote:


Rather than ditching python for something like go, I'd rather put 
together a CLI with no plugins and that only depended on keystoneauth 
and os-client-config as libraries. No?
Let me add that if you are doing anything non-trivial with the CLI, you 
might want to think about just coding the whole thing in Python... which 
is, as you recall, what we suggested DevStack do about 5 years ago.  
There was a firm "stay in bash" push at the time, and so, yeah, we have 
a one-CLI-call-at-a-time install process.



Maybe it is time to revamp Devstack.  Is there some way that, without a 
major rewrite, it could take better advantage of the CLI? Could we group 
commands, or migrate sections to python scripts that really all need to 
be done together?  For example, most of the early prep of the Keystone 
server moved to keystone-manage bootstrap.  Is there more bootstrap-type 
behavior we can and should consolidate?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Adam Young

On 04/18/2016 09:19 AM, Daniel P. Berrange wrote:

There have been threads in the past about the slowness of the "openstack"
client tool such as this one by Sean last year:

   http://lists.openstack.org/pipermail/openstack-dev/2015-April/061317.html

Sean mentioned a 1.5s fixed overhead on openstack client, and mentions it
is significantly slower than the equivalent nova command. In my testing
I don't see any real speed difference between openstack & nova client
programs, so maybe that differential has been addressed since Sean's
original thread, or maybe nova has got slower.

Overall though, I find it is way too sluggish considering it is running
on a local machine with 12 cpus and 30 GB of RAM.

I had a quick go at trying to profile the tools with cprofile and analyse
with KCacheGrind as per this blog:

   
https://julien.danjou.info/blog/2015/guide-to-python-profiling-cprofile-concrete-case-carbonara

And notice that in profiling 'nova help' for example, the big sink appears
to come from the 'pkg_resource' module and its use of pyparsing. I didn't
spend any real time to dig into this in detail, because it got me wondering
whether we can easily just avoid the big startup penalty by not having to
startup a new python interpretor for each command we run.

I traced devstack and saw it run 'openstack' and 'neutron' commands approx
140 times in my particular configuration. If each one of those has a 1.5s
overhead, we could potentially save 3 & 1/2 minutes off devstack execution
time.

So as a proof of concept I have created an 'openstack-server' command
which listens on a unix socket for requests and then invokes the
OpenStackShell.run / OpenStackComputeShell.main / NeutronShell.run
methods as appropriate.

I then replaced the 'openstack', 'nova' and 'neutron' commands with
versions that simply call to the 'openstack-server' service over the
UNIX socket. Since devstack will always recreate these commands in
/usr/bin, I simply put my replacements in $HOME/bin and then made
sure $HOME/bin was first in the $PATH

You might call this 'command line as a service' :-)

Anyhow, with my devstack setup a traditional install takes

   real 21m34.050s
   user 7m8.649s
   sys  1m57.865s

And when using openstack-server it only takes

   real 17m47.059s
   user 3m51.087s
   sys  1m42.428s

So that has cut 18% off the total running time for devstack, which
is quite considerable really.

I'm attaching the openstack-server & replacement openstack commands
so you can see what I did. You have to manually run the openstack-server
command ahead of time and it'll print out details of every command run
on stdout.

Anyway, I'm not personally planning to take this experiment any further.
I'll probably keep using this wrapper in my own local dev env since it
does cut down on devstack time significantly. This mail is just to see
if it'll stimulate any interesting discussion or motivate someone to
explore things further.
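The attached scripts are not reproduced in the archive, but the shape of the experiment, one warm Python process answering CLI requests over a Unix socket, can be sketched like this. The echo handler below stands in for dispatching to OpenStackShell.run and friends, so the toy stays runnable without OpenStack installed; the socket path is illustrative.

```python
import os
import shlex
import socketserver

SOCK = "/tmp/openstack-server.sock"  # illustrative socket path

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request per connection: a single command line.
        argv = shlex.split(self.rfile.readline().decode())
        # The real prototype dispatched argv to the in-process
        # OpenStackShell / NeutronShell entry points; we just echo
        # the parsed arguments back to the client.
        self.wfile.write((" ".join(argv) + "\n").encode())

if os.path.exists(SOCK):
    os.unlink(SOCK)
server = socketserver.UnixStreamServer(SOCK, Handler)
# server.serve_forever()  # blocks; run in a daemon thread in practice
```

The replacement `/usr/bin` shims then become trivial clients that write their argv to the socket, paying the interpreter and pkg_resources startup cost only once.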


I wonder how much of that is token caching.  In a typical CLI usage 
pattern, a new token is created each time a client is called, with no 
passing of a token between services.  Using a session can greatly 
decrease the number of round trips to Keystone.







Regards,
Daniel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-04-18 Thread Adam Young

On 04/18/2016 12:34 PM, Martin Millnert wrote:

Hi,

we're deploying Liberty (soon Mitaka) with heavy reliance on the SAML2
Federation system by Keystone where we're a Service Provider (SP).

The problem in this situation is getting a token for direct API
access.(*)

There are conceptually two methods to use the CLI:
  1) Modify ones (each customer -- in our case O(100)) IdP to add support
for a feature called ECP(**), and then use keystoneauth with SAML2
plugin,
We have this working.  Your SAML provider might not.  What are you 
working with?




  2) Go to (for example) "Access & Security / API Access / View
Credentials" in Horizon, and check out a token from there.

Insecure as all get out.  Please no.



2) isn't implemented. 1) is a complete blocker for many customers.

Are there any principal and fundamental reasons why 2 is not doable?
What I imagine needs to happen:
   A) User is authenticated (see *) in Horizon,
   B) User uses said authentication (token) to request another token from
Keystone, which is displayed under the "API Access" tab on "Access &
Security".

 From a general perspective, I can't see why this shouldn't work.


Let me explain.  A token is a symmetric shared secret.  You should not 
be copying it around, and you should not make it readable in a web UI.  
You control it and, ideally, never let it see the light of day.




Whatever scoping the user currently has should be sufficient to check
out a similarly-or-lesser scoped token.

Anyway, we will, if this is at all doable, bolt this onto our local
deployment. I do, A) believe we're not alone with this use case (***),
B) look for input on doability.

We'll be around in Austin for discussion with Horizon/Keystone regarding
this if necessary.

Regards,
Martin Millnert

(* The reason this is a problem: With Federation, there are no local
users and passwords in the Keystone database. When authenticating to
Horizon in this setup, Keystone (I think) redirects the user to an HTTP
page on the home site's Identity Provider (IdP), which performs the
authentication. The IdP then signs a set of entitlements about this
identity, and sends these back to Keystone. Passwords stay at home. Epic
Win.)

(** ECP is a new feature, not supported by all IdP's, that at (second)
best requires reconfiguration of core authentication services at each
customer, and at worst requires customers to change IdP software
completely. This is a varying degree of showstopper for various
customers.)

(***
https://stackoverflow.com/questions/20034143/getting-auth-token-from-keystone-in-horizon
https://ask.openstack.org/en/question/51072/get-keystone-auth-token-via-horizon-url/
)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Adam Young

On 04/18/2016 10:29 AM, Brant Knudson wrote:



On Fri, Apr 15, 2016 at 9:04 PM, Adam Young <ayo...@redhat.com 
<mailto:ayo...@redhat.com>> wrote:


We all want Fernet to be a reality.  We ain't there yet (Except
for mfish who has no patience) but we are getting closer.  The
goal is to get Fernet as the default token provider as soon as
possible. The review to do this has uncovered a few details that
need to be fixed before we can do this.

Trusts for V2 tokens were not working correctly. Relatively easy
fix. https://review.openstack.org/#/c/278693/ Patch is still
failing on Python 3.  The tests are kindof racy due to the
revocation event 1 second granularity. Some of the tests here have
A sleep (1) in them still, but all should be using the time
control aspect of the unit test fixtures.

Some of the tests also use the same user to validate a token as
that have, for example, a role unassigned.  These expose a problem
that the revocation events are catching too many tokens, some of
which should not be treated as revoked.

Also, some of the logic for revocation checking has to change.
Before, if a user had two roles, and had one removed, the token
would be revoked.  Now, however, the token will validate
successful, but the response will only have the single assigned
role in it.


Python 3 tests are failing because the Fernet formatter is
insisting that all project-ids be valid UUIDs, but some of the old
tests have "FOO" and "BAR" as ids.  These either need to be
converted to UUIDS, or the formatter needs to be more forgiving.

Caching of token validations was messing with revocation checking.
Tokens that were valid once were being reported as always valid.
Thus, the current review  removes all caching on token
validations, a change we cannot maintain.  Once all the test are
successfully passing, we will re-introduce the cache, and be far
more aggressive about cache invalidation.

Tempest tests are currently failing due to Devstack not properly
identifying Fernet as the default token provider, and creating the
Fernet key repository.  I'm tempted to just force devstack to
always create the directory, as a user would need it if they ever
switched the token provider post launch anyway.


There's a review to change devstack to default to fernet: 
https://review.openstack.org/#/c/195780/ . This was mostly to show 
that tempest still passes with fernet configured. It uncovered a 
couple of test issues (similar in nature to the revocation checking 
issues mentioned in the original note) that have since been fixed.


We'd prefer to not have devstack overriding config options and instead 
use keystone's defaults. The problem is if fernet is the default in 
keystone then it won't work out of the box since the key database 
won't exist. One option that I think we should investigate is to have 
keystone create the key database on startup if it doesn't exist.


In some deployments, the keys should be owned by different users.  In 
general, a system/daemon user should not be writing to /etc.  Key 
rotation and the like are likely to be handled by an external content 
management system, so it might not be the right default.





- Brant



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Adam Young

On 04/18/2016 02:10 PM, Matt Fischer wrote:

Thanks Brant,

I was missing that distinction.

On Mon, Apr 18, 2016 at 9:43 AM, Brant Knudson <b...@acm.org 
<mailto:b...@acm.org>> wrote:




On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer
<m...@mattfischer.com <mailto:m...@mattfischer.com>> wrote:

On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson <b...@acm.org
<mailto:b...@acm.org>> wrote:



    On Fri, Apr 15, 2016 at 9:04 PM, Adam Young
<ayo...@redhat.com <mailto:ayo...@redhat.com>> wrote:

We all want Fernet to be a reality.  We ain't there
yet (Except for mfish who has no patience) but we are
getting closer.  The goal is to get Fernet as the
default token provider as soon as possible. The review
to do this has uncovered a few details that need to be
fixed before we can do this.

Trusts for V2 tokens were not working correctly.
Relatively easy fix.
https://review.openstack.org/#/c/278693/ Patch is
still failing on Python 3.  The tests are kindof racy
due to the revocation event 1 second granularity. 
Some of the tests here have A sleep (1) in them still,

but all should be using the time control aspect of the
unit test fixtures.

Some of the tests also use the same user to validate a
token as that have, for example, a role unassigned. 
These expose a problem that the revocation events are

catching too many tokens, some of which should not be
treated as revoked.

Also, some of the logic for revocation checking has to
change. Before, if a user had two roles, and had one
removed, the token would be revoked.  Now, however,
the token will validate successful, but the response
will only have the single assigned role in it.


Python 3 tests are failing because the Fernet
formatter is insisting that all project-ids be valid
UUIDs, but some of the old tests have "FOO" and "BAR"
as ids.  These either need to be converted to UUIDS,
or the formatter needs to be more forgiving.

Caching of token validations was messing with
revocation checking. Tokens that were valid once were
being reported as always valid. Thus, the current
review  removes all caching on token validations, a
change we cannot maintain.  Once all the test are
successfully passing, we will re-introduce the cache,
and be far more aggressive about cache invalidation.

Tempest tests are currently failing due to Devstack
not properly identifying Fernet as the default token
provider, and creating the Fernet key repository.  I'm
tempted to just force devstack to always create the
directory, as a user would need it if they ever
switched the token provider post launch anyway.


There's a review to change devstack to default to fernet:
https://review.openstack.org/#/c/195780/ . This was mostly
to show that tempest still passes with fernet configured.
It uncovered a couple of test issues (similar in nature to
the revocation checking issues mentioned in the original
note) that have since been fixed.

We'd prefer to not have devstack overriding config options
and instead use keystone's defaults. The problem is if
fernet is the default in keystone then it won't work out
of the box since the key database won't exist. One option
that I think we should investigate is to have keystone
create the key database on startup if it doesn't exist.

- Brant



I'm not a devstack user, but as I mentioned before, I assume
devstack called keystone-manage db_sync? Why couldn't it also
call keystone-manage fernet_setup?


When you tell devstack that it's using fernet then it does
keystone-manage fernet_setup. When you tell devstack to use the
default, it doesn't fernet_setup because for now it thinks the
default is UUID and doesn't require keys. One way to have devstack
work when fernet is the default is to have devstack always do
keystone-manage fernet_setup.

My thought was to have this as a temporary fix as the default changes.  
Once we settle into Fernet, we can swap to "only Fernet if Fernet".


There is no reason Devstack can't read the config option from Keystone, 
but that is a larger change.

[openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-15 Thread Adam Young
We all want Fernet to be a reality.  We ain't there yet (Except for 
mfish who has no patience) but we are getting closer.  The goal is to 
get Fernet as the default token provider as soon as possible. The review 
to do this has uncovered a few details that need to be fixed before we 
can do this.


Trusts for V2 tokens were not working correctly.  Relatively easy fix: 
https://review.openstack.org/#/c/278693/ The patch is still failing on 
Python 3.  The tests are kind of racy due to the revocation events' 
1-second granularity.  Some of the tests here still have a sleep(1) in 
them, but all should be using the time-control aspect of the unit test 
fixtures.


Some of the tests also validate a token for the same user that has, for 
example, just had a role unassigned.  These expose a problem: the 
revocation events are catching too many tokens, some of which should not 
be treated as revoked.


Also, some of the logic for revocation checking has to change.  Before, 
if a user had two roles and had one removed, the token would be 
revoked.  Now, however, the token will validate successfully, but the 
response will only have the single assigned role in it.



Python 3 tests are failing because the Fernet formatter is insisting 
that all project-ids be valid UUIDs, but some of the old tests have 
"FOO" and "BAR" as ids.  These either need to be converted to UUIDS, or 
the formatter needs to be more forgiving.


Caching of token validations was messing with revocation checking: 
tokens that were valid once were being reported as always valid.  Thus, 
the current review removes all caching on token validations, a change we 
cannot maintain.  Once all the tests are passing, we will re-introduce 
the cache and be far more aggressive about cache invalidation.


Tempest tests are currently failing due to Devstack not properly 
identifying Fernet as the default token provider, and creating the 
Fernet key repository.  I'm tempted to just force devstack to always 
create the directory, as a user would need it if they ever switched the 
token provider post launch anyway.
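The key repository this thread keeps circling around exists because Fernet tokens must stay decryptable across key rotations. The idea can be sketched with the cryptography library's MultiFernet; this illustrates the rotation concept only, not Keystone's actual keystone-manage implementation or key file layout:

```python
from cryptography.fernet import Fernet, MultiFernet

# A token issued under an older key...
old_key = Fernet(Fernet.generate_key())
token = old_key.encrypt(b"project:demo user:alice")

# ...must still validate after rotation.  MultiFernet encrypts with
# the first (primary) key and tries each key in turn on decrypt,
# which is why the repository keeps older keys around.
new_key = Fernet(Fernet.generate_key())
repo = MultiFernet([new_key, old_key])
assert repo.decrypt(token) == b"project:demo user:alice"

# Fresh tokens are issued with the primary key only.
fresh = repo.encrypt(b"project:demo user:bob")
assert new_key.decrypt(fresh) == b"project:demo user:bob"
```

This is also why the repository directory has to exist before the first token is issued, which is the devstack chicken-and-egg problem discussed above.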


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Newton midycle planning

2016-04-13 Thread Adam Young

On 04/13/2016 10:07 PM, Morgan Fainberg wrote:
It is that time again, the time to plan the Keystone midcycle! Looking 
at the schedule [1] for Newton, the weeks that make the most sense 
look to be (not in preferential order):


R-14 June 27-01

Might be interesting having one this early in the cycle.

R-12 July 11-15

Won't be able to make this;  planned family vacation this week.

R-11 July 18-22

Prefer this.



As usual this will be a 3 day event (probably Wed, Thurs, Fri), and 
based on previous attendance we can expect ~30 people to attend. Based 
upon all the information (other midcycles, other events, the US 
July4th holiday), I am thinking that week R-12 (the week of the 
newton-2 milestone) would be the best offering. Weeks before or after 
these three tend to push too close to the summit or too far into the 
development cycle.


I am trying to arrange for a venue in the Bay Area (most likely will 
be South Bay, such as Mountain View, Sunnyvale, Palo Alto, San Jose) 
since we have done east coast and central over the last few midcycles.


Please let me know your thoughts / preferences. In summary:

* Venue will be Bay Area (more info to come soon)

* Options of weeks (in general subjective order of preference): R-12, 
R-11, R-14


Cheers,
--Morgan

[1] http://releases.openstack.org/newton/schedule.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Adam Young

On 04/12/2016 03:43 PM, Hongbin Lu wrote:


Hi all,

In short, some Magnum team members proposed to store TLS certificates 
in Keystone credential store. As Magnum PTL, I want to get agreements 
(or non-disagreement) from OpenStack community in general, Keystone 
community in particular, before approving the direction.


In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for 
storing TLS certificates.



No, it does not.

Nothing requires "secure storing of certificates."

What is required is "secure storing of private keys."  Period.  Nothing 
else needs to be securely stored.


The next step is the "signing" of X509 certificates, and this requires a 
CA.  Barbican is the OpenStack abstraction for a CA, but it still 
requires a "real" implementation to back it.  Dogtag is available for 
this role.



Now, what Keystone can and should do is provide a way to map an X509 
Certificate to a user.  This is actually much better done using the 
Federation approach than the Credentials store.


Credentials kinda suck.  They should die in a fire.  They can't, but 
they should. Different rant though.


So, to nail it down specifically:  Keystone's  sole role here is to map  
the Subject from an X509 certificate to a user_id.  If you try to do 
anything more than that with Keystone, you are in a state of sin.
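The mapping described here, the Subject of an X509 certificate to a user_id, can be sketched with the cryptography library. The lookup table at the end is purely illustrative; in Keystone the Federation mapping engine does this with configurable mapping rules, and the self-signed certificate exists only to make the sketch self-contained:

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Build a throwaway self-signed cert so the sketch runs standalone.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"magnum-svc")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
    .sign(key, hashes.SHA256())
)

# Keystone's sole job in this scheme: Subject -> user_id.
subject_cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
user_ids = {"magnum-svc": "8fd6a1c2"}  # illustrative mapping table
print(user_ids[subject_cn])
```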


So, if what you want to do is store an X509 certificate in the 
Keystone Credentials API, go for it, but I don't know what it would buy 
you, as only the "owner" of that cert would then be able to retrieve it.



If, on the other hand, what you want to do is decouple the 
request/approval of X509 from Barbican, I would suggest you use 
Certmonger.  It is an operating-system-level tool for exactly this 
purpose.  And then we should make sure that Barbican can act as a CA for 
Certmonger (I know that Dogtag already can).



There is nothing Magnum specific about this.  We need to solve the Cert 
story for OpenStack in general.  We need TLS for The Message Broker and 
the Database connections as well as any HTTPS servers we have.





Currently, we leverage Barbican for this purpose, but we constantly 
received requests to decouple Magnum from Barbican (because users 
normally don’t have Barbican installed in their clouds). Some Magnum 
team members proposed to leverage Keystone credential store as a 
Barbican alternative [1]. Therefore, I want to confirm what is 
Keystone team position for this proposal (I remembered someone from 
Keystone mentioned this is an inappropriate use of Keystone. Would I 
ask for further clarification?). Thanks in advance.


[1] 
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store


Best regards,

Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Adam Young

On 04/06/2016 05:41 PM, Fox, Kevin M wrote:

-1 for man in the middle susceptible solutions. This also doesn't solve all the 
issues listed in the spec, such as suspended nodes, snapshotted nodes, etc.
Self-signed MITM?  That is only an issue if you are not trusting the 
initial setup.  I want a real CA, but we can't force everyone to do that.



Secure the Message bus is essential anyway.  Let's not piecemeal a 
solution here.





Nova has several back channel mechanisms at its disposal. We should use one or 
more of them to solve the problem properly instead of opening a security hole 
in our solution to a security problem.

Such as:
  * The nova console is one mechanism that could be utilized as a secure back 
channel.
  * The vm based instances could add a virutal serial port as a back channel.
  * Some bare metal bmc's support virtual cd's which could be loaded with fresh 
credentials upon request.
  * The metadata server is reliable in certain situations.

I'm sure there are more options too.

The instance user spec covers a lot of that stuff.

I'm ok if we want to refactor the instance user spec to cover creating phase 1 
credentials that are intended to be used for things other than getting a 
keystone token. It could be used to register/reregister with ipa, chef, puppet, 
etc. We just need to reword the spec to cover that use case too.

I'm also not tied to the implementation listed. It just needs to meet the 
requirements.

Thanks,
Kevin


From: Adam Young [ayo...@redhat.com]
Sent: Wednesday, April 06, 2016 2:09 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Minimal secure identification of a new VM

On 04/06/2016 05:42 AM, Daniel P. Berrange wrote:

On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:

We have a use case where we want to register a newly spawned Virtual machine
with an identity provider.

Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a
secure way to do this.  Injecting files does not provide a path that cannot
be seen by other VMs or machines in the system.

For our use case, a short lived One-Time-Password is sufficient, but for
others, I think asymmetric key generation makes more sense.

Is the following possible:

1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
infrastructure (somehow) that it has done so.

There's no currently secure channel for the guest to push information
to Nova.

We need to secure the message queue from the compute node to conductor.
This is very achievable:

1.  Each compute node gets its own rabbit user
2.  Messages from compute node to Conductor are validated as to what
node sent them

We should enable TLS on the network as well, or password can be
sniffed.  Self signed is crappy, but probably sufficient for a baseline
deployment. Does not defend against MITM.  Puppet based deployments can
mitigate.
X509 client cert is a better auth mechanism than password, but not
essential.
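As a rough illustration of the per-node validation idea above (this is a sketch, not oslo.messaging or RabbitMQ code: a per-node shared secret and an HMAC stand in for per-compute-node broker credentials; all names are hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical per-node secrets, standing in for per-compute-node
# RabbitMQ users/credentials.
NODE_SECRETS = {"compute-1": b"s3cret-1", "compute-2": b"s3cret-2"}


def sign(node, payload):
    """Compute node signs a message with its own credential."""
    body = json.dumps({"sender": node, "payload": payload}, sort_keys=True)
    mac = hmac.new(NODE_SECRETS[node], body.encode(), hashlib.sha256).hexdigest()
    return body, mac


def validate(body, mac):
    """Conductor checks that the node named inside the message really is
    the node whose credential produced the signature, so one compromised
    compute node cannot impersonate another."""
    msg = json.loads(body)
    expected = hmac.new(NODE_SECRETS[msg["sender"]], body.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

TLS on the wire (as discussed above) would protect the secret in transit; the signing only binds sender identity to message content.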




   The best we have is the metadata service, but we'd need to
secure that with https, because the metadata server cannot be assumed
to be running on the same host as the VM & so the channel is not protected
against MITM attacks.

Also currently the metadata server is readonly with the guest pulling
information from it - it doesn't currently allow guests to push information
into it. This is nice because the metadata servers could theoretically be
locked down to prevent many interactions with the rest of nova - it should
only need read-only access to info about the guests it is serving. If we
turn the metadata server into a bi-directional service which can update
information about guests, then it opens it up as a more attractive avenue
of attack for a guest OS trying to breach the host infra. This is a fairly
general concern with any approach where the guest has to have the ability
to push information back into Nova.


2.  Nova Compute reads the public Key off the device and sends it to
conductor, which would then associate the public key with the server?

3.  A third party system could then validate the association of the public
key and the server, and build a work flow based on some signed document from
the VM?

Regards,
Daniel






Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Adam Young

On 04/06/2016 04:56 PM, Dolph Mathews wrote:
For some historical perspective, that's basically how v2 was designed. 
The "public" service (port 5000) did nothing but the auth flow. The 
"admin" service (port 35357) was identity management.


Unfortunately, there are (perhaps uncommon) authentication flows 
where, for example, you need to 1) authenticate for an unscoped token, 
2) retrieve a list of tenants where you have some authorization, 3) 
re-authenticate for a scoped token. There was a lot of end-user 
confusion over what port was used for what operations (Q: "Why is my 
curl request returning 404? I'm doing what the docs say to do!" A: 
"You're calling the wrong port."). More and more calls straddled the 
line between the two APIs, blurring their distinction.
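The three-step v2 flow described above can be sketched as follows. This is illustrative only: it builds the JSON bodies a client would historically POST to the v2.0 tokens API (step 2 was a GET on /v2.0/tenants with the unscoped token in X-Auth-Token), with no network calls made.

```python
def unscoped_auth(username, password):
    """Step 1: body for POST /v2.0/tokens, yielding an unscoped token."""
    return {"auth": {"passwordCredentials":
                     {"username": username, "password": password}}}


def scoped_auth(token_id, tenant_name):
    """Step 3: re-authenticate with the unscoped token for a token
    scoped to one of the tenants listed in step 2."""
    return {"auth": {"token": {"id": token_id}, "tenantName": tenant_name}}
```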


The approach we took in v3 was to consolidate the APIs into a single, 
functional superset, and use RBAC to distinguish between use cases in 
a more flexible manner.


On Wed, Apr 6, 2016 at 2:26 PM, Boris Pavlovic wrote:


Hi stackers,

I would like to suggest the very simple idea of splitting the
authentication part of Keystone out into a separate project.

Such change has 2 positive outcomes:
1) It will be quite simple to create a scalable, high-performance
service for authentication based on very mature projects like
Kerberos [1] and OpenLDAP [2].


You can basically do this today if you just focus on implementing 
drivers for the few bits of keystone you need, and disable the rest.


We should deprecate the userid/password in the token body and use the 
BasicAuth mechanism in its place.  Then password auth could be a federated 
call like anything else.  We could do that logic in middleware instead 
of an Apache module.


A comparable middleware/Apache module could also be used in other 
services, allowing the identity inside of Keystone to be used with 
remote services.


Ideally, we would get out of the business of distributing tokens 
altogether, and use the standard mechanism for authentication that the 
web has when talking to the services directly.  Keystone then reduces to 
a service catalog lookup for end users.






2) This will reduce scope of Keystone, which means 2 things
2.1) Smaller code base that has less issues and is simpler for testing
2.2) Keystone team would be able to concentrate more on fixing
perf/scalability issues of authorization, which is crucial at the
moment for large clouds.


(2.2) is particularly untrue, because this will cause at least 2 
releases worth of refactoring work for everyone, and another 6 
releases justifying to deployers why their newfound headaches are 
worthwhile. Perhaps after burning those ~4 years of productivity, we'd 
be able to get back to "fixing perf/scalability issues of authorization."



Thoughts?

[1] http://web.mit.edu/kerberos/
[2] http://ldapcon.org/2011/downloads/hummel-slides.pdf

Best regards,
Boris Pavlovic









Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Adam Young

On 04/06/2016 10:44 AM, Dan Prince wrote:

On Tue, 2016-04-05 at 19:19 -0600, Rich Megginson wrote:

On 04/05/2016 07:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with
Tripleo
to
reasonably discuss how to implement solutions for some of the
main
security needs of a deployment.


FreeIPA is an identity management solution that can provide
support
for:

1. TLS on all network communications:
  A. HTTPS for web services
  B. TLS for the message bus
  C. TLS for communication with the Database.
2. Identity for all Actors in the system:
 A.  API services
 B.  Message producers and consumers
 C.  Database consumers
 D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA,
we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates
make
more sense for our deployment tooling? This could be useful for
both
undercloud and overcloud cases.

As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment, such that users wishing to use FreeIPA can opt in?


Now, I know a lot of people have an allergic reaction to some,
maybe
all, of these technologies. They should not be required to be
running
in
a development or testbed setup.  But we need to make it possible
to
secure an end deployment, and FreeIPA was designed explicitly for
these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the Undercloud is installed on a physical machine,
we
want
to treat the FreeIPA server as a managed service of the
undercloud
that
is then consumed by the rest of the overcloud. Right now, there
are
conflicts for some ports (8080 used by both swift and Dogtag)
that
prevent a drop-in run of the server on the undercloud
controller.  Even
if we could deconflict, there is a possible battle between
Keystone
and
the FreeIPA server on the undercloud.  So, while I would like to
see
the
ability to run the FreeIPA server on the Undercloud machine
eventually, I
think a more realistic deployment is to build a separate virtual
machine, parallel to the overcloud controller, and install
FreeIPA
there. I've been able to modify Tripleo Quickstart to provision
this
VM.

I was also able to run FreeIPA in a container on the undercloud
machine,
but this is, I think, not how we want to migrate to a container
based
strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first,
and
create service users in there, this produces a different install
path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the
right
approach is to run the overcloud deploy, then "harden" the
deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparing for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
https://github.com/admiyo/rippowam was able to fully lock down a
Packstack based install.  I'd like to reuse as much of Rippowam
as
possible, but called from Heat Templates as part of an overcloud
deploy.  I do not really want to re-implement Rippowam in Puppet.

As we are using Puppet for our configuration I think this is
currently
a requirement. There are many good puppet examples out there of
various
servers and a quick google search showed some IPA modules are
available
as well.

I think most TripleO users are quite happy in using puppet modules
for
configuration in that the puppet openstack modules are quite mature
and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.

What about calling an ansible playbook from a puppet module?

Given our current toolset in TripleO having the ability to manage all
service configurations with a common language overrides any short cuts
that calling Ansible from Puppet would give you I think.

The best plan I think for IPA integration into the Over and underclouds
would be a puppet-freeipa module.
Puppet is fine.  I have some feedback from the IPA side that 
https://github.com/purpleidea/puppet-ipa/ works OK.  Work on it seems 
to have tapered off last June, but we could revive it.





So, big question: is Heat->ansible (instead of Puppet) for an
overcloud
deployment an acceptable path?  We are talking Ansible 1.0
Playbooks,
which should be relatively straightforward ports to 2.0 when the
time
comes.

Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the

Re: [openstack-dev] [horizon] - oAuth tab proposal

2016-04-06 Thread Adam Young

On 04/06/2016 03:20 PM, Brad Pokorny wrote:

The last I heard, oauth is likely to be deprecated in Keystone [1].

If you're interested in having it stay around, please let the Keystone 
team know. It would only make sense to add it to Horizon if it's going 
to stay.


[1] http://openstack.markmail.org/message/ihqbetack26g5gmg

Thanks,
Brad



We are looking to unify all of the delegation mechanisms: role 
assignments, trusts, and OAuth.  It's going to be a topic at the Austin 
summit.  A unified UI for these would be awesome.





From: "Rob Cresswell (rcresswe)" >
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" >

Date: Thursday, March 31, 2016 at 8:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>

Subject: Re: [openstack-dev] [horizon] - oAuth tab proposal

Could you put up a blueprint for discussion? We have a weekly meeting 
to review blueprints: 
https://wiki.openstack.org/wiki/Meetings/HorizonDrivers


The blueprint template is here: 
https://blueprints.launchpad.net/horizon/+spec/template


Thanks!

Rob

On 31 Mar 2016, at 10:57, Marcos Fermin Lobo wrote:


Hi all,

I would like to propose a new tab in "Access and security" web page.

As you know, Keystone offers an OAuth plugin for authentication. This 
means that third-party applications could access OpenStack cloud 
resources using OAuth. This is currently possible using the CLI, but 
there is nothing (AFAIK) in Horizon.


I would propose a new tab in the "Access and security" web page to manage 
OAuth credentials. As usual, this new tab would have a list of OAuth 
credentials with buttons to approve and remove them.


Please see a simple mockup here: 
https://mferminl.web.cern.ch/mferminl/mockups/horizon-oauth-mockup.png


Comments, suggestions... are very welcome!

Cheers,
Marcos.






Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 09:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo
to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support
for:

1. TLS on all network communications:
 A. HTTPS for web services
 B. TLS for the message bus
 C. TLS for communication with the Database.
2. Identity for all Actors in the system:
A.  API services
B.  Message producers and consumers
C.  Database consumers
D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.
Barbican is not a CA.  However, it can use the KRA deployed with Dogtag 
to store its secrets, so this actually supports Barbican nicely.




As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment, such that users wishing to use FreeIPA can opt in?
Yep.  The big thing it gives you is the Cert management, and I don't 
want to rewrite that, but the rest can stay out of the way.


I do suspect that, once it is there, we will want to use more of IPA, 
but that is not the goal.







Now, I know a lot of people have an allergic reaction to some, maybe
all, of these technologies. They should not be required to be running
in
a development or testbed setup.  But we need to make it possible to
secure an end deployment, and FreeIPA was designed explicitly for
these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the Undercloud is installed on a physical machine, we
want
to treat the FreeIPA server as a managed service of the undercloud
that
is then consumed by the rest of the overcloud. Right now, there are
conflicts for some ports (8080 used by both swift and Dogtag) that
prevent a drop-in run of the server on the undercloud
controller.  Even
if we could deconflict, there is a possible battle between Keystone
and
the FreeIPA server on the undercloud.  So, while I would like to see
the
ability to run the FreeIPA server on the Undercloud machine
eventually, I
think a more realistic deployment is to build a separate virtual
machine, parallel to the overcloud controller, and install FreeIPA
there. I've been able to modify Tripleo Quickstart to provision this
VM.

I was also able to run FreeIPA in a container on the undercloud
machine,
but this is, I think, not how we want to migrate to a container
based
strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first, and
create service users in there, this produces a different install
path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
approach is to run the overcloud deploy, then "harden" the
deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparing for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
https://github.com/admiyo/rippowam was able to fully lock down a
Packstack based install.  I'd like to reuse as much of Rippowam as
possible, but called from Heat Templates as part of an overcloud
deploy.  I do not really want to re-implement Rippowam in Puppet.

As we are using Puppet for our configuration I think this is currently
a requirement. There are many good puppet examples out there of various
servers and a quick google search showed some IPA modules are available
as well.

I think most TripleO users are quite happy in using puppet modules for
configuration in that the puppet openstack modules are quite mature and
well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.


Yeah, and I think I am fine with that.  It just means I have to rewrite 
some stuff, and that makes sense in keeping things consistent.  Just 
figured I'd ask first before I had to start getting deep into Puppet.





So, big question: is Heat->ansible (instead of Puppet) for an
overcloud
deployment an acceptable path?  We are talking Ansible 1.0
Playbooks,
which should be relatively straightforward ports to 2.0 when the time
comes.

Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the allocated VM
3. Register the compute nodes and the controller as IPA clien

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
  A. HTTPS for web services
  B. TLS for the message bus
  C. TLS for communication with the Database.
2. Identity for all Actors in the system:
 A.  API services
 B.  Message producers and consumers
 C.  Database consumers
 D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)






There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since to support FreeIPA, but it
requires that it is the point of truth for DNS information, as does
Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals



Let's table that for now.  No reason they should not be able to 
interoperate somehow.






2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
thus far has been Certificate management.  This provides a Dogtag server
for Certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an Identity Provider upon launch.  This gives that efforts
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned Centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.









Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 11:42 AM, Fox, Kevin M wrote:
Yeah, and they just deprecated vendor data plugins too, which 
eliminates my other workaround. :/


We need to really discuss this problem at the summit and get a viable 
path forward. Its just getting worse. :/


Thanks,
Kevin

*From:* Juan Antonio Osorio [jaosor...@gmail.com]
*Sent:* Tuesday, April 05, 2016 5:16 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M > wrote:


This sounds suspiciously like, "how do you get a secret to the
instance to get a secret from the secret store" issue :)

Yeah, sounds pretty familiar. We were using the nova hooks mechanism 
for this purpose, but it was deprecated recently. So, bummer :/



Nova instance user spec again?

Thanks,
Kevin



Yep, and we need a solution.  I think the right solution is a keypair 
generated on the instance, with the public key posted by the instance to 
the hypervisor and stored with the instance data in the database.  I 
wrote that to the mailing list earlier today.


A basic rule of a private key is that it never leaves the machine on 
which it is generated.  The rest falls out from there.


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-05 Thread Adam Young

On 04/05/2016 09:01 AM, Steven Hardy wrote:

On Tue, Apr 05, 2016 at 02:07:06PM +0300, Juan Antonio Osorio wrote:

On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy <sha...@redhat.com> wrote:

  On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
  > I finally have enough understanding of what is going on with Tripleo
  to
  > reasonably discuss how to implement solutions for some of the main
  security
  > needs of a deployment.
  >
  >
  > FreeIPA is an identity management solution that can provide support
  for:
  >
  > 1. TLS on all network communications:
  >    A. HTTPS for web services
  >    B. TLS for the message bus
  >    C. TLS for communication with the Database.
  > 2. Identity for all Actors in the system:
  >   A.  API services
  >   B.  Message producers and consumers
  >   C.  Database consumers
  >   D.  Keystone service users
  > 3. Secure  DNS DNSSEC
  > 4. Federation Support
  > 5. SSH Access control to Hosts for both undercloud and overcloud
  > 6. SUDO management
  > 7. Single Sign On for Applications running in the overcloud.
  >
  >
  > The main pieces of FreeIPA are
  > 1. LDAP (the 389 Directory Server)
  > 2. Kerberos
  > 3. DNS (BIND)
  > 4. Certificate Authority (CA) server (Dogtag)
  > 5. WebUI/Web Service Management Interface (HTTPD)
  >
  > Of these, the CA is the most critical.  Without a centralized CA, we
  have no
  > reasonable way to do certificate management.
  >
  > Now, I know a lot of people have an allergic reaction to some, maybe
  all, of
  > these technologies. They should not be required to be running in a
  > development or testbed setup.  But we need to make it possible to
  secure an
  > end deployment, and FreeIPA was designed explicitly for these kinds of
  > distributed applications.  Here is what I would like to implement.
  >
  > Assuming that the Undercloud is installed on a physical machine, we
  want to
  > treat the FreeIPA server as a managed service of the undercloud that
  is then
  > consumed by the rest of the overcloud. Right now, there are conflicts
  for
  > some ports (8080 used by both swift and Dogtag) that prevent a drop-in
  run
  > of the server on the undercloud controller.  Even if we could
  deconflict,
  > there is a possible battle between Keystone and the FreeIPA server on
  the
  > undercloud.  So, while I would like to see the ability to run the
  FreeIPA
  > server on the Undercloud machine eventually, I think a more realistic
  > deployment is to build a separate virtual machine, parallel to the
  overcloud
  > controller, and install FreeIPA there. I've been able to modify
  Tripleo
  > Quickstart to provision this VM.

  IMO these services shouldn't be deployed on the undercloud - we only
  support a single node undercloud, and atm it's completely possible to
  take
  the undercloud down without any impact to your deployed cloud (other
  than
  losing the ability to manage it temporarily).

This is fair enough, however, for CI purposes, would it be acceptable to
deploy it there? Or where do you recommend we have it?

We're already well beyond capacity in CI, so to me this seems like
something that's probably appropriate for a third-party CI job?

To me it just doesn't make sense to integrate these pieces on the
undercloud, and integrating it there just because we need it available for
CI purposes seems like a poor argument, because we're not testing a
representative/realistic environment.

If we have to wire this in to TripleO CI I'd favor spinning up an extra
node with the FreeIPA pieces in, e.g a separate Heat stack (so, e.g the
nonha job takes 3 nodes, a "freeipa" stack of 1 and an overcloud of 2).
So, this is actually what I proposed.  If you reread my original, put 
the emphasis on the first part:


"Assuming that the Undercloud is installed on a physical machine, we 
want to  treat the FreeIPA server as a managed service of the undercloud 
that is then  consumed by the rest of the overcloud." Running it on the 
Undercloud machine was only


"I would like ...  ability ... eventually"

As I said, with quickstart, I have the ability to deploy an additional 
VM and throw IPA on there.  I have it all set with Ansible.  This 
machine could avoid using Puppet itself.


But, it is possible to install IPA using Puppet, and we could do that 
too, it's just new code to be written.


The ability to run with an existing IPA server is also important. Either 
way, though, what is important is that IPA be available, or

[openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-05 Thread Adam Young
We have a use case where we want to register a newly spawned Virtual 
machine with an identity provider.


Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a 
secure way to do this.  Injecting files does not provide a path that 
cannot be seen by other VMs or machines in the system.


For our use case, a short lived One-Time-Password is sufficient, but for 
others, I think asymmetric key generation makes more sense.


Is the following possible:

1.  In cloud-init, the VM generates a Keypair, then notifies the Nova 
infrastructure (somehow) that it has done so.


2.  Nova Compute reads the public Key off the device and sends it to 
conductor, which would then associate the public key with the server?


3.  A third party system could then validate the association of the 
public key and the server, and build a work flow based on some signed 
document from the VM?






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FreeIPA integration

2016-04-02 Thread Adam Young
I finally have enough understanding of what is going on with Tripleo to 
reasonably discuss how to implement solutions for some of the main 
security needs of a deployment.



FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
   A. HTTPS for web services
   B. TLS for the message bus
   C. TLS for communication with the Database.
2. Identity for all Actors in the system:
  A.  API services
  B.  Message producers and consumers
  C.  Database consumers
  D.  Keystone service users
3. Secure DNS (DNSSEC)
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we 
have no reasonable way to do certificate management.


Now, I know a lot of people have an allergic reaction to some, maybe 
all, of these technologies. They should not be required to be running in 
a development or testbed setup.  But we need to make it possible to 
secure an end deployment, and FreeIPA was designed explicitly for these 
kinds of distributed applications.  Here is what I would like to implement.


Assuming that the Undercloud is installed on a physical machine, we want 
to treat the FreeIPA server as a managed service of the undercloud that 
is then consumed by the rest of the overcloud. Right now, there are 
conflicts for some ports (8080 used by both swift and Dogtag) that 
prevent a drop-in run of the server on the undercloud controller.  Even 
if we could deconflict, there is a possible battle between Keystone and 
the FreeIPA server on the undercloud.  So, while I would like to see the 
ability to run the FreeIPA server on the Undercloud machine eventually, I 
think a more realistic deployment is to build a separate virtual 
machine, parallel to the overcloud controller, and install FreeIPA 
there. I've been able to modify Tripleo Quickstart to provision this VM.


I was also able to run FreeIPA in a container on the undercloud machine, 
but this is, I think, not how we want to migrate to a container based 
strategy. It should be more deliberate.



While the ideal setup would be to install the IPA layer first, and 
create service users in there, this produces a different install path 
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right 
approach is to run the overcloud deploy, then "harden" the deployment 
with the FreeIPA steps.



The IdM team did just this last summer in preparing for the Tokyo 
summit, using Ansible and Packstack.  The Rippowam project 
https://github.com/admiyo/rippowam was able to fully lock down a 
Packstack based install.  I'd like to reuse as much of Rippowam as 
possible, but called from Heat Templates as part of an overcloud 
deploy.  I do not really want to re-implement Rippowam in Puppet.


So, big question: is Heat->ansible (instead of Puppet) for an overcloud 
deployment an acceptable path?  We are talking Ansible 1.0 Playbooks, 
which should be relatively straightforward ports to 2.0 when the time comes.


Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the allocated VM
3. Register the compute nodes and the controller as IPA clients
4. Convert service users over to LDAP backed services, complete with 
necessary kerberos steps to do password-less authentication.
5. Register all web services with IPA and allocate X509 certificates for 
HTTPS.
6. Set up Host based access control (HBAC) rules for SSH access to 
overcloud machines.



When we did the Rippowam demo, we used the Proton driver and Kerberos 
for securing the message broker.  Since Rabbit seems to be the tool of 
choice,  we would use X509 authentication and TLS for encryption.  ACLs, 
for now, would stay in the flat file format.  In the future, we might 
choose to use the LDAP backed ACLs for Rabbit, as they seem far more 
flexible.  Rabbit does not currently support Kerberos for either 
authentication or encryption, but we can engage the upstream team to 
implement it if desired in the future, or we can shift to a Proton based 
deployment if Kerberos is essential for a deployment.



There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the 
original implementation.


2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue 
thus far has been Certificate management.  This provides a Dogtag server 
for Certs.


3. Rob Crittenden has been working on auto-registration of virtual 
machines with an Identity Provider upon launch.  This gives that effort 
an IdM to use.


4. Keystone can make use of the Identity store for administrative users 
in their own domain.


5. 

Re: [openstack-dev] [nova] API priorities in Newton

2016-03-30 Thread Adam Young

On 03/30/2016 04:16 PM, Andrew Laski wrote:


On Wed, Mar 30, 2016, at 03:54 PM, Matt Riedemann wrote:


On 3/30/2016 2:42 PM, Andrew Laski wrote:


On Wed, Mar 30, 2016, at 03:26 PM, Sean Dague wrote:

During the Nova API meeting we had some conversations about priorities,
but this feels like the thing where a mailing list conversation is more
inclusive to get agreement on things. I think we need to remain focused
on what API related work will have the highest impact on our users.
(some brain storming was here -
https://etherpad.openstack.org/p/newton-nova-api-ideas). Here is a
completely straw man proposal on priorities for the Newton cycle.

* Top Priority Items *

1. API Reference docs in RST which include microversions (drivers: me,
auggy, annegentle) -
https://blueprints.launchpad.net/nova/+spec/api-ref-in-rst
2. Discoverable Policy (drivers: laski, claudio) -

Selfishly I'd like Laski to be as focused on cells v2 as possible, but
he does have a spec up related to this.

At the midcycle I agreed to look into the oslo.policy work involved with
this because I have some experience there. I wasn't planning to be much
involved beyond that, and Claudiu has a spec up for the API side of it.
But in my mind there's a chain backwards from capabilities API to
discoverable policy and I want to move the oslo.policy work ahead
quickly if I can to unblock that.


There is a CLI that does something like what you want already:

https://review.openstack.org/#/c/170978/

You basically want a server-based version of that, one that returns all the 
"true" values.






https://review.openstack.org/#/c/289405/
3. ?? (TBD)

I think realistically 3 priority items is about what we can sustain, and
I'd like to keep it there. Item #3 has a couple of options.

Agree to keep the priority list as small as possible, because this is
just a part of our overall backlog of priorities.


* Lower Priority Background Work *

- POC of Gabbi for additional API validation

I'm assuming cdent would be driving this, and he's also working on the
resource providers stuff for the scheduler, but might be a decent side
project for him to stay sane.


- Microversion Testing in Tempest (underway)

How much coverage do we have today? This could be like novaclient where
people just start hacking on adding tests for each microversion
(assuming gmann would be working on this).


- Some of the API WG recommendations

* Things we shouldn't do this cycle *

- Tasks API - not because it's not a good idea, but because I think
until we get ~ 3 core team members agreed that it's their number #1 item
for the cycle, it's just not going to get enough energy to go somewhere
useful. There are some other things on deck that we just need to clear
first.

Agreed. I would love to drive this forward but there are just too many
other areas to focus on right now.

+1




- API wg changes for error codes - we should fix that eventually, but
that should come as a single microversion to minimize churn. That's
coordination we don't really have the bandwidth for this cycle.

+1


* Things we need to decide this cycle *

- When are we deleting the legacy v2 code base in tree?

Do you have some high-level milestone thoughts here? I thought there was
talk about not even thinking about this until Barcelona?


* Final priority item *

For the #3 priority item one of the things that came up today was the
structured errors spec by the API working group. That would be really
nice... but in some ways really does need the entire new API reference
docs in place. And maybe is better in O.

One other issue that we've been blocking on for a while has been
Capabilities discovery. Some API proposed adds like live resize have
been conceptually blocked behind this one. Once upon a time there was a
theory that JSON Home was a thing, and would slice our bread and julienne
our fries, and solve all this. But it's a big thing to get right, and
JSON Home has an unclear future. And, we could serve our users pretty
well with a much simpler take on capabilities. For instance

   GET /servers/{id}/capabilities

{
  "capabilities": {
    "resize": true,
    "live-resize": true,
    "live-migrate": false,
    ...
  }
}

Effectively an actions map for servers. Lots of details would have to be
sorted out on this one, clearly needs a spec, however I think that this
would help unstick some other things people would like in Nova, without
making our interop story terrible. This would need a driver for this
effort.
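As pure illustration of how clients might consume such a map, a guard function over the straw-man payload above (the endpoint and payload shape are only Sean's proposal, not an existing Nova API):

```python
class CapabilityError(Exception):
    pass


def require_capability(capabilities_doc, action):
    """Guard an action against the straw-man capabilities payload.

    capabilities_doc is the parsed JSON body of the hypothetical
    GET /servers/{id}/capabilities call.
    """
    caps = capabilities_doc.get("capabilities", {})
    if not caps.get(action, False):
        raise CapabilityError(
            "server does not support %r (advertised: %s)"
            % (action, sorted(k for k, v in caps.items() if v)))


doc = {"capabilities": {"resize": True,
                        "live-resize": True,
                        "live-migrate": False}}
require_capability(doc, "resize")  # supported: no exception
try:
    require_capability(doc, "live-migrate")
except CapabilityError as exc:
    print(exc)
```

A client could make such a check before submitting a live-resize request, instead of submitting it and interpreting an error.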

I think this ties directly into the discoverable policy item above. I
may be misunderstanding this proposal but I would expect that it has
some link with what a user is allowed to do. Without some improvements
to the policy handling within Nova this is not currently possible.

Agree with Laski here.




Everything here is up for discussion. This is a summary of some of what
was in the meeting, plus some of my own thoughts. Please chime in on any
of this. It would be good 

Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young

On 03/29/2016 06:21 PM, Rich Megginson wrote:

On 03/29/2016 04:19 PM, Adam Young wrote:


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb 



spec is for the rspec unit testing.  Do you mean 
http://git.openstack.org/cgit/openstack/puppet-keystone/tree/manifests/init.pp 
?


DOH.  I've done that a few times.  That really should be renamed to test 
somehow.


so, yes, I mean manifests/init.pp





I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450 

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453 




If they are unset, we should default them to 'admin' and 'Default' on new 
installs, and leave them blank on old installs.


Can anyone point me in the right direction?










Re: [openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young

On 03/29/2016 07:43 PM, Emilien Macchi wrote:

On Tue, Mar 29, 2016 at 6:19 PM, Adam Young <ayo...@redhat.com> wrote:

Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb

I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453


If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?

You'll need to patch puppet-keystone/manifests/init.pp (and unit
tests) (using $::os_service_default for the default value, which will
take the default in keystone, blank).

Important note:
If for whatever reason, puppet-keystone providers need these 2 options
loaded in the environment, please also patch [1]. Because after
initial deployment, Puppet catalog will read from /root/openrc file to
connect to Keystone API.

Ignore my last comment if you don't need these 2 params during
authentication when using openstackclient (in our providers).

So, while they do, it is for a completely unrelated reason.

The two values above make it possible to distinguish which "admin" role 
assignments grant cloud-wide administration as opposed to project-specific 
administration.  See https://bugs.launchpad.net/keystone/+bug/968696 for context.
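To visualize the end state, the resulting keystone.conf fragment would look something like the following. The [resource] section name is an assumption based on where the linked config.py registers these options; verify against the release in use.

```ini
# keystone.conf -- section name assumed, check config.py for your release
[resource]
admin_project_name = admin
admin_project_domain_name = Default
```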








[1] 
https://github.com/openstack/puppet-openstack_extras/blob/master/manifests/auth_file.pp

Let us know if you need help,

Thanks!





[openstack-dev] [puppet-keystone] Setting additional config options:

2016-03-29 Thread Adam Young


Somewhere in here:

http://git.openstack.org/cgit/openstack/puppet-keystone/tree/spec/classes/keystone_spec.rb

I need to set these options:


admin_project_name
admin_project_domain_name

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n450
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n453


If they are unset, we should default them to 'admin' and 'Default' on new
installs, and leave them blank on old installs.


Can anyone point me in the right direction?




[openstack-dev] [tripleo] Policy Managment and distribution.

2016-03-29 Thread Adam Young

Keystone has a policy API, but no one uses it.  It allows us to associate a
policy file with an endpoint. Upload a json blob, it gets a uuid.  Associate
the UUID with the endpoint.  It could also be associated with a service, and
then it is associated with all endpoint for that service unless explicitly
overwritten.

Assuming all of the puppet modules for all of the services support managing
the policy files, how hard would it be to synchronize between the database
and what we distribute to the nodes?  Something along the lines of:  if I
put a file in this directory, I want puppet to use it the next time I do a
redeploy, and I also want it uploaded to Keystone and associate with the
endpoint?

As a start, I want to be able to replace the baseline policy.json file with
the cloudsample.  We ship both.


We have policy.pp in Puppet Keystone for this use case.
In tripleO, we could create a parameter that users would use to
configure specific policies. It would be a hash and puppet will
manage the policies.  This would handle the Keystone case, but we need
to customize all of the policy files, for all of the services, for
example, to add the is_admin_project check.  I'd like to get this mechanism
in place before I start that work, so I can easily test changes.



The workflow needs to be something like this:

Bring up Keystone with Bootstrap.

For each service:
Fetch its  policy file from the RPM location.
Upload to Keystone.
Set the service-policy association in Keystone.
Deploy the service.
Copy over the policy file from Keystone.


In order to make a change, say to specialize for an endpoint:

Upload new policy file to Keystone
Set the Endpoint Association for the Policy File
run overcloud deploy and sync all of the policy files down again.
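Both workflows above lean on the Keystone policy API. As a sketch of the request shapes involved, the helpers below just build the REST calls as data; the routes follow the v3 policy and OS-ENDPOINT-POLICY APIs as I understand them, and should be treated as assumptions to verify against the Keystone API reference.

```python
import json


def upload_policy_request(policy_blob):
    """POST a policy blob; Keystone returns it with a generated UUID."""
    return {"method": "POST",
            "path": "/v3/policies",
            "body": {"policy": {"type": "application/json",
                                "blob": json.dumps(policy_blob)}}}


def associate_endpoint_request(policy_id, endpoint_id):
    """Associate an uploaded policy with a specific endpoint."""
    return {"method": "PUT",
            "path": "/v3/policies/%s/OS-ENDPOINT-POLICY/endpoints/%s"
                    % (policy_id, endpoint_id)}


def fetch_effective_policy_request(endpoint_id):
    """Answer 'what is the policy for this endpoint?' at sync-down time."""
    return {"method": "GET",
            "path": "/v3/endpoints/%s/OS-ENDPOINT-POLICY/policy" % endpoint_id}
```

The sync-down step in the deploy workflow is then just issuing the third request per endpoint and writing the returned blob to the node's policy.json path.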

We don't have to use the Policy API, but we would end up re-implementing some 
aspect of it.
By using the Keystone API, we also provide a way to query "what is the policy for 
this endpoint?"

I don't think this would be a radical departure from what the Rest of OpenStack 
would do.

I can see Kolla using the same approach, or something like it.

Feedback, before I write this up as a spec?




Re: [openstack-dev] [ptl][kolla][release] Deploying the big tent

2016-03-26 Thread Adam Young

On 03/26/2016 12:27 PM, Steven Dake (stdake) wrote:

Hey fellow PTLs and core reviewers of  those projects,

Kolla at present deploys  the compute kit, and some other services 
that folks have added over time including other projects like Ironic, 
Heat, Mistral, Murano, Magnum, Manila, and Swift.


One of my objectives from my PTL candidacy was to deploy the big tent, 
and I saw that we were not as successful as I had planned in Mitaka at 
filling out the big tent.


While the Kolla core team is large, and we can commit to maintaining 
big tent projects that are deployed, we are at capacity every 
milestone of every cycle implementing new features that the various 
big tent services should conform to.  The idea of a plugin 
architecture for Kolla where projects could provide their own plugins 
has been floated, but before we try that, I'd prefer that the various 
teams in OpenStack with an interest in having their projects consumed 
by Operators involve themselves in containerizing their projects.


Again, once the job is done, the Kolla community will continue to 
maintain these projects, and we hope you will stay involved in that 
process.


It takes roughly four 4-hour blocks to learn the implementation 
architecture of Kolla and probably another two 4-hour blocks to get a 
good understanding of the Kolla deployment workflow.  Some projects 
(like Neutron for example) might fit outside this norm because 
containerizing them and deploying them is very complex.  But we have 
already finished the job on what we believe are the hard projects.


My ask is that PTLs take responsibility or recruit someone from their 
respective community to participate in the implementation of Kolla 
deployment for their specific project.


I'll take on Keystone, but only if you promise to stop using "ask" as a 
noun and instead use ye olde English "Request" in its place.




Only with your help can we make the vision of a deployment system that 
can deploy the big tent a reality.


Regards
-steve




Re: [openstack-dev] [OpenStack-Dev][Manila] BP https://blueprints.launchpad.net/manila/+spec/access-group

2016-03-25 Thread Adam Young

On 03/25/2016 08:43 AM, nidhi.h...@wipro.com wrote:


Hi All,

A gentle reminder..

Could you please share your thoughts on the approach proposed here ..

https://etherpad.openstack.org/p/access_group_nidhimittalhada

Thanks

Nidhi

*From:* Nidhi Mittal Hada (Product Engineering Service)
*Sent:* Wednesday, March 09, 2016 2:22 PM
*To:* 'OpenStack Development Mailing List (not for usage questions)' 

*Cc:* 'bswa...@netapp.com' ; 'Ben Swartzlander' 

*Subject:* RE: [OpenStack-Dev][Manila] BP 
https://blueprints.launchpad.net/manila/+spec/access-groups


Hi All,

This is just a gentle reminder to the previous mail ..

PFA is revised doc.

Same is pasted here also.

https://etherpad.openstack.org/p/access_group_nidhimittalhada

Kindly share your thoughts on this..

Thanks

Nidhi



Nidhi,


It seems like this is the resource level access control that people have 
been asking for in many projects.  A few thoughts:


Deny rules are tricky.  I would prefer an access control approach that 
denied all by default, and then only allowed explicit adds.
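A default-deny check of the kind suggested here is small to state; the following is only an illustration with invented names, not Manila's actual access model:

```python
def is_allowed(user_groups, resource_allowed_groups):
    """Default deny: access exists only via an explicit group allow.

    No deny rules are needed -- anything not explicitly allowed is
    denied, which avoids reasoning about allow/deny rule precedence.
    """
    return bool(set(user_groups) & set(resource_allowed_groups))
```

Removing a user from their groups then revokes everything at once, which also addresses the departed-user cleanup concern below.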



The idea of access groups is much like the roles we have in Keystone.  
With domain specific roles, we have the potential to do some of this, 
but at a coarser level.  I wonder if we could unify the approach, such 
that the roles are managed in Keystone, and then could apply to things 
other than items in Manila?


In general, I do not like to have individual users in access lists, 
especially when they might be the only person that can clean up a 
resource, and then they leave.  That means things fall back on "Admin".  
Ideally, all access would be controlled via groups membership.



What you are writing is really similar to the oslo-policy enforcement 
rules.  Are you planning on using Oslo to enforce?


Sorry for jumping in to the middle here, but you did ask for feedback!


*From:* Nidhi Mittal Hada (Product Engineering Service)
*Sent:* Friday, February 26, 2016 3:22 PM
*To:* 'OpenStack Development Mailing List (not for usage questions)' 
>
*Cc:* 'bswa...@netapp.com' >; 'Ben Swartzlander' >
*Subject:* [OpenStack-Dev][Manila] BP 
https://blueprints.launchpad.net/manila/+spec/access-groups


Hi Manila Team,

I am working on

https://blueprints.launchpad.net/manila/+spec/access-groups

For this I have created initial document as attached with the mail.

It contains DB CLI REST API related changes.

Could you please have a look and share your opinion.

Kindly let me know, if there is some understanding gap,

or something I have missed to document or

share your comments in general to make it better.

*Thank you.*

*Nidhi Mittal Hada*

*Architect | PES / COE*– *Kolkata India*

*Wipro Limited*

*M*+91 74 3910 9883 | *O* +91 33 3095 4767 | *VOIP* +91 33 3095 4767





