On 10/09/2014 09:31 AM, John Dennis wrote:
On 10/08/2014 04:58 PM, Adam Young wrote:
When gyee posted his X509 server-side auth plugin patch, the feedback
we gave was that it should use the mapping code from Federation to
transform the environment variables set by the web server into the
Keystone user ID, username, domain name, and so forth.

The PKI token format currently allows for a single signing cert.  I have
a proposal to allow for multiple signers.  One issue, though, is how to
map from the certificate's signer-info to the Keystone server that
signed the data.  Signer-info is part of the CMS message format, and can
be used to uniquely identify the certificate that signed the document.
From the signer-info, we can fetch the certificate.
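
To make that lookup concrete: in CMS, a SignerInfo identifies the
signing certificate by issuer name plus serial number, so a keyed store
is enough to fetch the cert. A minimal, purely illustrative Python
sketch (all class and field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignerIdentifier:
    """The issuer+serial pair a CMS SignerInfo uses to name a cert."""
    issuer: str   # DN of the issuing CA
    serial: int   # certificate serial number

class CertStore:
    """Keyed cert lookup, as an extended Keystone cert API might expose."""
    def __init__(self):
        self._certs = {}

    def add(self, ident: SignerIdentifier, cert_pem: str) -> None:
        self._certs[ident] = cert_pem

    def fetch(self, ident: SignerIdentifier) -> str:
        # An unknown signer means we cannot validate the signature at all.
        if ident not in self._certs:
            raise LookupError(f"no certificate for signer {ident}")
        return self._certs[ident]

store = CertStore()
ident = SignerIdentifier(issuer="CN=Keystone CA", serial=1001)
store.add(ident, "-----BEGIN CERTIFICATE-----\n...")
print(store.fetch(ident).startswith("-----BEGIN"))  # True
```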

So, we could build a system that allowed us to fetch multiple certs for
checking signatures.  But then the question is: which cert maps to "the
entity authorized to sign for this data"?

OpenStack lacks a way to enumerate its systems, endpoints or otherwise.
I'm going to propose that we create a service domain, and that any
system responsible for signing a document have a user created inside
that domain.  I think we want to make the endpoint ID match the user ID
for endpoints, and probably something comparable for Nova Compute
services.
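
A rough, in-memory illustration of that convention (all names are
hypothetical):

```python
# Every signing system gets a Keystone user in a dedicated service
# domain; for endpoints, the user ID is the endpoint ID itself.
SERVICE_DOMAIN = "service"

users = {}  # user_id -> {"name": ..., "domain": ...}

def register_signing_endpoint(endpoint_id: str, name: str) -> None:
    # The endpoint ID doubles as the user ID, so identifying the
    # endpoint that signed a message also identifies the Keystone
    # user to authorize.
    users[endpoint_id] = {"name": name, "domain": SERVICE_DOMAIN}

register_signing_endpoint("endpoint-1", "nova-compute-01")
print(users["endpoint-1"])  # {'name': 'nova-compute-01', 'domain': 'service'}
```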

This means we can use the associated Keystone user to determine which
Compute node signed a message.  It gives us PKI-based, asymmetric Oslo
message signing.

This same abstraction should be extended to Kite for symmetric keys.
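
For contrast, a stdlib-only sketch of the symmetric case Kite covers:
with a shared key, HMAC signing and verification use the same secret,
so any verifier could also forge messages, which is exactly what the
asymmetric PKI scheme above avoids. The key value here is hypothetical.

```python
import hashlib
import hmac

# Stand-in for a key a Kite-like service would provision to both parties.
shared_key = b"key provisioned by a Kite-like service"

def sign(message: bytes) -> str:
    # Symmetric signature: HMAC over the message with the shared key.
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(message), signature)

sig = sign(b"oslo message payload")
print(verify(b"oslo message payload", sig))  # True
print(verify(b"tampered payload", sig))      # False
```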

In order to convert the certificate data to the Keystone user ID, we
can use the mapping mechanism from Federation, just as we are planning
for the X509 auth plugin.
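
A heavily simplified sketch of what such a mapping could look like.
This is only an illustration in the spirit of the Federation mapping
rules, not Keystone's actual mapping engine; the attribute names mimic
the SSL_CLIENT_* variables a web server sets, and the rule shape is
hypothetical.

```python
# One rule: if the cert issuer is a trusted CA, map the subject CN to a
# user in the service domain.
rule = {
    "remote": {"type": "SSL_CLIENT_I_DN", "any_one_of": ["CN=Keystone CA"]},
    "local": {"user_attr": "SSL_CLIENT_S_DN_CN", "domain": "service"},
}

def map_cert_to_user(env: dict, rule: dict):
    remote = rule["remote"]
    if env.get(remote["type"]) not in remote["any_one_of"]:
        return None  # issuer is not trusted for this mapping
    return {"name": env[rule["local"]["user_attr"]],
            "domain": rule["local"]["domain"]}

env = {"SSL_CLIENT_I_DN": "CN=Keystone CA",
       "SSL_CLIENT_S_DN_CN": "nova-compute-01"}
print(map_cert_to_user(env, rule))
# {'name': 'nova-compute-01', 'domain': 'service'}
```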

One thing I want to make explicit, and get some validation on from the
community: is it acceptable to say that there needs to be a mappable
link between ALL X509 certificates distributed by a certain CA for a
certain domain and the users in that domain?  It seems to me to be
comparable to the LDAP constraints.  Is this a reasonable assumption?
If not, it seems like the X509 mechanism is really not much more than a
naked public key.
I don't fully understand your proposal, perhaps because a few details
were omitted.  But here are my thoughts; let me know where I might have
misunderstood.

The mapping seems pretty straightforward to me, thus I'm not sure I see
the need for an extra service and the associated complexity.
No added service.  The token validation code runs in the auth_token
middleware, i.e. in a remote service like Nova.

You should be able to extract the signer subject from the signing data,
as well as the signer's issuer information.  Given that, it's a simple
lookup whose mapping can be trivially managed using any of the
key/value tables available to Keystone; one only needs to populate that
table in some authoritative way, but that's mostly a deployment issue.
Yeah, so the first step is an extension to the current cert API to
fetch a certificate based on signer info.  Then we can validate the
token or other document.

Second is to look up the user from the cert, which will use the same
mapping as the Federation-based X509 plugin gyee is working on.

Third is to let the service user query the roles for the
user-in-domain that signed the document.

The last two steps can be combined by reusing the existing token
API...or maybe the validate-token call.  Basically, we need to be able
to get the set of roles for the user-in-domain that signed the document.

We have the ability to fetch policy per endpoint, so the service user
can fetch the policy to determine whether the signer has the authority
to actually sign for the data in the token.
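
The steps above can be tied together as a sketch; every callable here
stands in for a Keystone or auth_token call, and all names are
hypothetical.

```python
def validate_signed_document(doc, fetch_cert, map_user, get_roles, allowed):
    cert = fetch_cert(doc["signer_info"])            # 1. cert via signer info
    user = map_user(cert)                            # 2. cert -> user-in-domain
    roles = get_roles(user["name"], user["domain"])  # 3. roles in that domain
    if not allowed(roles, doc["data"]):              # 4. per-endpoint policy check
        raise PermissionError("signer not authorized for this document")
    return user, roles

# Toy stand-ins for the real lookups, just to show the flow:
user, roles = validate_signed_document(
    {"signer_info": "sig-1", "data": "token payload"},
    fetch_cert=lambda ident: "PEM for " + ident,
    map_user=lambda cert: {"name": "nova-compute-01", "domain": "service"},
    get_roles=lambda name, domain: {"compute-signer"},
    allowed=lambda roles, data: "compute-signer" in roles,
)
print(user["name"], sorted(roles))  # nova-compute-01 ['compute-signer']
```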

I also don't follow what you mean by "not much more than a naked Public
Key", can you elaborate?
Sure.  If I can upload an X509 cert to the record for Keystone user
"jdennis" even though that cert was actually issued to me, then I can
authenticate to Keystone as "jdennis".  It puts all of the weight of
authentication onto Keystone, without making use of the signing
mechanism in the cert itself.
