On Wed, Jan 11, 2017 at 4:21 PM, Matt Wringe <[email protected]> wrote:
> ----- Original Message -----
> > From: "Clayton Coleman" <[email protected]>
> > To: "Matt Wringe" <[email protected]>
> > Cc: "John Mazzitelli" <[email protected]>, "users" <[email protected]>
> > Sent: Wednesday, 11 January, 2017 3:47:30 PM
> > Subject: Re: cluster-reader and secrets
> >
> > We're actively looking to remove client certificates
>
> I think removing it from the OpenShift components makes sense; forcing
> everything to use tokens is a better idea.

Let me correct my statement slightly: we're trying to use client
certificates *only where we have to*, i.e. nodes and a few core
components. Everyone running in a pod should *always* use a client token
unless they have a weird case, because we have a delivery mechanism AND a
rotation mechanism in place, and can centrally manage it.

> Outside of OpenShift, mutual authentication based on certificates is an
> industry standard and is widely used. It would be awesome to be able to
> have OpenShift generate these types of certificates for our pods
> instead of relying on creating these certificates in an external
> manner.

Yes, this has been discussed as a pair for service serving certs.
However, it's not going to happen in the near term, so I don't want to
get over-dependent on it.

> Note: it sounds like this might not be a good idea for the agent, so we
> can rely on passwords only. But for things like securing communication
> between a server and a database, and between the nodes in a database
> cluster, being able to use mutual authentication with autogenerated
> certificates would make things a bit easier with regards to how they
> are deployed.

Agreed - where software already works well with certs there are
advantages. Maciej has a card to look at how we can make rotation of
service serving certs easier, so definitely follow along with his work.

> > - if you are running in a container (you should be) you should be
> > using your service account token to connect to internal APIs.
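The token approach described above can be sketched minimally: Kubernetes mounts a service account token into every pod at a well-known path, and a client sends it as a Bearer credential. This is an illustrative helper only (`bearer_header` is not part of any OpenShift client library):

```python
# Minimal sketch: build the Authorization header a pod-hosted client
# would send to internal APIs, using the service account token that
# Kubernetes mounts into every pod at the standard path below.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def bearer_header(token_path=TOKEN_PATH):
    """Read the mounted service account token and return the HTTP
    header dict used to authenticate against the API server."""
    with open(token_path) as f:
        token = f.read().strip()
    return {"Authorization": "Bearer " + token}
```

Because the platform both delivers and rotates this token, re-reading it from the mount rather than caching it forever keeps a long-running client valid across rotations.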
> We only use tokens to communicate with internal APIs, nothing is
> changing with this.
>
> > We aren't likely to want to create multiple mechanisms of client
> > certificate distribution.
> >
> > If you want the prometheus endpoint to only trust connections from
> > the agent, why wouldn't you use a unique shared secret?
>
> We can configure it so that the prometheus endpoint has a secret which
> is required to access its endpoint and have the agent read that. That
> is a viable option.
>
> > On Wed, Jan 11, 2017 at 2:29 PM, Matt Wringe <[email protected]> wrote:
> >
> > > ----- Original Message -----
> > > > From: "Clayton Coleman" <[email protected]>
> > > > To: "Matt Wringe" <[email protected]>
> > > > Cc: "John Mazzitelli" <[email protected]>, "users" <[email protected]>
> > > > Sent: Wednesday, 11 January, 2017 1:55:43 PM
> > > > Subject: Re: cluster-reader and secrets
> > > >
> > > > Why would you not use the service account token?
> > >
> > > I don't understand what the service account token has to do with a
> > > client certificate.
> > >
> > > Perhaps you mean service certificates? The certificates generated
> > > by either the oc commands or the auto-generated ones are not valid
> > > for client authentication.
> > >
> > > As such we cannot have a pod configure its jolokia/prometheus
> > > endpoint to only trust connections from the agent.
> > >
> > > Client certificate authentication is already being used in various
> > > places within OpenShift (the API Proxy with the middleware Jolokia
> > > instances, Heapster with the HPA, etc). It would be much better if
> > > we could generate a client certificate via the existing tools, but
> > > it looks like this is not possible or planned. So the only path
> > > forward for this would be to add it into the OpenShift code
> > > directly and pass the certificate to the agent.
> > > The current server certificates are invalid for client
> > > authentication and cannot be used.
> > >
> > > > On Wed, Jan 11, 2017 at 1:24 PM, Matt Wringe <[email protected]> wrote:
> > > >
> > > > > We are also in a situation where we would like to have a client
> > > > > certificate for the agent.
> > > > >
> > > > > Ideally this could have been something which would have been
> > > > > provided by OpenShift (either through the oc commands, or via
> > > > > the system-generated certificates:
> > > > > https://github.com/openshift/origin/issues/10085)
> > > > >
> > > > > Is there any option to be able to do this nicely? Other than
> > > > > having to modify the OpenShift code to generate the client
> > > > > certificate like it does for other components?
> > > > >
> > > > > ----- Original Message -----
> > > > > > From: "Clayton Coleman" <[email protected]>
> > > > > > To: "John Mazzitelli" <[email protected]>
> > > > > > Cc: "users" <[email protected]>
> > > > > > Sent: Wednesday, 11 January, 2017 11:26:12 AM
> > > > > > Subject: Re: cluster-reader and secrets
> > > > > >
> > > > > > We would create a special role specifically for the agent.
> > > > > >
> > > > > > On Wed, Jan 11, 2017 at 10:19 AM, John Mazzitelli <[email protected]> wrote:
> > > > > >
> > > > > > OK, so let me ask for suggestions. The use-case is as follows:
> > > > > >
> > > > > > The Hawkular OpenShift Agent has one job - collect metrics
> > > > > > from Jolokia and Prometheus endpoints deployed within all
> > > > > > pods in all projects on a node and store the metric data in
> > > > > > the Hawkular-Metrics instance. As new pods are deployed, and
> > > > > > old pods are deleted, the agent automatically starts/stops
> > > > > > its collect-and-store for those pods.
> > > > > > A pod that has metrics to be collected will associate itself
> > > > > > with a configmap that contains details to tell the agent what
> > > > > > to monitor. So, for example, a pod containing a WildFly
> > > > > > server can associate itself with a configmap that defines
> > > > > > what JMX attributes to collect, what its Jolokia port/path
> > > > > > is, etc. The agent (which has the cluster-reader role)
> > > > > > detects when this pod comes online, reads that pod's
> > > > > > associated configmap information, and thus knows what the pod
> > > > > > wants monitored and starts doing it as soon as the pod is
> > > > > > deployed.
> > > > > >
> > > > > > Most likely, the Jolokia endpoints are going to be secured,
> > > > > > perhaps with username/password credentials. Rather than force
> > > > > > the pod to declare its credentials in the clear within the
> > > > > > configmap, we have it declare them by referring to OpenShift
> > > > > > secrets. So, for example, that WildFly pod can define the
> > > > > > following in its configmap:
> > > > > >
> > > > > > endpoints:
> > > > > > - type: jolokia
> > > > > >   port: 8080
> > > > > >   credentials:
> > > > > >     username: secret:my-os-secret-name/username
> > > > > >     password: secret:my-os-secret-name/password
> > > > > >
> > > > > > Assuming someone created that secret called
> > > > > > "my-os-secret-name" and it has "username" and "password"
> > > > > > keys, the agent can look up "my-os-secret-name", retrieve the
> > > > > > username and password, and thus be able to make authenticated
> > > > > > HTTP requests to that Jolokia endpoint.
> > > > > >
> > > > > > But since cluster-reader does not have "get/secrets"
> > > > > > permissions, the agent needs some other role to do this. I
> > > > > > could create a new cluster role that only has "get/secrets"
> > > > > > and assign it to the agent user.
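Such a narrowly scoped cluster role could look roughly like the following sketch. The role name is made up here, and the exact kind/apiVersion syntax varies by OpenShift version, so treat this as illustrative rather than a definitive manifest:

```yaml
# Hypothetical ClusterRole granting read-only access to secrets and
# nothing else; it would be bound to the agent's service account.
apiVersion: v1
kind: ClusterRole
metadata:
  name: agent-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
```

It could then be granted to the agent's service account, e.g. with `oadm policy add-cluster-role-to-user agent-secret-reader system:serviceaccount:<project>:<agent-sa>` (placeholders shown for the project and service account names).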
> > > > > > But I'm assuming this is not ideal??
> > > > > >
> > > > > > What other options are there to do this kind of thing? Is
> > > > > > there a different way pods can share credentials with a
> > > > > > cluster-wide agent like this?
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > Correct, the cluster-reader role is intentionally
> > > > > > > non-escalating, so it does not have access to read secrets.
> > > > > > >
> > > > > > > Global read access to secrets is not typically something
> > > > > > > you'd give a read-only user.
> > > > > > >
> > > > > > > On Wed, Jan 11, 2017 at 9:33 AM, John Mazzitelli <[email protected]> wrote:
> > > > > > >
> > > > > > > > I'm looking for a cluster role that has "get" "secrets"
> > > > > > > > enabled, and there seem to be very few - system:node is
> > > > > > > > one, but it has other perms I do not need. I assumed
> > > > > > > > cluster-reader would be able to read secrets, but that
> > > > > > > > does not seem to be the case. I was hoping I'm just doing
> > > > > > > > something wrong, but I figured I'd ask here to confirm
> > > > > > > > whether cluster-reader really can NOT get secrets.
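The `secret:<name>/<key>` indirection from the configmap example above could be resolved along these lines. This is a sketch only; `parse_secret_ref` is a hypothetical helper, not the agent's actual code:

```python
# Sketch: turn a "secret:my-os-secret-name/username"-style configmap
# value into a (secret name, key) pair that an agent would then look up
# via the API. Plain values (literal credentials) return None.
def parse_secret_ref(value):
    """Return (secret_name, key) for a 'secret:<name>/<key>' reference,
    or None when the value is not a secret reference."""
    prefix = "secret:"
    if not value.startswith(prefix):
        return None
    name, sep, key = value[len(prefix):].partition("/")
    if not sep or not name or not key:
        return None  # malformed reference; caller decides how to handle
    return (name, key)
```

The actual lookup of the named secret is then exactly the `get` on `secrets` that cluster-reader lacks, which is why the dedicated role is needed.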
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
