Kevin,

We currently do not use SSH for any of our orchestration. You have highlighted a good reason for us to avoid that wherever possible. Good catch!
Cheers,

Adrian

> On Jun 15, 2015, at 3:59 PM, Fox, Kevin M <kevin....@pnnl.gov> wrote:
>
> No, I was confused by your statement:
> "When we create a bay, we have an ssh keypair that we use to inject the ssh public key onto the nova instances we create."
>
> It sounded like you were using that keypair to inject a public key. I just misunderstood.
>
> It does raise the question, though: are you using ssh between the controller and the instance anywhere? If so, we will still run into issues when we go to try and test it at our site. Sahara currently does, and we're forced to put a floating IP on every instance. It's less than ideal...
>
> Thanks,
> Kevin
> ________________________________________
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Monday, June 15, 2015 3:17 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
>
> Kevin,
>
>> On Jun 15, 2015, at 1:25 PM, Fox, Kevin M <kevin....@pnnl.gov> wrote:
>>
>> Why not just push the ssh keypair via cloud-init? It's more firewall friendly.
>
> Nova already handles the injection of the SSH key for us. I think you meant to suggest that we use cloud-init to inject the TLS keys, right?
>
> Thanks,
>
> Adrian
>
>> Having the controller -> instance connection go via ssh has proven very problematic for us for a lot of projects. :/
>>
>> Thanks,
>> Kevin
>> ________________________________________
>> From: Adrian Otto [adrian.o...@rackspace.com]
>> Sent: Monday, June 15, 2015 11:18 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
>>
>> Tom,
>>
>>> On Jun 15, 2015, at 10:59 AM, Tom Cammann <tom.camm...@hp.com> wrote:
>>>
>>> My main issue with having the user generate the keys/certs for the kube nodes is that the keys have to be insecurely moved onto the kube nodes.
>>> Barbican can talk to heat, but heat must still copy them across to the nodes, exposing the keys on the wire. Perhaps there are ways of moving secrets correctly which I have missed.
>>
>> When we create a bay, we have an ssh keypair that we use to inject the ssh public key onto the nova instances we create. We can use scp to securely transfer the keys over the wire using that keypair.
>>
>>> I also agree that we should opt for a non-Barbican deployment first.
>>>
>>> At the summit we talked about using Magnum as a CA and signing the certificates, and we seemed to have some consensus about doing this, with the possibility of using Anchor. This would take a lot of the onus off of the user to fiddle around with openssl and craft the right signed certs safely. With Magnum as a CA, the user would generate a key/cert pair and then get the cert signed by Magnum, and the kube node would do the same. The main downside of this technique is that the user MUST trust Magnum and the administrator, as they would have access to the CA signing cert.
>>>
>>> An alternative, where the user holds the CA cert/key, is to have the user:
>>>
>>> - generate a CA cert/key (or use an existing corp one, etc.)
>>> - generate their own cert/key
>>> - sign their cert with their CA cert/key
>>> - spin up the kube cluster
>>> - each node would generate a key/cert
>>> - each node exposes this cert to be signed
>>> - user signs each cert and returns it to the node
>>>
>>> This gets quite manual unless they have a CA that the kube nodes can call into. However, this is the most secure way I could come up with.
>>
>> Perhaps we can expose a “replace keys” feature that could be used to facilitate this after initial setup of the bay. This way you could establish a trust that excludes the administrator. This approach potentially lends itself to additional automation to make the replacement process a bit less manual.
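[Editor's note: for concreteness, the user-held-CA flow described above can be sketched roughly as follows. This is a non-authoritative illustration using the third-party `cryptography` package; the common names, key sizes, and lifetimes are invented placeholders, and a real deployment might use plain openssl or Anchor instead.]

```python
# Sketch of the user-held-CA signing dance (illustrative only).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

VALIDITY = datetime.timedelta(days=365)  # made-up lifetime


def make_key():
    # Each party (the user and every kube node) generates its own key.
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_ca(common_name="user-ca"):
    # Step 1: the user generates a CA cert/key (or uses an existing corp CA).
    key = make_key()
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + VALIDITY)
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert


def make_csr(key, common_name):
    # Nodes (and the user) each expose a CSR for their own key to be signed.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    return x509.CertificateSigningRequestBuilder().subject_name(name).sign(
        key, hashes.SHA256())


def sign_csr(ca_key, ca_cert, csr):
    # The CA holder signs each CSR and returns the cert to the node;
    # the private keys never leave the machines that generated them.
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + VALIDITY)
        .sign(ca_key, hashes.SHA256())
    )
```

The point of the flow is visible in the sketch: only CSRs and signed certs cross the wire, so neither Magnum nor the cloud administrator ever holds the user's CA signing key.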
>>
>> Thanks,
>>
>> Adrian
>>
>>>
>>> Tom
>>>
>>> On 15/06/15 17:52, Egor Guz wrote:
>>>> +1 for non-Barbican support first; unfortunately, Barbican is not very well adopted in existing installations.
>>>>
>>>> Madhuri, also please keep in mind we should come up with a solution which works with Swarm and Mesos as well in the future.
>>>>
>>>> —
>>>> Egor
>>>>
>>>> From: Madhuri Rai <madhuri.ra...@gmail.com>
>>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>>> Date: Monday, June 15, 2015 at 0:47
>>>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>>>> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
>>>>
>>>> Hi,
>>>>
>>>> Thanks, Adrian, for the quick response. Please find my response inline.
>>>>
>>>> On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
>>>> Madhuri,
>>>>
>>>> On Jun 14, 2015, at 10:30 PM, Madhuri Rai <madhuri.ra...@gmail.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> This is to bring the blueprint secure-kubernetes <https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes> into discussion. I have been trying to figure out what the possible change areas to support this feature in Magnum could be. Below is just a rough idea on how to proceed further on it.
>>>>
>>>> This task can be further broken into smaller pieces.
>>>>
>>>> 1. Add support for TLS in python-k8sclient.
>>>> The current auto-generated code doesn't support TLS, so this work will be to add TLS support to the kubernetes python APIs.
>>>>
>>>> 2. Add support for Barbican in Magnum.
>>>> Barbican will be used to store all the keys and certificates.
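[Editor's note: as a rough illustration of what point 1 above amounts to at the transport level, assuming the client ends up doing standard mutual TLS. This is stdlib-only sketch code with placeholder file paths, not the actual python-k8sclient implementation.]

```python
# Illustrative only: a TLS-capable connection to a kube-apiserver,
# using the stdlib ssl module.
import http.client
import ssl


def make_tls_context(cafile=None, certfile=None, keyfile=None):
    # Verify the kube-apiserver against the bay's CA bundle and, when a
    # client cert/key pair is supplied, authenticate ourselves with it.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx


def apiserver_connection(host, port=6443, **tls_files):
    # Hypothetical helper: each generated API call would go over a
    # connection built with the mutual-TLS context above.
    return http.client.HTTPSConnection(
        host, port, context=make_tls_context(**tls_files))
```

So "adding TLS support" is largely about threading a CA bundle and a client cert/key pair through the generated client's HTTP layer.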
>>>>
>>>> Keep in mind that not all clouds will support Barbican yet, so this approach could impair adoption of Magnum until Barbican is universally supported. It might be worth considering a solution that would generate all keys on the client and copy them to the Bay master for communication with other Bay nodes. This is less secure than using Barbican, but would allow for use of Magnum before Barbican is adopted.
>>>>
>>>> +1, I agree. One question here: we are trying to secure the communication between magnum-conductor and kube-apiserver, right?
>>>>
>>>> If both methods were supported, the Barbican method should be the default, and we should put warning messages in the config file so that when the administrator relaxes the setting to use the non-Barbican configuration, he/she is made aware that it requires a less secure mode of operation.
>>>>
>>>> In non-Barbican support, the client will generate the keys and pass the location of the keys to the magnum services. Then, again, the heat template will copy and configure the kubernetes services on the master node, the same as in the step below.
>>>>
>>>> My suggestion is to completely implement the Barbican support first, and follow up that implementation with a non-Barbican option as a second iteration for the feature.
>>>>
>>>> How about implementing the non-Barbican support first, as this would be easier to implement, so that we can first concentrate on points 1 and 3? Then, after that, we can work on Barbican support with more insight.
>>>>
>>>> Another possibility would be for Magnum to use its own private installation of Barbican in cases where it is not available in the service catalog. I dislike this option because it creates an operational burden for maintaining the private Barbican service, and additional complexities in securing it.
>>>>
>>>> In my opinion, installation of Barbican should be independent of Magnum.
>>>> My idea here is: if the user wants to store his/her keys in Barbican, then he/she will install it. We will have a config parameter like "store_secure"; when True, it means we have to store the keys in Barbican, or else not. What do you think?
>>>>
>>>> 3. Add support for TLS in Magnum.
>>>> This work mainly involves supporting the use of keys and certificates in magnum to support TLS.
>>>>
>>>> The user generates the keys and certificates and stores them in Barbican. Now there are two ways to access these keys while creating a bay.
>>>>
>>>> Rather than "the user generates the keys…", perhaps it might be better to word that as "the magnum client library code generates the keys for the user…”.
>>>>
>>>> It is "user" here. In my opinion, there could be users who don't want to use the magnum client but rather the APIs directly; in that case, the user will generate the keys themselves.
>>>>
>>>> In our first implementation, we can support the user generating the keys, and then later the client generating the keys.
>>>>
>>>> 1. Heat will access Barbican directly.
>>>> While creating a bay, the user will provide this key, and the heat templates will fetch this key from Barbican.
>>>>
>>>> I think you mean that Heat will use the Barbican key to fetch the TLS key for accessing the native API service running on the Bay.
>>>>
>>>> Yes.
>>>>
>>>> 2. Magnum-conductor accesses Barbican.
>>>> While creating a bay, the user will provide this key, and then magnum-conductor will fetch this key from Barbican and provide it to heat.
>>>>
>>>> Then heat will copy these files onto the kubernetes master node. Then the bay will use this key to start Kubernetes services signed with these keys.
>>>>
>>>> Make sure that the Barbican keys used by Heat and magnum-conductor to store the various TLS certificates/keys are unique per tenant and per bay, and are not shared among multiple tenants.
>>>> We don’t want it to ever be possible to trick Magnum into revealing secrets belonging to other tenants.
>>>>
>>>> Yes, I will take care of it.
>>>>
>>>> After discussion, when we all come to the same point, I will create separate blueprints for each task. I am currently working on configuring Kubernetes services with TLS keys.
>>>>
>>>> Please provide your suggestions, if any.
>>>>
>>>> Thanks for kicking off this discussion.
>>>>
>>>> Regards,
>>>>
>>>> Adrian
>>>>
>>>> Regards,
>>>> Madhuri
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev