Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-17 Thread Clark, Robert Graham
I think this is an interesting, if somewhat difficult to follow, thread.

It’s worth keeping in mind that there are more ways to handle certificates in 
OpenStack than just Barbican, though there are often good reasons to use it.

Is there a blueprint or scheduled IRC meeting to discuss the options? If useful 
I’d be happy to arrange for some folks from the Security Project to take a 
look, we spend a lot of time collectively dealing with TLS issues and might be 
able to help with the path-finding for TLS in Magnum.

-Rob


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-17 Thread Fox, Kevin M
Do consider another use case, that of a private docker cluster...

I may want to use magnum to deploy a docker cluster on a private neutron 
network as a mid/backend tier of a larger scalable cloud 
application. Floating IPs would not be used in this case, since the machines 
that need to talk to the docker cluster would be on the same private 
neutron network. So I'd rather use RFC 1918 space on the private network and 
ensure the public networks can never reach it.

Thanks,
Kevin


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Fox, Kevin M
Out of the box, VMs usually can contact the controllers through the router's NAT, 
but not vice versa. So it's preferable for guest agents to make the connection, 
rather than the controller connecting to the guest agents. No floating IPs, security 
group rules, or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
 No, I was confused by your statement:
 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create.

 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.

 It does raise the question though, are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
 floating IP on every instance. It's less than ideal...


Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
forced to use a floating IP?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Clint Byrum
I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
IPs or NAT. This seems orthogonal to how external users find the minions.
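Clint's topology can be sketched with Python's stdlib ipaddress module (addresses as in his example; the actual route exchange is of course up to the gateways):

```python
import ipaddress

# Clint's example topology: minions on one routable subnet, Magnum on another.
minion_net = ipaddress.ip_network("192.0.2.0/24")
magnum_net = ipaddress.ip_network("198.51.100.0/24")

minions = [ipaddress.ip_address(f"192.0.2.{host}") for host in (100, 101, 102)]
magnum = ipaddress.ip_address("198.51.100.1")

# Each side only needs a route to the other subnet; no NAT or floating IPs.
assert all(m in minion_net for m in minions)
assert magnum in magnum_net
assert not minion_net.overlaps(magnum_net)
```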



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Adrian Otto
Clint,

Hi! It’s good to hear from you!

On Jun 16, 2015, at 8:58 PM, Clint Byrum cl...@fewbar.com wrote:

I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
IPs or NAT. This seems orthogonal to how external users find the minions.

That’s correct. Keep in mind that large clouds use layer 3 routing protocols to 
get packets around, especially for north/south traffic, where public IP 
addresses are typically used. Injecting new routes into the network fabric each 
time we create a bay might make network administrators reluctant to 
allow the adoption of Magnum. Pre-allocating large amounts of RFC 1918 address 
space to Magnum may also be impractical on networks that use those addresses 
extensively. Steve’s explanation of using routable addresses as floating IP 
addresses is one approach that leverages the prevailing SDN in the cloud’s network 
to address this concern.

Let’s not get too far off topic on this thread. We are discussing the 
implementation of TLS as a mechanism of access control for API services that 
run on networks that are reachable by the public. We got a good suggestion to 
use an approach that can work regardless of network connectivity between the 
Magnum control plane and the Nova instances (Magnum Nodes) and the containers 
that run on them. I’d like to see if we could use cloud-init to get the keys 
into the bay nodes (docker hosts). That way we can avoid the requirement for 
end-to-end network connectivity between bay nodes and the Magnum control plane.
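As a rough sketch of the cloud-init idea: one-way key delivery could look like the following, where the cloud-config layout and file paths are illustrative assumptions, not Magnum's actual implementation.

```python
import base64

def make_user_data(ca_pem: str, cert_pem: str, key_pem: str) -> str:
    """Build a #cloud-config document that writes TLS material onto a bay node.

    Delivered via Nova user-data at boot, so no network path from the bay
    back to the Magnum control plane is required. Paths are hypothetical.
    """
    def write_file(path: str, content: str, mode: str) -> str:
        encoded = base64.b64encode(content.encode()).decode()
        return (f"  - path: {path}\n"
                f"    permissions: '{mode}'\n"
                f"    encoding: b64\n"
                f"    content: {encoded}\n")

    return ("#cloud-config\n"
            "write_files:\n"
            + write_file("/etc/kubernetes/ssl/ca.pem", ca_pem, "0644")
            + write_file("/etc/kubernetes/ssl/server.pem", cert_pem, "0644")
            + write_file("/etc/kubernetes/ssl/server-key.pem", key_pem, "0600"))
```

The private key file gets mode 0600 so only root on the node can read it.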

Thanks,

Adrian

Excerpts from Steven Dake (stdake)'s message of 2015-06-16 19:40:25 -0700:
Clint,

Answering Clint’s question, yes there is a reason all nodes must expose a 
floating IP address.

In a Kubernetes cluster, each minion has a port address space.  When an 
external service contacts the floating IP’s port, the request is routed over 
the internal network to the correct container using a proxy mechanism.  The 
problem then is, how do you know which minion to connect to with your external 
service?  The answer is you can connect to any of them.  Kubernetes only has 
one port address space, so Kubernetes suffers from a single namespace problem 
(which Magnum solves with Bays).

Longer term it may make sense to put the minion external addresses on a RFC1918 
network, and put a floating VIF with a load balancer to connect to them.  Then 
no need for floating address per node.  We are blocked behind kubernetes 
implementing proper support for load balancing in OpenStack to even consider 
this work.

Regards
-steve

From: Fox, Kevin M 
kevin@pnnl.govmailto:kevin@pnnl.govmailto:kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, June 16, 2015 at 6:36 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Out of the box, vms usually can contact the controllers though the routers nat, 
but not visa versa. So its preferable for guest agents to make the connection, 
not the controller connect to the guest agents. No floating ips, security group 
rules or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
No, I was confused by your statement:
When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create.

It sounded like you were using that keypair to inject a public key. I just 
misunderstood.

It does raise the question though, are you using ssh between the controller and 
the instance anywhere? If so, we will still run into issues when we go to try 
and test it at our site. Sahara does currently, and we're forced to put a 
floating ip on every instance. Its less then ideal...


Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
forced to use a floating IP?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Steven Dake (stdake)
Clint,

Answering Clint’s question, yes there is a reason all nodes must expose a 
floating IP address.

In a Kubernetes cluster, each minion has a port address space.  When an 
external service contacts the floating IP’s port, the request is routed over 
the internal network to the correct container using a proxy mechanism.  The 
problem then is, how do you know which minion to connect to with your external 
service?  The answer is you can connect to any of them.  Kubernetes only has 
one port address space, so Kubernetes suffers from a single namespace problem 
(which Magnum solves with Bays).
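A minimal sketch of the "connect to any of them" property Steve describes: an external client can treat every minion's floating IP as an equivalent entry point for a given service port. The addresses and port below are made up for illustration.

```python
import random

# Every minion proxies the same service port space (the kube-proxy behavior
# described above), so any floating IP is a valid entry point for a service.
minion_floating_ips = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
service_port = 30080  # hypothetical port in the cluster's shared port space

def pick_endpoint() -> tuple:
    """Pick any minion; the cluster proxies the request to the right container."""
    return random.choice(minion_floating_ips), service_port

host, port = pick_endpoint()
assert host in minion_floating_ips and port == service_port
```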

Longer term it may make sense to put the minion external addresses on an RFC 1918 
network, and put a floating VIF with a load balancer in front to connect to them.  Then 
there is no need for a floating address per node.  We are blocked behind kubernetes 
implementing proper support for load balancing in OpenStack before we can even 
consider this work.

Regards
-steve



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread 大塚元央
Hi Tom,

On Tue, Jun 16, 2015 at 3:00, Tom Cammann tom.camm...@hp.com wrote:


 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with
 the
 possibility of using Anchor. This would take a lot of the onus off of
 the user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.
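The signing flow Tom describes (the user generates a key pair, then has Magnum/Anchor sign the certificate) would reduce, on the client side, to posting a CSR to a signing endpoint. The `/v1/sign` path and the `csr` field below are assumptions for illustration, not Anchor's documented API:

```python
import json
import urllib.request

def build_sign_request(base_url: str, csr_pem: str) -> urllib.request.Request:
    """Prepare (but do not send) a CSR-signing request.

    The /v1/sign path and the 'csr' field name are hypothetical; check the
    real CA service (Anchor or Magnum's own) for its actual interface.
    """
    body = json.dumps({"csr": csr_pem}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/sign",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sign_request("http://magnum.example:9511", "-----BEGIN CERTIFICATE REQUEST-----")
assert req.get_method() == "POST"
```

Only the CSR (public material) crosses the wire; the private key never leaves the client.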


I'm not sure about Anchor.
You mean Anchor could be used to implement Magnum as a CA, right?

Thanks
-yuanying


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
If you're asking the cloud provider to go through the effort of installing Magnum, 
it's not much extra effort to install Barbican at the same time. Making it 
a dependency isn't too bad then, IMHO.

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Sunday, June 14, 2015 11:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes 
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into 
discussion. I have been trying to figure out what the possible change areas 
to support this feature in Magnum could be. Below is just a rough idea of 
how to proceed further on it.

This task can be further broken into smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS, so this work will be to 
add TLS support to the kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.
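A client-side generation flow like the one sketched above could be as simple as the following; the file names and certificate subject are placeholders, not anything Magnum prescribes:

```shell
# Generate a private key and a CSR locally; only the CSR (public material)
# would ever need to be copied or submitted for signing.
openssl genrsa -out bay-client.key 2048
openssl req -new -key bay-client.key -out bay-client.csr -subj "/CN=magnum-bay-client"
```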

If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

3. Add support for TLS in Magnum.
This work mainly involves supporting the use of keys and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now there 
are two ways to access these keys while creating a bay.

Rather than “the user generates the keys…”, perhaps it might be better to word 
that as “the magnum client library code generates the keys for the user…”.

1. Heat will access Barbican directly.
While creating a bay, the user will provide this key and the heat templates will 
fetch it from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.

2. Magnum-conductor accesses Barbican.
While creating a bay, the user will provide this key, and then Magnum-conductor 
will fetch it from Barbican and provide it to heat.

Then heat will copy these files onto the kubernetes master node, and the bay will 
use them to start the Kubernetes services signed with these keys.

Make sure that the Barbican keys used by Heat and magnum-conductor to store the 
various TLS certificates/keys are unique per tenant and per bay, and are not 
shared among multiple tenants. We don’t want it to ever be possible to trick 
Magnum into revealing secrets belonging to other tenants.
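One way to illustrate that scoping rule: derive secret names from both the tenant and the bay, so a lookup for one tenant's bay can never resolve to another tenant's secret. The naming scheme below is hypothetical, not Magnum's or Barbican's actual convention:

```python
def scoped_secret_name(tenant_id: str, bay_uuid: str, kind: str) -> str:
    """Build a per-tenant, per-bay secret name (hypothetical scheme).

    Including both IDs in the name keeps every (tenant, bay) pair's TLS
    material in a distinct namespace, as Adrian's note requires.
    """
    if not tenant_id or not bay_uuid:
        raise ValueError("tenant and bay must both be set")
    return f"magnum/{tenant_id}/{bay_uuid}/{kind}"

name = scoped_secret_name("tenant-a", "bay-1234", "tls-ca")
assert name == "magnum/tenant-a/bay-1234/tls-ca"
```

Barbican-side access policy per tenant is still needed on top of naming; unique names alone are not an access control.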

After discussion, when we all come to the same point, I will create separate 
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.
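On the client side (e.g. for the python-k8sclient work in item 1), mutual TLS against the kube-apiserver comes down to a context like the following sketch, using Python's stdlib ssl module; the file paths are placeholders:

```python
import ssl

def k8s_client_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Mutual-TLS client context: verify the API server against the bay's CA
    and present our client certificate. File paths are placeholders."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```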

Please provide your suggestions if any.

Thanks for kicking off this discussion.

Regards,

Adrian



Regards,
Madhuri


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
Please see https://review.openstack.org/#/c/186617 (Nova Instance Users) and 
review.

We're working hard on trying to get the heat -> nova -> instance -> barbican 
secret storage workflow working smoothly.

Also related are: https://review.openstack.org/#/c/190404/ (Barbican ACLs) and 
https://review.openstack.org/#/c/190732/ (Unscoped Service Catalog).

Thanks,
Kevin



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
Adrian,

On Tue, Jun 16, 2015 at 2:39 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Madhuri,

  On Jun 15, 2015, at 12:47 AM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

  Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken into smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.


 +1, I agree. One question here: we are trying to secure the communication
 between magnum-conductor and kube-apiserver, right?


  We need API services that are on public networks to be secured with TLS,
 or another approach that will allow us to implement access control so that
 these API’s can only be accessed by those with the correct keys. This need
 extends to all places in Magnum where we are exposing native API’s.


Ok, I understand.


  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.


  For the non-Barbican support, the client will generate the keys and pass the
 location of the keys to the Magnum services. The Heat template will then
 copy the keys and configure the Kubernetes services on the master node, the
 same as in the step below.


  Good!

  My suggestion is to completely implement the Barbican support first,
 and follow up that implementation with a non-Barbican option as a second
 iteration for the feature.


  How about implementing the non-Barbican support first, as it would be
 easier to implement, so that we can first concentrate on points 1 and 3?
 Then, after that, we can work on Barbican support with more insight.


  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.


 In my opinion, the installation of Barbican should be independent of Magnum.
 My idea here is that if users want to store their keys in Barbican, then
 they will install it.
 We will have a config parameter like store_secure: when True, the keys are
 stored in Barbican; otherwise they are not.
  What do you think?
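The store_secure idea could look roughly like this. The option name and both backend classes are hypothetical illustrations, not existing Magnum code; BarbicanStore would wrap python-barbicanclient in a real implementation:

```python
class BarbicanStore:
    """Would wrap python-barbicanclient; stubbed here for illustration."""
    def store(self, name, secret):
        return ('barbican', name)

class LocalStore:
    """Fallback: key material handled out-of-band (less secure)."""
    def store(self, name, secret):
        return ('local', name)

def get_cert_store(conf):
    # When store_secure is True (the default, to stay secure-by-default),
    # TLS material goes to Barbican; flipping it to False is the
    # operator's conscious decision to relax security.
    if conf.get('store_secure', True):
        return BarbicanStore()
    return LocalStore()
```

Defaulting the hypothetical flag to True matches Adrian's point that the less secure option should never be the default once the secure one exists.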


*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys and certificates and stores them in Barbican. Now
 there are two ways to access these keys while creating a bay.


  Rather than the user generates the keys…, perhaps it might be better
 to word that as the magnum client library code generates the keys for the
 user…”.


 It is the user here. In my opinion, there could be users who don't want to
 use the magnum client and would rather use the APIs directly; in that case
 the user will generate the keys themselves.


  Good point.

In our first implementation, we can support the user generating the
 keys, and later add client-generated keys.


  Users should not require any knowledge of how TLS works, or related
 certificate management tools in order to use Magnum. Let’s aim for this.

  I do agree that’s a good logical first step, but I am reluctant to agree
 to it without confidence that we will add the additional security later. I
 want to achieve a secure-by-default configuration in Magnum. I’m happy to
 take measured forward progress toward this, but I don’t want the less
 secure option(s) to be the default once more secure options come along. By
 doing the more secure one first, and making it the default, we allow other
 options only when the administrator makes a conscious action to relax
 security to meet their constraints.


Barbican will be the default option.


  So, if 

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adam Young

On 06/15/2015 08:45 PM, Madhuri wrote:
+1 Kevin. We will make Barbican a dependency to make it the default 
option to secure keys.


Regards,
Madhuri

On Tue, Jun 16, 2015 at 12:48 AM, Fox, Kevin M kevin@pnnl.gov 
mailto:kevin@pnnl.gov wrote:


If you're asking the cloud provider to go through the effort to
install Magnum, it's not that much extra effort to install Barbican
at the same time. Making it a dependency isn't too bad then IMHO.



Please use Certmonger on the Magnum side, with the understanding that 
the Barbican team is writing a Certmonger plugin.


Certmonger can do self-signed certificates, and can talk to Dogtag if you need 
a real CA.  If we need to talk to other CAs, you write a helper script that 
Certmonger calls to post the CSR and fetch the signed cert, but 
Certmonger does the openssl/NSS work to properly manage the signing requests.
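Such a helper could be outlined as below. This is only a sketch: Certmonger passes the requested operation and the PEM CSR to external helpers through environment variables such as CERTMONGER_OPERATION and CERTMONGER_CSR, but the exact protocol and exit-code semantics should be checked against the Certmonger documentation before relying on them:

```python
def handle_certmonger_request(env):
    # Certmonger invokes the helper with the operation in the
    # environment; SUBMIT carries the PEM CSR to be signed.
    op = env.get('CERTMONGER_OPERATION', 'SUBMIT')
    if op == 'SUBMIT':
        csr = env.get('CERTMONGER_CSR', '')
        # A real helper would POST the CSR to the CA (Dogtag, Anchor,
        # ...) here and print the signed certificate on stdout.
        return ('submit', csr)
    # Anything not implemented is reported back as unsupported.
    return ('unsupported', op)
```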




Thanks,
Kevin

*From:* Adrian Otto [adrian.o...@rackspace.com
mailto:adrian.o...@rackspace.com]
*Sent:* Sunday, June 14, 2015 11:09 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Magnum] TLS Support in Magnum

Madhuri,


On Jun 14, 2015, at 10:30 PM, Madhuri Rai
madhuri.ra...@gmail.com mailto:madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into
discussion. I have been trying to figure out what could be the
possible change area to support this feature in Magnum. Below is
just a rough idea on how to proceed further on it.

This task can be further broken in smaller pieces.

*1. Add support for TLS in python-k8sclient.*
The current auto-generated code doesn't support TLS. So this work
will be to add TLS support in kubernetes python APIs.

*2. Add support for Barbican in Magnum.*
Barbican will be used to store all the keys and certificates.


Keep in mind that not all clouds will support Barbican yet, so
this approach could impair adoption of Magnum until Barbican is
universally supported. It might be worth considering a solution
that would generate all keys on the client, and copy them to the
Bay master for communication with other Bay nodes. This is less
secure than using Barbican, but would allow for use of Magnum
before Barbican is adopted.

If both methods were supported, the Barbican method should be the
default, and we should put warning messages in the config file so
that when the administrator relaxes the setting to use the
non-Barbican configuration he/she is made aware that it requires a
less secure mode of operation.

My suggestion is to completely implement the Barbican support
first, and follow up that implementation with a non-Barbican
option as a second iteration for the feature.

Another possibility would be for Magnum to use its own private
installation of Barbican in cases where it is not available in the
service catalog. I dislike this option because it creates an
operational burden for maintaining the private Barbican service,
and additional complexities with securing it.


*3. Add support of TLS in Magnum.*
This work mainly involves supporting the use of key and
certificates in magnum to support TLS.

The user generates the keys and certificates and stores them in
Barbican. Now there are two ways to access these keys while
creating a bay.


Rather than the user generates the keys…, perhaps it might be
better to word that as the magnum client library code generates
the keys for the user…”.


1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat
templates will fetch this key from Barbican.


I think you mean that Heat will use the Barbican key to fetch the
TLS key for accessing the native API service running on the Bay.


2. Magnum-conductor access Barbican.
While creating bay, the user will provide this key and then
Magnum-conductor will fetch this key from Barbican and provide
this key to heat.

Heat will then copy these files onto the Kubernetes master node,
and the bay will use these keys to start the Kubernetes services
with TLS enabled.


Make sure that the Barbican keys used by Heat and magnum-conductor
to store the various TLS certificates/keys are unique per tenant
and per bay, and are not shared among multiple tenants. We don’t
want it to ever be possible to trick Magnum into revealing secrets
belonging to other tenants.


Once we all reach agreement in this discussion, I will create
separate blueprints for each task.
I am currently working on configuring Kubernetes services with
TLS keys.

Please provide your suggestions if any.


Thanks for kicking off

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
+1 Kevin. We will make Barbican a dependency to make it the default option
to secure keys.

Regards,
Madhuri

On Tue, Jun 16, 2015 at 12:48 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  If you're asking the cloud provider to go through the effort to install
 Magnum, it's not that much extra effort to install Barbican at the same
 time. Making it a dependency isn't too bad then IMHO.

 Thanks,
 Kevin
  --
 *From:* Adrian Otto [adrian.o...@rackspace.com]
 *Sent:* Sunday, June 14, 2015 11:09 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Magnum] TLS Support in Magnum

  Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.

  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.

  My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.

  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.

*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys and certificates and stores them in Barbican. Now
 there are two ways to access these keys while creating a bay.


  Rather than the user generates the keys…, perhaps it might be better to
 word that as the magnum client library code generates the keys for the
 user…”.

 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.


  I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.

 2. Magnum-conductor access Barbican.
 While creating bay, the user will provide this key and then
 Magnum-conductor will fetch this key from Barbican and provide this key to
 heat.

 Heat will then copy these files onto the Kubernetes master node, and the bay
 will use these keys to start the Kubernetes services with TLS enabled.


  Make sure that the Barbican keys used by Heat and magnum-conductor to
 store the various TLS certificates/keys are unique per tenant and per bay,
 and are not shared among multiple tenants. We don’t want it to ever be
 possible to trick Magnum into revealing secrets belonging to other tenants.

 After discussion when we all come to same point, I will create
 separate blueprints for each task.
 I am currently working on configuring Kubernetes services with TLS keys.

  Please provide your suggestions if any.


  Thanks for kicking off this discussion.

  Regards,

  Adrian



  Regards,
  Madhuri






Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
 No, I was confused by your statement:
 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create.
 
 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.
 
 It does raise the question though: are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
 floating IP on every instance. It's less than ideal...
 

Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
forced to use a floating IP?



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Tom Cammann
My main issue with having the user generate the keys/certs for the kube nodes
is that the keys have to be insecurely moved onto the kube nodes. Barbican can
talk to heat, but heat must still copy them across to the nodes, exposing the
keys on the wire. Perhaps there are ways of moving secrets correctly which I
have missed.

I also agree that we should opt for a non-Barbican deployment first.

At the summit we talked about using Magnum as a CA and signing the
certificates, and we seemed to have some consensus about doing this with the
possibility of using Anchor. This would take a lot of the onus off of the user
to fiddle around with openssl and craft the right signed certs safely. Using
Magnum as a CA, the user would generate a key/cert pair and then get the cert
signed by Magnum, and the kube node would do the same. The main downside of
this technique is that the user MUST trust Magnum and the administrator, as
they would have access to the CA signing cert.

An alternative to that, where the user holds the CA cert/key, is to have
the user:


- generate a CA cert/key (or use existing corp one etc)
- generate own cert/key
- sign their cert with their CA cert/key
- spin up kubecluster
- each node would generate key/cert
- each node exposes this cert to be signed
- user signs each cert and returns it to the node.

This is going to be quite manual unless they have a CA that the kube nodes
can call into. However, this is the most secure way I could come up with.
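The division of responsibilities in the steps above can be modelled in a few lines. This is a toy illustration only: HMAC stands in for real X.509 signing (which openssl or a CA service such as Anchor would perform); the point is merely which party holds which secret:

```python
import hashlib
import hmac
import secrets

def new_key():
    # Stand-in for generating a private key.
    return secrets.token_hex(16)

def sign(ca_key, cert):
    # "Signing" a cert: the CA binds it to its own secret key.
    return hmac.new(ca_key.encode(), cert.encode(), hashlib.sha256).hexdigest()

def verify(ca_key, cert, signature):
    return hmac.compare_digest(sign(ca_key, cert), signature)

# The user generates a CA key and signs their own "cert" with it.
ca_key = new_key()
user_sig = sign(ca_key, 'user-cert')

# Each kube node generates its own cert and exposes it for signing;
# the user signs it and returns the signature to the node.
node_sig = sign(ca_key, 'node-cert')
```

Because only the user ever holds ca_key, neither Magnum nor the administrator can mint certificates the nodes would accept, which is exactly the trust property Tom describes.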

Tom

On 15/06/15 17:52, Egor Guz wrote:

+1 for non-Barbican support first; unfortunately, Barbican is not very well 
adopted in existing installations.

Madhuri, also please keep in mind that we should come up with a solution that 
will also work with Swarm and Mesos in the future.

—
Egor

From: Madhuri Rai madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 at 0:47
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
 in discussion. I have been trying to figure out what could be the possible change 
area to support this feature in Magnum. Below is just a rough idea on how to proceed 
further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here, we are trying to secure the communication 
between magnum-conductor and kube-apiserver. Right?


If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In non-Barbican support, client will generate the keys and pass the location of 
the key to the magnum services. Then again heat template will copy and 
configure the kubernetes services on master node. Same as the below step.


My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first as this would be easy to 
implement, so that we can first concentrate on Point 1 and 3. And then after 
it, we can work on Barbican support with more insights.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, installation of Barbican should be independent of Magnum. My 
idea here

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Tom,

 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:
 
 My main issue with having the user generate the keys/certs for the kube nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.

When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create. We can use scp to securely 
transfer the keys over the wire using that keypair.
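Such a transfer could be driven from the conductor by shelling out to scp with the bay's keypair. A sketch that only builds the command line; the user name, paths and host are placeholders rather than Magnum's actual layout:

```python
def scp_push_command(identity_file, local_path, host, remote_path,
                     user='minion'):
    # Build the argv for copying TLS material to the bay master over
    # the ssh keypair injected at bay-creation time.
    return [
        'scp',
        '-i', identity_file,                # the bay's ssh private key
        '-o', 'StrictHostKeyChecking=yes',  # refuse unknown host keys
        local_path,
        '%s@%s:%s' % (user, host, remote_path),
    ]
```

subprocess.run(scp_push_command(...), check=True) would perform the copy; note that the master's host key would need to be pre-seeded for strict checking to pass on first contact.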

 I also agree that we should opt for a non-Barbican deployment first.
 
 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.
 
 An alternative to that where the user holds the CA cert/key, is to have the 
 user:
 
 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.
 
 This is going quite manual unless they have a CA that the kube nodes can call
 into. However this is the most secure way I could come up with.

Perhaps we can expose a “replace keys” feature that could be used to facilitate 
this after initial setup of the bay. This way you could establish a trust that 
excludes the administrator. This approach potentially lends itself to 
additional automation to make the replacement process a bit less manual.

Thanks,

Adrian

 
 Tom
 
 On 15/06/15 17:52, Egor Guz wrote:
 +1 for non-Barbican support first, unfortunately Barbican is not very well 
 adopted in existing installation.
 
 Madhuri, also please keep in mind we should come with solution which should 
 work with Swarm and Mesos as well in further.
 
 —
 Egor
 
 From: Madhuri Rai madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Hi,
 
 Thanks Adrian for the quick response. Please find my response inline.
 
 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
 adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
 Madhuri,
 
 On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
 madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com wrote:
 
 Hi All,
 
 This is to bring the blueprint  
 secure-kuberneteshttps://blueprints.launchpad.net/magnum/+spec/secure-kubernetes
  in discussion. I have been trying to figure out what could be the possible 
 change area to support this feature in Magnum. Below is just a rough idea on 
 how to proceed further on it.
 
 This task can be further broken in smaller pieces.
 
 1. Add support for TLS in python-k8sclient.
 The current auto-generated code doesn't support TLS. So this work will be to 
 add TLS support in kubernetes python APIs.
 
 2. Add support for Barbican in Magnum.
 Barbican will be used to store all the keys and certificates.
 
 Keep in mind that not all clouds will support Barbican yet, so this approach 
 could impair adoption of Magnum until Barbican is universally supported. It 
 might be worth considering a solution that would generate all keys on the 
 client, and copy them to the Bay master for communication with other Bay 
 nodes. This is less secure than using Barbican, but would allow for use of 
 Magnum before Barbican is adopted.
 
 +1, I agree. One question here, we are trying to secure the communication 
 between magnum-conductor and kube-apiserver. Right?
 
 
 If both methods were supported, the Barbican method should be the default, 
 and we should put warning messages in the config file so that when the 
 administrator relaxes the setting to use the non-Barbican configuration 
 he/she is made aware that it requires a less secure mode of operation.
 
 In non-Barbican support, client will generate the keys and pass the location 
 of the key to the magnum services. Then again heat template will copy and 
 configure the kubernetes services on master node. Same as the below step.
 
 
 My suggestion is to completely implement the Barbican support first, and 
 follow

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Egor Guz
+1 for non-Barbican support first; unfortunately, Barbican is not very well 
adopted in existing installations.

Madhuri, also please keep in mind that we should come up with a solution that 
will also work with Swarm and Mesos in the future.

—
Egor

From: Madhuri Rai madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 at 0:47
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
 in discussion. I have been trying to figure out what could be the possible 
change area to support this feature in Magnum. Below is just a rough idea on 
how to proceed further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here, we are trying to secure the communication 
between magnum-conductor and kube-apiserver. Right?


If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In non-Barbican support, client will generate the keys and pass the location of 
the key to the magnum services. Then again heat template will copy and 
configure the kubernetes services on master node. Same as the below step.


My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first as this would be easy to 
implement, so that we can first concentrate on Point 1 and 3. And then after 
it, we can work on Barbican support with more insights.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, installation of Barbican should be independent of Magnum. My 
idea here is, if user wants to store his/her keys in Barbican then he/she will 
install it.
We will have a config parameter like store_secure: when True, the keys are 
stored in Barbican; otherwise they are not.
What do you think?

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now 
there are two ways to access these keys while creating a bay.

Rather than the user generates the keys…, perhaps it might be better to word 
that as the magnum client library code generates the keys for the user…”.

It is user here. In my opinion, there could be users who don't want to use 
magnum client rather the APIs directly, in that case the user will generate the 
key themselves.

In our first implementation, we can support the user generating the keys and 
then later client generating the keys.

1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat templates will 
fetch this key from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.
Yes.

2. Magnum-conductor access Barbican.
While creating bay, the user will provide this key and then Magnum-conductor 
will fetch this key from Barbican and provide this key to heat.

Then heat will copy this files on kubernetes master node. Then bay will use 
this key to start a Kubernetes services signed

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Madhuri,

On Jun 15, 2015, at 12:47 AM, Madhuri Rai 
madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com wrote:

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
 in discussion. I have been trying to figure out what could be the possible 
change area to support this feature in Magnum. Below is just a rough idea on 
how to proceed further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here, we are trying to secure the communication 
between magnum-conductor and kube-apiserver. Right?

We need API services that are on public networks to be secured with TLS, or 
another approach that will allow us to implement access control so that these 
APIs can only be accessed by those with the correct keys. This need extends to 
all places in Magnum where we expose native APIs.

If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In the non-Barbican approach, the client will generate the keys and pass the 
location of the keys to the Magnum services. Then again the heat template will 
copy and configure the Kubernetes services on the master node, same as the step 
below.

Good!

My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first, as this would be easy to 
implement, so that we can first concentrate on Points 1 and 3? Then afterwards 
we can work on Barbican support with more insight.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, the installation of Barbican should be independent of Magnum. My 
idea here is: if a user wants to store his/her keys in Barbican, then he/she 
will install it.
We would have a config parameter like store_secure which, when True, means we 
store the keys in Barbican, and otherwise not.
What do you think?
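A sketch of how such a store_secure toggle might behave; the option name is the one proposed above, and the warning mirrors the request that relaxing the secure default be a conscious, visible choice:

```python
import warnings

def select_secret_backend(store_secure=True):
    """Pick where TLS keys are kept. Barbican is the secure default;
    disabling it is allowed but loudly flagged. Sketch only, not the
    actual Magnum config handling."""
    if store_secure:
        return "barbican"
    warnings.warn(
        "store_secure=False: TLS keys will be handled outside Barbican. "
        "This is a less secure mode of operation.")
    return "local"
```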

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now 
there are two ways to access these keys while creating a bay.

Rather than "the user generates the keys…", perhaps it might be better to word 
that as "the magnum client library code generates the keys for the user…".

It is the user here. In my opinion, there could be users who don't want to use 
the magnum client but rather the APIs directly; in that case the user will 
generate the keys themselves.

Good point.

In our first implementation, we can support the user generating the keys and 
then later client generating the keys.

Users should not require any knowledge of how TLS works, or related certificate 
management tools in order to use Magnum. Let’s aim for this.

I do agree that’s a good logical first step, but I am reluctant to agree to it 
without confidence that we will add the additional security later. I want to 
achieve a secure-by-default configuration in Magnum. I’m happy to take measured 
forward progress toward this, but I don’t want the less secure option(s) to be 
the default once more secure options come along. By doing the more secure one 
first, and making it the default, we allow other options only when the 
administrator makes a conscious action to relax security to meet their 
constraints.

So, if our team agrees that doing simple key management without Barbican should 
be our first step, I will agree to that under the condition that we adjust the 

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
Why not just push the ssh keypair via cloud-init? It's more firewall friendly.

Having the controller -> instance communication via ssh has proven very 
problematic for us for a lot of projects. :/

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Monday, June 15, 2015 11:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Tom,

 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:

 My main issue with having the user generate the keys/certs for the kube nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.

When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create. We can use scp to securely 
transfer the keys over the wire using that keypair.

 I also agree that we should opt for a non-Barbican deployment first.

 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.

 An alternative to that where the user holds the CA cert/key, is to have the 
 user:

 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.

 This is getting quite manual unless they have a CA that the kube nodes can call
 into. However, this is the most secure way I could come up with.
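The per-node flow above can be sketched with the `cryptography` package, standing in for whatever ends up doing the signing (the user's own tooling, Magnum-as-CA, or Anchor). RSA keys and a one-year validity are assumptions for illustration:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def _name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.utcnow()

# User (or corporate PKI) side: a CA key and self-signed CA certificate.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_cert = (x509.CertificateBuilder()
           .subject_name(_name(u"bay-ca")).issuer_name(_name(u"bay-ca"))
           .public_key(ca_key.public_key())
           .serial_number(x509.random_serial_number())
           .not_valid_before(now)
           .not_valid_after(now + datetime.timedelta(days=365))
           .sign(ca_key, hashes.SHA256()))

# Node side: generate its own key and expose a CSR to be signed.
node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(_name(u"kube-node-0"))
       .sign(node_key, hashes.SHA256()))

# User side: sign the node's CSR with the CA key and return the cert.
node_cert = (x509.CertificateBuilder()
             .subject_name(csr.subject).issuer_name(ca_cert.subject)
             .public_key(csr.public_key())
             .serial_number(x509.random_serial_number())
             .not_valid_before(now)
             .not_valid_after(now + datetime.timedelta(days=365))
             .sign(ca_key, hashes.SHA256()))
```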

Perhaps we can expose a “replace keys” feature that could be used to facilitate 
this after initial setup of the bay. This way you could establish a trust that 
excludes the administrator. This approach potentially lends itself to 
additional automation to make the replacement process a bit less manual.

Thanks,

Adrian


 Tom

 On 15/06/15 17:52, Egor Guz wrote:
 +1 for non-Barbican support first; unfortunately Barbican is not very well 
 adopted in existing installations.

 Madhuri, also please keep in mind we should come up with a solution which 
 works with Swarm and Mesos as well in the future.

 —
 Egor

 From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
 adrian.o...@rackspace.com wrote:
 Madhuri,

 On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
 madhuri.ra...@gmail.com wrote:

 Hi All,

 This is to bring the blueprint  
 secure-kubernetes <https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes>
  in discussion. I have been trying to figure out what could be the possible 
 change area to support this feature in Magnum. Below is just a rough idea on 
 how to proceed further on it.

 This task can be further broken into smaller pieces.

 1. Add support for TLS in python-k8sclient.
 The current auto-generated code doesn't support TLS. So this work will be to 
 add TLS support in kubernetes python APIs.

 2. Add support for Barbican in Magnum.
 Barbican will be used to store all the keys and certificates.

 Keep in mind that not all clouds will support Barbican yet, so this approach 
 could impair adoption of Magnum until Barbican is universally supported. It 
 might be worth considering a solution that would generate all keys on the 
 client, and copy them to the Bay master for communication with other Bay 
 nodes. This is less secure than using Barbican, but would allow for use of 
 Magnum before Barbican is adopted.

 +1, I agree. One question here, we are trying to secure the communication 
 between magnum-conductor and kube-apiserver. Right?


 If both methods were supported, the Barbican method should be the default, 
 and we should put warning messages in the config file so that when the 
 administrator relaxes the setting to use

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
Thanks Egor.

On Tue, Jun 16, 2015 at 1:52 AM, Egor Guz e...@walmartlabs.com wrote:

  +1 for non-Barbican support first; unfortunately Barbican is not very well
  adopted in existing installations.

  Madhuri, also please keep in mind we should come up with a solution which
  works with Swarm and Mesos as well in the future.


Good point. It will be the same; the only difference will be configuring
the respective services with signed certs and keys.


 —
 Egor

  From: Madhuri Rai madhuri.ra...@gmail.com
 
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
  wrote:
 Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com wrote:

 Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in
 discussion. I have been trying to figure out what could be the possible
 change area to support this feature in Magnum. Below is just a rough idea
 on how to proceed further on it.

  This task can be further broken into smaller pieces.

 1. Add support for TLS in python-k8sclient.
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 2. Add support for Barbican in Magnum.
 Barbican will be used to store all the keys and certificates.

 Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.

 +1, I agree. One question here, we are trying to secure the communication
 between magnum-conductor and kube-apiserver. Right?


 If both methods were supported, the Barbican method should be the default,
 and we should put warning messages in the config file so that when the
 administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.

  In the non-Barbican approach, the client will generate the keys and pass the
  location of the keys to the Magnum services. Then again the heat template
  will copy and configure the Kubernetes services on the master node, same as
  the step below.


 My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.

  How about implementing the non-Barbican support first, as this would be
  easy to implement, so that we can first concentrate on Points 1 and 3? Then
  afterwards we can work on Barbican support with more insight.

 Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.

  In my opinion, the installation of Barbican should be independent of Magnum.
  My idea here is: if a user wants to store his/her keys in Barbican, then
  he/she will install it.
  We would have a config parameter like store_secure which, when True, means
  we store the keys in Barbican, and otherwise not.
  What do you think?

 3. Add support of TLS in Magnum.
 This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

  The user generates the keys and certificates and stores them in Barbican.
  Now there are two ways to access these keys while creating a bay.

  Rather than "the user generates the keys…", perhaps it might be better to
  word that as "the magnum client library code generates the keys for the
  user…".

  It is the user here. In my opinion, there could be users who don't want to
  use the magnum client but rather the APIs directly; in that case the user
  will generate the keys themselves.

 In our first implementation, we can support the user generating the keys
 and then later client generating the keys.

 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.

 I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.
 Yes.

 2. Magnum-conductor access Barbican

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
No, I was confused by your statement:
"When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create."

It sounded like you were using that keypair to inject a public key. I just 
misunderstood.

It does raise the question though, are you using ssh between the controller and 
the instance anywhere? If so, we will still run into issues when we go to try 
and test it at our site. Sahara does currently, and we're forced to put a 
floating ip on every instance. It's less than ideal...

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Monday, June 15, 2015 3:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Kevin,

 On Jun 15, 2015, at 1:25 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Why not just push the ssh keypair via cloud-init? It's more firewall friendly.

Nova already handles the injection of the SSH key for us. I think you meant to 
suggest that we use cloud-init to inject the TLS keys, right?
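If cloud-init were used for the TLS material, the user_data could carry the files via the write_files module, avoiding any inbound connection from the controller to the instance. A sketch that renders such a #cloud-config (the target paths are illustrative):

```python
import base64

def tls_cloud_config(ca_pem, cert_pem, key_pem):
    """Render a #cloud-config user_data string that drops TLS material
    onto the instance at boot via cloud-init's write_files module.
    Sketch only; paths and layout are assumptions."""
    def entry(path, content):
        # base64-encode so PEM newlines survive the YAML transport
        b64 = base64.b64encode(content.encode()).decode()
        return ("  - path: {}\n"
                "    permissions: '0600'\n"
                "    encoding: b64\n"
                "    content: {}\n").format(path, b64)
    return ("#cloud-config\n"
            "write_files:\n"
            + entry("/etc/kubernetes/ssl/ca.pem", ca_pem)
            + entry("/etc/kubernetes/ssl/server.pem", cert_pem)
            + entry("/etc/kubernetes/ssl/server-key.pem", key_pem))
```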

Thanks,

Adrian

 Having the controller -> instance communication via ssh has proven very 
 problematic for us for a lot of projects. :/

 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 11:18 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Tom,

 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:

 My main issue with having the user generate the keys/certs for the kube nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican 
 can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.

 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create. We can use scp to securely 
 transfer the keys over the wire using that keypair.

 I also agree that we should opt for a non-Barbican deployment first.

 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.

 An alternative to that where the user holds the CA cert/key, is to have the 
 user:

 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.

  This is getting quite manual unless they have a CA that the kube nodes can call
  into. However, this is the most secure way I could come up with.

 Perhaps we can expose a “replace keys” feature that could be used to 
 facilitate this after initial setup of the bay. This way you could establish 
 a trust that excludes the administrator. This approach potentially lends 
 itself to additional automation to make the replacement process a bit less 
 manual.

 Thanks,

 Adrian


 Tom

 On 15/06/15 17:52, Egor Guz wrote:
  +1 for non-Barbican support first; unfortunately Barbican is not very well 
  adopted in existing installations.

  Madhuri, also please keep in mind we should come up with a solution which 
  works with Swarm and Mesos as well in the future.

 —
 Egor

  From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
  adrian.o...@rackspace.com wrote:
 Madhuri,

 On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
  madhuri.ra...@gmail.com wrote:

 Hi All,

 This is to bring the blueprint  
  secure-kubernetes <https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes>
  in discussion. I have been trying to figure out what could be the possible 
 change area to support this feature in Magnum. Below is just a rough idea 
 on how to proceed further

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Kevin,

We currently do not use SSH for any of our orchestration. You have highlighted 
a good reason for us to avoid that wherever possible. Good catch!

Cheers,

Adrian

 On Jun 15, 2015, at 3:59 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 No, I was confused by your statement:
 "When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create."
 
 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.
 
 It does raise the question though, are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
 floating ip on every instance. It's less than ideal...
 
 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 3:17 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Kevin,
 
 On Jun 15, 2015, at 1:25 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
  Why not just push the ssh keypair via cloud-init? It's more firewall friendly.
 
  Nova already handles the injection of the SSH key for us. I think you meant to 
 suggest that we use cloud-init to inject the TLS keys, right?
 
 Thanks,
 
 Adrian
 
  Having the controller -> instance communication via ssh has proven very 
  problematic for us for a lot of projects. :/
 
 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 11:18 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Tom,
 
 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:
 
 My main issue with having the user generate the keys/certs for the kube 
 nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican 
 can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.
 
 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create. We can use scp to securely 
 transfer the keys over the wire using that keypair.
 
 I also agree that we should opt for a non-Barbican deployment first.
 
 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.
 
 An alternative to that where the user holds the CA cert/key, is to have the 
 user:
 
 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.
 
  This is getting quite manual unless they have a CA that the kube nodes can
  call into. However, this is the most secure way I could come up with.
 
 Perhaps we can expose a “replace keys” feature that could be used to 
 facilitate this after initial setup of the bay. This way you could establish 
 a trust that excludes the administrator. This approach potentially lends 
 itself to additional automation to make the replacement process a bit less 
 manual.
 
 Thanks,
 
 Adrian
 
 
 Tom
 
 On 15/06/15 17:52, Egor Guz wrote:
  +1 for non-Barbican support first; unfortunately Barbican is not very well 
  adopted in existing installations.

  Madhuri, also please keep in mind we should come up with a solution which 
  works with Swarm and Mesos as well in the future.
 
 —
 Egor
 
  From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Hi,
 
 Thanks Adrian for the quick response. Please find my response inline.
 
 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
  adrian.o...@rackspace.com wrote:
 Madhuri,
 
 On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
  madhuri.ra...@gmail.com wrote:
 
 Hi All

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
Awesome. Thanks. :)

Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Monday, June 15, 2015 4:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Kevin,

We currently do not use SSH for any of our orchestration. You have highlighted 
a good reason for us to avoid that wherever possible. Good catch!

Cheers,

Adrian

 On Jun 15, 2015, at 3:59 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  No, I was confused by your statement:
  "When we create a bay, we have an ssh keypair that we use to inject the ssh 
  public key onto the nova instances we create."

 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.

 It does raise the question though, are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
  floating ip on every instance. It's less than ideal...

 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 3:17 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Kevin,

 On Jun 15, 2015, at 1:25 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Why not just push the ssh keypair via cloud-init? It's more firewall friendly.

  Nova already handles the injection of the SSH key for us. I think you meant to 
 suggest that we use cloud-init to inject the TLS keys, right?

 Thanks,

 Adrian

  Having the controller -> instance communication via ssh has proven very 
  problematic for us for a lot of projects. :/

 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 11:18 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Tom,

 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:

 My main issue with having the user generate the keys/certs for the kube 
 nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican 
 can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.

 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create. We can use scp to securely 
 transfer the keys over the wire using that keypair.

 I also agree that we should opt for a non-Barbican deployment first.

 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.

 An alternative to that where the user holds the CA cert/key, is to have the 
 user:

 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.

  This is getting quite manual unless they have a CA that the kube nodes can
  call into. However, this is the most secure way I could come up with.

 Perhaps we can expose a “replace keys” feature that could be used to 
 facilitate this after initial setup of the bay. This way you could establish 
 a trust that excludes the administrator. This approach potentially lends 
 itself to additional automation to make the replacement process a bit less 
 manual.

 Thanks,

 Adrian


 Tom

 On 15/06/15 17:52, Egor Guz wrote:
  +1 for non-Barbican support first; unfortunately Barbican is not very well 
  adopted in existing installations.

  Madhuri, also please keep in mind we should come up with a solution which 
  works with Swarm and Mesos as well in the future.

 —
 Egor

  From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
<https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes>
 in discussion. I have been trying to figure out what could be the possible 
change area to support this feature in Magnum. Below is just a rough idea on 
how to proceed further on it.

This task can be further broken into smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now 
there are two ways to access these keys while creating a bay.

Rather than "the user generates the keys…", perhaps it might be better to word 
that as "the magnum client library code generates the keys for the user…".

1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat templates will 
fetch this key from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.

2. Magnum-conductor accesses Barbican.
While creating bay, the user will provide this key and then Magnum-conductor 
will fetch this key from Barbican and provide this key to heat.

Then heat will copy these files onto the Kubernetes master node. The bay will 
then use this key to start the Kubernetes services signed with these keys.

Make sure that the Barbican keys used by Heat and magnum-conductor to store the 
various TLS certificates/keys are unique per tenant and per bay, and are not 
shared among multiple tenants. We don’t want it to ever be possible to trick 
Magnum into revealing secrets belonging to other tenants.
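One way to keep that scoping explicit is to bake the tenant and bay identifiers into the secret name itself; the naming scheme below is purely illustrative:

```python
def bay_secret_name(tenant_id, bay_uuid, kind):
    """Build a Barbican secret name scoped to one tenant and one bay,
    so TLS secrets can never be confused across tenants or bays.
    Illustrative naming scheme, not Magnum's actual convention."""
    allowed = {"ca-cert", "server-cert", "server-key"}
    if kind not in allowed:
        raise ValueError("unknown secret kind: %s" % kind)
    return "magnum/{}/{}/{}".format(tenant_id, bay_uuid, kind)
```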

After discussion, when we all come to the same point, I will create separate 
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

Please provide your suggestions if any.

Thanks for kicking off this discussion.

Regards,

Adrian



Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri Rai
Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken into smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.
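Since the swagger-generated client currently builds its connections without any certificate options, the patched client would essentially need to construct an SSL context. A minimal sketch using only the standard library ssl module — the helper name and arguments here are illustrative, not python-k8sclient's actual API:

```python
import ssl

def build_tls_context(ca_cert=None, cert_file=None, key_file=None):
    """Client-side TLS context of the kind a patched python-k8sclient
    could use when talking to kube-apiserver. ca_cert is the path to
    the CA bundle that signed the API server's certificate;
    cert_file/key_file hold the client certificate for mutual TLS."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_cert)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unverified servers
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

The generated client would then pass this context wherever it opens an HTTPS connection to the API server.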

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.


+1, I agree. One question here: we are trying to secure the communication
between magnum-conductor and kube-apiserver, right?


  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.


With non-Barbican support, the client will generate the keys and pass the
location of the keys to the magnum services. Then, again, the heat template
will copy and configure the kubernetes services on the master node, same as in
the step below.


  My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.


How about implementing the non-Barbican support first, as this would be easier
to implement? That way we can first concentrate on points 1 and 3, and then
work on Barbican support with more insight.


  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.


In my opinion, the installation of Barbican should be independent of Magnum. My
idea here is: if a user wants to store his/her keys in Barbican, then he/she
will install it.
We would have a config parameter like store_secure which, when True, means we
store the keys in Barbican, and otherwise not.
What do you think?
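A minimal sketch of how such a flag could gate the storage backend — the option name store_secure comes from the proposal above, while everything else here is invented purely for illustration:

```python
def select_cert_backend(conf):
    """Choose where bay TLS material is stored, driven by a
    hypothetical store_secure option: Barbican when enabled
    (the secure default), a plain local store otherwise."""
    if conf.get('store_secure', True):
        return 'barbican'
    return 'local'

# Operators must opt out of Barbican explicitly.
backend = select_cert_backend({'store_secure': False})
```

Defaulting the flag to True keeps Barbican the secure default, matching the earlier suggestion that the non-Barbican path should require a deliberate, warned-about configuration change.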


*3. Add support for TLS in Magnum.*
  This work mainly involves supporting the use of keys and certificates in
 magnum to support TLS.

 The user generates the keys and certificates and stores them in Barbican. Now
 there are two ways to access these keys while creating a bay.


  Rather than "the user generates the keys…", perhaps it might be better to
 word that as "the magnum client library code generates the keys for the
 user…".


It is the user here. In my opinion, there could be users who don't want to
use the magnum client but rather the APIs directly; in that case the user will
generate the keys themselves.

In our first implementation, we can support the user generating the keys,
and later the client generating the keys.


 1. Heat will access Barbican directly.
 While creating a bay, the user will provide this key and the heat templates
 will fetch this key from Barbican.


  I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.

Yes.


 2. Magnum-conductor accesses Barbican.
 While creating a bay, the user will provide this key and then
 magnum-conductor will fetch this key from Barbican and provide it to heat.

 Then heat will copy these files onto the kubernetes master node. The bay will
 then use this key to start the Kubernetes services signed with these keys.


  Make sure that the Barbican keys used by Heat and magnum-conductor to
 store the various TLS certificates/keys are unique per tenant and per bay,
 and are not shared among multiple tenants. We don’t want it to ever be
 possible to trick Magnum into revealing secrets belonging to other tenants.
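One simple way to guarantee that isolation is to scope every stored secret name by both tenant and bay, so a lookup can never reach across tenants — a sketch, with the naming convention invented here for illustration rather than taken from Magnum:

```python
def bay_secret_name(tenant_id, bay_uuid, kind):
    """Build a per-tenant, per-bay name for a stored TLS secret,
    with kind in e.g. ('ca_cert', 'tls_cert', 'tls_key'). The
    magnum/ prefix is an assumed convention, not an existing one."""
    return 'magnum/{}/{}/{}'.format(tenant_id, bay_uuid, kind)

# The same bay name under two different tenants never collides.
a = bay_secret_name('tenant-a', 'bay-1', 'tls_key')
b = bay_secret_name('tenant-b', 'bay-1', 'tls_key')
```

Any retrieval path (Heat or magnum-conductor) would then derive the name from the authenticated tenant, never from user-supplied input alone.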


Yes, I will take care of it.

After discussion, when we all come to the same point, I will create separate
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

 Please provide your suggestions if any.


 Thanks for kicking off this discussion.


  Regards,

  Adrian



  Regards,
  Madhuri

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Kevin,

 On Jun 15, 2015, at 1:25 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 Why not just push the ssh keypair via cloud-init? It's more firewall-friendly.

Nova already handles the injection of the SSH key for us. I think you meant to 
suggest that we use cloud-init to inject the TLS keys, right?
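For reference, cloud-init can already place files on an instance at boot via its write_files module, which is one way the TLS material could land on the node without any ssh step — a sketch of the user-data a template might render (the path and the embedded content are placeholders):

```yaml
#cloud-config
write_files:
  - path: /etc/kubernetes/ssl/apiserver.pem
    owner: root:root
    permissions: '0600'
    content: |
      -----BEGIN CERTIFICATE-----
      ...certificate body rendered into the template...
      -----END CERTIFICATE-----
```

Note that anything delivered this way transits the metadata service or config drive, which is part of the trade-off being discussed in this thread.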

Thanks,

Adrian

 Having the controller reach instances via ssh has proven very problematic for 
 us for a lot of projects. :/
 
 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Monday, June 15, 2015 11:18 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Tom,
 
 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:
 
 My main issue with having the user generate the keys/certs for the kube nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican 
 can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.
 
 When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create. We can use scp to securely 
 transfer the keys over the wire using that keypair.
 
 I also agree that we should opt for a non-Barbican deployment first.
 
 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.
 
 An alternative to that where the user holds the CA cert/key, is to have the 
 user:
 
 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.
 
  This is going to be quite manual unless they have a CA that the kube nodes
  can call into. However, this is the most secure way I could come up with.
 
 Perhaps we can expose a “replace keys” feature that could be used to 
 facilitate this after initial setup of the bay. This way you could establish 
 a trust that excludes the administrator. This approach potentially lends 
 itself to additional automation to make the replacement process a bit less 
 manual.
 
 Thanks,
 
 Adrian
 
 
 Tom
 
 On 15/06/15 17:52, Egor Guz wrote:
  +1 for non-Barbican support first; unfortunately Barbican is not very well 
  adopted in existing installations.
 
  Madhuri, also please keep in mind we should come up with a solution which 
  works with Swarm and Mesos as well in the future.
 
 —
 Egor
 

[openstack-dev] [Magnum] TLS Support in Magnum

2015-06-14 Thread Madhuri Rai
Hi All,

This is to bring the blueprint  secure-kubernetes
https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in
discussion.
I have been trying to figure out what could be the possible change area to
support this feature in Magnum. Below is just a rough idea on how to
proceed further on it.

This task can be further broken into smaller pieces.

*1. Add support for TLS in python-k8sclient.*
The current auto-generated code doesn't support TLS. So this work will be
to add TLS support in kubernetes python APIs.

*2. Add support for Barbican in Magnum.*
Barbican will be used to store all the keys and certificates.

*3. Add support for TLS in Magnum.*
This work mainly involves supporting the use of keys and certificates in
magnum to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now
there are two ways to access these keys while creating a bay.

1. Heat will access Barbican directly.
While creating a bay, the user will provide this key and the heat templates
will fetch this key from Barbican.


2. Magnum-conductor accesses Barbican.
While creating a bay, the user will provide this key and then
magnum-conductor will fetch this key from Barbican and provide it to heat.

Then heat will copy these files onto the kubernetes master node. The bay will
then use this key to start the Kubernetes services signed with these keys.
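In a Heat template, that copy step could look roughly like the fragment below — a hedged sketch using Heat's OS::Heat::CloudConfig resource, with the parameter names invented here rather than taken from Magnum's actual templates:

```yaml
parameters:
  tls_cert:
    type: string
    description: PEM-encoded certificate retrieved from Barbican
  tls_key:
    type: string
    hidden: true

resources:
  kube_master_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /etc/kubernetes/ssl/server.pem
            permissions: '0600'
            content: {get_param: tls_cert}
          - path: /etc/kubernetes/ssl/server-key.pem
            permissions: '0600'
            content: {get_param: tls_key}
```

The rendered config would then be attached to the kubernetes master's server resource as user_data, with marking tls_key as hidden keeping it out of stack-show output.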


After discussion, when we all come to the same point, I will create separate
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

Please provide your suggestions if any.


Regards,
Madhuri