+1 to Hongbin’s concerns about exposing passwords. I think we should start 
with a dedicated k8s user in the Magnum config and move to Keystone domains afterward.

I am just wondering how the Kuryr team is planning to solve a similar issue (I 
believe the libnetwork driver requires Neutron credentials). Can someone comment on it?

—
Egor

From: "Steven Dake (stdake)" <std...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Sunday, September 20, 2015 at 19:34
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hongbin,

I believe the domain approach is the preferred approach for the solution long 
term.  It will require more R&D to execute than other options, but it will also 
be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the input password will be 
exposed to users in the same tenant (since the password is passed as a stack 
parameter, which is visible within the tenant). If users are not admins, they don’t 
have the privilege to create a temp user. As a result, users would have to expose 
their own passwords to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user that is dedicated to 
communication between k8s and the Neutron load balancer service. The password of 
that user can be written into the config file, picked up by the conductor, and 
passed to Heat. The drawback is that there is no multi-tenancy for the OpenStack 
load balancer service, since all bays would share the same credential.
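
As a sketch, the dedicated service user might be configured roughly like this 
(the section and option names below are purely illustrative assumptions, not 
actual Magnum options):

```ini
# Hypothetical magnum.conf fragment for a shared k8s -> Neutron
# service user; all names here are illustrative, not real options.
[kubernetes_lb]
username = k8s-lb-service
tenant_name = services
password = SERVICE_USER_PASSWORD
auth_url = http://keystone.example.com:5000/v2.0
```

The conductor would read these values and hand them to Heat as stack 
parameters, as described above.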

Another solution I can think of is to have Magnum create a Keystone domain 
[1] for each bay (using the admin credential in the config file) and assign the 
bay’s owner to that domain. As a result, the user will have the privilege to 
create a bay user within that domain. It seems Heat supports native Keystone 
resources [2], which makes the administration of Keystone users much easier. The 
drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
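
For illustration, the domain-per-bay idea with Heat's native Keystone resources 
might look roughly like the following HOT fragment (resource and property names 
are assumptions based on [2]; availability depends on the Heat version):

```yaml
heat_template_version: 2015-04-30

resources:
  # A dedicated Keystone domain created per bay
  bay_domain:
    type: OS::Keystone::Domain
    properties:
      name: bay-example-domain
      enabled: true

  # A bay-scoped user living inside that domain
  bay_user:
    type: OS::Keystone::User
    properties:
      name: bay-example-user
      domain: { get_resource: bay_domain }
      enabled: true
```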

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancers in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestions.
First, let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel); they can 
access each other, but they cannot be accessed from the external network. The 
way to publish an endpoint to the external network is by specifying this 
attribute in your service manifest:
type: LoadBalancer
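
For reference, a minimal service manifest using this attribute could look like 
the following (field names reflect the k8s v1-era API and may differ across 
versions; the service and app names are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  # Ask the cloud provider (Neutron LBaaS here) for a load balancer
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # port exposed on the load balancer
      targetPort: 8080  # port the pods listen on
```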
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, and monitor. The user would associate the VIP with a floating IP, 
and then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential, which is stored in a 
config file on the master node. This includes the username, tenant name, and 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
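
That config file might look roughly like this (key names are from memory of the 
k8s OpenStack cloud provider of that era; verify against the provider source 
before relying on them):

```ini
[Global]
auth-url = http://keystone.example.com:5000/v2.0
username = demo
password = USER_PASSWORD
tenant-name = demo
region = RegionOne

[LoadBalancer]
# Neutron subnet whose ports the pool members will use
subnet-id = SUBNET_UUID
```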
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, 
but this would require a sizeable change upstream in k8s. We have good reason to 
pursue this, but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, the Magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.
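
One detail worth noting for step 2: HOT templates support a `hidden` flag on 
parameters, so the password can at least be masked in stack displays (though a 
stack parameter is still recoverable by users in the same tenant). The 
parameter name below is illustrative:

```yaml
parameters:
  neutron_password:
    type: string
    description: Password used by k8s to reach Neutron LBaaS
    # Mask the value in stack show / event output; Heat still
    # stores it, and tenant users who can read the stack's
    # parameters can still retrieve it.
    hidden: true
```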

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a better solution. So, leaving aside the issue 
of how k8s should be changed, the question is: is this approach reasonable for 
the time being, or is there a better approach?

Ton Ngo,

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
