Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-21 Thread Ton Ngo

Another option is for Magnum to do all the necessary setup and leave the
final step of editing the config file with the password to the user.  The
load balancer feature would then be disabled by default, and we would
provide instructions for the user to enable it.  This would circumvent the
issue of handling the password and would actually match the intended usage
in k8s.
Ton Ngo,



From:   Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   09/20/2015 09:57 PM
Subject:    Re: [openstack-dev] [magnum] Handling password for k8s



Hi Vikas,
It's correct that once the password is saved in the k8s master node, it
would have the same security as the nova instance. The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum and heat. Users in the same tenant can potentially see the
password of the user who creates the cluster. The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually so it is reasonable to configure with one OpenStack user
credential. With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin, Steve for the suggestion. If we don't see any fundamental
flaw, we can proceed with the initial sub-optimal implementation and refine
it later with the service domain implementation.

Ton Ngo,



From: Vikas Choudhary <choudharyvika...@gmail.com>
To: openstack-dev@lists.openstack.org
Date: 09/20/2015 09:02 PM
Subject: [openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances only, and because any access to
nova instances is already secured using keystone, I am not able to
understand the concerns in storing the password on the master nodes.
Can you please list the concerns with our current approach?
-Vikas Choudhary
Hi everyone,
I am running into a potential issue in implementing the support for load
balancer in k8s services. After a chat with sdake, I would like to run
this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from the external
network. The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor. The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node. This includes the username, tenant name,
password. When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s. We have good
reason to pursue this but it will take time.
For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
   (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the
   heat templates
   3. When configuring the master node, the password is saved in the
   config file for k8s services.
   4. Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-21 Thread 王华
I think it is the same case with docker registry v2: user credentials are
needed in the docker registry v2 config file. We can use the same user in all
bays, but delegate a different trust [1] to it for each bay. The user should
have no roles of its own; it would only work through trusts.

[1] https://wiki.openstack.org/wiki/Keystone/Trusts
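(For reference, a trust of that kind would be created against the Keystone v3 OS-TRUST API with a request body along these lines. This is a hedged sketch: the IDs and role name are placeholders, and the exact fields should be checked against the Keystone documentation.)

```python
import json

# Sketch of a Keystone v3 trust-create request body
# (POST /v3/OS-TRUST/trusts). All IDs and the role name are
# placeholders, not real values.
trust_body = {
    "trust": {
        "trustor_user_id": "<bay-owner-user-id>",      # the delegating user
        "trustee_user_id": "<shared-magnum-user-id>",  # the user baked into every bay
        "project_id": "<bay-project-id>",              # scope of the delegation
        "impersonation": True,                         # act on behalf of the trustor
        "roles": [{"name": "Member"}],                 # roles delegated via the trust
    }
}

print(json.dumps(trust_body, indent=2))
```

The point of the scheme above is that the shared user carries no roles directly; it only gains the trustor's roles, per bay, through the trust.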

Regards
Wanghua

On Mon, Sep 21, 2015 at 10:34 AM, Steven Dake (stdake) <std...@cisco.com>
wrote:

> Hongbin,
>
> I believe the domain approach is the preferred approach for the solution
> long term.  It will require more R&D to execute than other options, but it
> would also be completely secure.
>
> Regards
> -steve
>
>
> From: Hongbin Lu <hongbin...@huawei.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Sunday, September 20, 2015 at 4:26 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Handling password for k8s
>
> Hi Ton,
>
>
>
> If I understand your proposal correctly, it means the input password
> will be exposed to users in the same tenant (since the password is passed
> as a stack parameter, which is exposed within the tenant). If users are not
> admin, they don’t have the privilege to create a temp user. As a result,
> users have to expose their own password to create a bay, which is suboptimal.
>
>
>
> A slight amendment is to have the operator create a user dedicated to
> communication between k8s and the neutron load balancer service. The
> password of this user can be written into the config file, picked up by the
> conductor and passed to heat. The drawback is that there is no
> multi-tenancy for the openstack load balancer service, since all bays will
> share the same credential.
>
>
>
> Another solution I can think of is to have magnum create a keystone
> domain [1] for each bay (using the admin credential in the config file), and
> assign the bay’s owner to that domain. As a result, the user will have the
> privilege to create a bay user within that domain. It seems Heat supports
> native keystone resources [2], which makes the administration of keystone
> users much easier. The drawback is that the implementation is more complicated.
>
>
>
> [1] https://wiki.openstack.org/wiki/Domains
>
> [2]
> http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Ton Ngo [mailto:t...@us.ibm.com <t...@us.ibm.com>]
> *Sent:* September-20-15 2:08 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [magnum] Handling password for k8s
>
>
>
> Hi everyone,
> I am running into a potential issue in implementing the support for load
> balancer in k8s services. After a chat with sdake, I would like to run this
> by the team for feedback/suggestion.
> First let me give a little background for context. In the current k8s
> cluster, all k8s pods and services run within a private subnet (on Flannel)
> and they can access each other but they cannot be accessed from external
> network. The way to publish an endpoint to the external network is by
> specifying this attribute in your service manifest:
> type: LoadBalancer
> Then k8s will talk to OpenStack Neutron to create the load balancer pool,
> members, VIP, monitor. The user would associate the VIP with a floating IP
> and then the endpoint of the service would be accessible from the external
> internet.
> To talk to Neutron, k8s needs the user credential and this is stored in a
> config file on the master node. This includes the username, tenant name,
> password. When k8s starts up, it will load the config file and create an
> authenticated client with Keystone.
> The issue we need to find a good solution for is how to handle the
> password. With the current effort on security to make Magnum
> production-ready, we want to make sure to handle the password properly.
> Ideally, the best solution is to pass the authenticated token to k8s to
> use, but this will require sizeable change upstream in k8s. We have good
> reason to pursue this but it will take time.
> For now, my current implementation is as follows:
>
>1. In a bay-create, magnum client adds the password to the API call
>(normally it authenticates and sends the token)
>2. The conductor picks it up and uses it as an input parameter to the
>heat templates
>3. When configuring the master node, the password is saved in the
>config file for k8s services.
>4. Magnum does not store the password internally.
>
>
> This is probably not ideal, but it would let us proceed for now. We can
> deprecate it later when we have a better solution. So leaving aside the
> issue of how k8s should be changed, the question is: is this approach
> reasonable for the time, or is there a better approach?

Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Hongbin Lu
Hi Ton,

If I understand your proposal correctly, it means the input password will be 
exposed to users in the same tenant (since the password is passed as a stack 
parameter, which is exposed within the tenant). If users are not admin, they don't 
have the privilege to create a temp user. As a result, users have to expose their 
own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user dedicated to 
communication between k8s and the neutron load balancer service. The password of 
this user can be written into the config file, picked up by the conductor and 
passed to heat. The drawback is that there is no multi-tenancy for the openstack 
load balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum create a keystone domain 
[1] for each bay (using the admin credential in the config file), and assign the 
bay's owner to that domain. As a result, the user will have the privilege to 
create a bay user within that domain. It seems Heat supports native keystone 
resources [2], which makes the administration of keystone users much easier. The 
drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
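(As a sketch of [2], a bay's user could be declared natively in a Heat template along these lines. The resource type and properties follow my reading of the kilo spec; names, values, and the exact resource schemas are illustrative and would need to be verified.)

```yaml
# Sketch of a HOT snippet creating a per-bay keystone user with Heat's
# native keystone resources. Resource/property names are illustrative.
heat_template_version: 2015-04-30
parameters:
  bay_user_password:
    type: string
    hidden: true          # keep the value out of stack output
resources:
  bay_user:
    type: OS::Keystone::User
    properties:
      name: bay-demo-user          # placeholder
      domain: bay-demo-domain      # placeholder per-bay domain
      password: { get_param: bay_user_password }
```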

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel) and they can 
access each other but they cannot be accessed from external network. The way to 
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, 
but this will require sizeable change upstream in k8s. We have good reason to 
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a better solution. So leaving aside the issue 
of how k8s should be changed, the question is: is this approach reasonable for 
the time, or is there a better approach?

Ton Ngo,


Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Steven Dake (stdake)
Hongbin,

I believe the domain approach is the preferred approach for the solution long 
term.  It will require more R&D to execute than other options, but it would 
also be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the input password will be 
exposed to users in the same tenant (since the password is passed as a stack 
parameter, which is exposed within the tenant). If users are not admin, they don’t 
have the privilege to create a temp user. As a result, users have to expose their 
own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user dedicated to 
communication between k8s and the neutron load balancer service. The password of 
this user can be written into the config file, picked up by the conductor and 
passed to heat. The drawback is that there is no multi-tenancy for the openstack 
load balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum create a keystone domain 
[1] for each bay (using the admin credential in the config file), and assign the 
bay’s owner to that domain. As a result, the user will have the privilege to 
create a bay user within that domain. It seems Heat supports native keystone 
resources [2], which makes the administration of keystone users much easier. The 
drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel) and they can 
access each other but they cannot be accessed from external network. The way to 
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, 
but this will require sizeable change upstream in k8s. We have good reason to 
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a better solution. So leaving aside the issue 
of how k8s should be changed, the question is: is this approach reasonable for 
the time, or is there a better approach?

Ton Ngo,


[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Vikas Choudhary
Hi Ton,

kube-masters will be nova instances only, and because any access to
nova instances is already secured using keystone, I am not able
to understand the concerns in storing the password on the
master nodes.

Can you please list the concerns with our current approach?

-Vikas Choudhary

Hi everyone,
I am running into a potential issue in implementing the support for load
balancer in k8s services.  After a chat with sdake, I would like to run
this by the team for feedback/suggestion.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from the external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s.  We have good
reason to pursue this but it will take time.
For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
   (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the
   heat templates
   3. When configuring the master node, the password is saved in the
   config file for k8s services.
   4. Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So leaving aside
the issue of how k8s should be changed, the question is:  is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,


[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Vikas Choudhary
Thanks Hongbin.

I was not aware of stack-parameter visibility, so I was not able to
figure out the actual concerns in Ton's initial approach.

The keystone domain approach seems secure enough.

-Vikas



Hongbin,

I believe the domain approach is the preferred approach for the
solution long term.  It will require more R&D to execute than other
options, but it would also be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin.lu at huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev at lists.openstack.org>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted
password will be exposed to users in the same tenant (since the
password is passed as stack parameter, which is exposed within
tenant). If users are not admin, they don’t have privilege to create a
temp user. As a result, users have to expose their own password to
create a bay, which is suboptimal.

A slightly amendment is to have operator to create a user that is
dedicated for communication between k8s and neutron load balancer
service. The password of the user can be written into config file,
picked up by conductor and passed to heat. The drawback is that there
is no multi-tenancy for openstack load balancer service, since all
bays will share the same credential.

Another solution I can think of is to have magnum to create a keystone
domain [1] for each bay (using admin credential in config file), and
assign bay’s owner to that domain. As a result, the user will have
privilege to create a bay user within that domain. It seems Heat
supports native keystone resource [2], which makes the administration
of keystone users much easier. The drawback is the implementation is
more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like
to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on
Flannel) and they can access each other but they cannot be accessed
from external network. The way to publish an endpoint to the external
network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor. The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible
from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored
in a config file on the master node. This includes the username,
tenant name, password. When k8s starts up, it will load the config
file and create an authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password
properly.
Ideally, the best solution is to pass the authenticated token to k8s
to use, but this will require sizeable change upstream in k8s. We have
good reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to
the heat templates
  3.  When configuring the master node, the password is saved in the
config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So leaving
aside the issue of how k8s should be changed, the question is: is this
approach reasonable for the time, or is there a better approach?

Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Ton Ngo

Hi Vikas,
 It's correct that once the password is saved in the k8s master node,
it would have the same security as the nova instance.  The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum and heat.  Users in the same tenant can potentially see the
password of the user who creates the cluster.  The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually so it is reasonable to configure with one OpenStack user
credential.  With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin, Steve for the suggestion.  If we don't see any fundamental
flaw, we can proceed with the initial sub-optimal implementation and refine
it later with the service domain implementation.

Ton Ngo,




From:   Vikas Choudhary <choudharyvika...@gmail.com>
To: openstack-dev@lists.openstack.org
Date:   09/20/2015 09:02 PM
Subject:    [openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances only, and because any access to
nova instances is already secured using keystone, I am not able to
understand the concerns in storing the password on the master nodes.
Can you please list the concerns with our current approach?
-Vikas Choudhary
Hi everyone,
I am running into a potential issue in implementing the support for load
balancer in k8s services.  After a chat with sdake, I would like to run
this by the team for feedback/suggestion.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from the external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s.  We have good
reason to pursue this but it will take time.
For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
   (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the
   heat templates
   3. When configuring the master node, the password is saved in the
   config file for k8s services.
   4. Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So leaving aside
the issue of how k8s should be changed, the question is:  is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,


[openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Ton Ngo


Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestion.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
type: LoadBalancer
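(For context, a full service manifest using this attribute would look roughly like the sketch below. The service name, port, and selector are made-up placeholders, not anything from Magnum or this thread.)

```yaml
# Sketch of a k8s Service manifest requesting an external load balancer.
# Name, port, and selector are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  type: LoadBalancer   # the attribute discussed above
  ports:
  - port: 80
  selector:
    app: my-web
```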
   Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
   To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
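(As an illustration of what lands on the master node, here is a sketch of writing such a config file with Python's configparser. The [Global] section and key names are my assumption about the k8s OpenStack provider's format at the time; all values are placeholders.)

```python
import configparser
import io

# Sketch of the k8s OpenStack cloud-provider config written to the master
# node. Section/key names are assumed; values are placeholders.
cfg = configparser.ConfigParser()
cfg["Global"] = {
    "auth-url": "http://keystone.example.com:5000/v2.0",
    "username": "demo-user",
    "tenant-name": "demo-tenant",
    "password": "s3cret",  # the credential this thread is about
}

buf = io.StringIO()
cfg.write(buf)         # render the ini-style file contents
print(buf.getvalue())
```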
The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s.  We have good
reason to pursue this but it will take time.
For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
   (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the heat
   templates
   3. When configuring the master node, the password is saved in the config
   file for k8s services.
   4. Magnum does not store the password internally.
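(The steps above could be sketched as follows. This is purely illustrative Python with hypothetical names, not Magnum code; it just shows where the password travels and that Magnum itself keeps no copy.)

```python
# Illustrative sketch of the proposed password flow:
# client -> API -> conductor -> heat parameter -> master-node config file,
# with Magnum never persisting the password itself.

def bay_create_request(name, password):
    # Step 1: the magnum client adds the password to the API call
    # (normally it would authenticate and send only a token).
    return {"name": name, "password": password}

def conductor_to_heat(request):
    # Step 2: the conductor passes the password through as a heat stack
    # parameter (this pass-through is what exposes it within the tenant).
    return {"stack_name": request["name"],
            "parameters": {"user_password": request["password"]}}

def write_master_config(heat_params):
    # Step 3: on the master node, the password lands in the k8s config file.
    return "password=%s\n" % heat_params["parameters"]["user_password"]

def magnum_db_record(request):
    # Step 4: Magnum stores the bay metadata, but not the password.
    return {"name": request["name"]}

req = bay_create_request("demo-bay", "s3cret")
params = conductor_to_heat(req)
node_config = write_master_config(params)
record = magnum_db_record(req)
assert "password" not in record  # Magnum keeps no copy
```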

This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So leaving aside
the issue of how k8s should be changed, the question is:  is this approach
reasonable for the time, or is there a better approach?

Ton Ngo,
