Subject: Re: [openstack-dev] [magnum] Handling password for k8s
Hi Vikas,

It's correct that once the password is saved in the k8s master node, it
would have the same security as the nova instance. The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Handling password for k8s
>
> Hi Ton,
>
> If I understand your proposal correctly, it means the inputted password
> will be exposed to users in the same tenant (since the password is
> passed as a stack parameter).
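[Editorial note on the exposure Hongbin describes: Heat lets a template author mark a parameter as hidden, which masks its value in stack API responses. A minimal HOT fragment as a sketch -- the parameter name and description here are illustrative, not from the thread:]

```yaml
# Fragment of a Heat template (HOT). Marking the parameter as
# "hidden" masks its value in stack-show / API responses, so
# same-tenant users querying the stack do not see the plaintext.
heat_template_version: 2015-04-30

parameters:
  kube_password:        # illustrative name
    type: string
    hidden: true        # value is masked in API output
    description: Password injected into the k8s master
```

[Whether this alone addresses the concern depends on where else the value travels along the magnum/heat chain.]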
Hi Ton,

kube-masters will be nova instances only, and because any access to
nova-instances is already secured using keystone, I am not able to
understand what the concerns are in storing the password on the
master nodes.

Can you please list the concerns with our current approach?

-Vikas Choudhary
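[Editorial note: a minimal sketch of the approach Vikas describes -- persisting the password on the master node and relying on instance-level access control, with owner-only file permissions as the usual hardening. The path and function name are hypothetical, not from magnum's code:]

```python
import os
import stat
import tempfile

def write_credentials(path, password):
    """Write the password to disk with owner-only permissions (0600),
    so only the owning service user on the master node can read it.
    Hypothetical helper for illustration, not magnum code."""
    # Create the file already restricted, then chmod to be explicit
    # regardless of the process umask.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(password)
    os.chmod(path, 0o600)

# Demo against a temp path; a real master would use a fixed location.
path = os.path.join(tempfile.mkdtemp(), "os_password")
write_credentials(path, "s3cret")
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

[This protects the value at rest on the instance; it does not address the exposure while the value is in flight through magnum and heat, which is the point under discussion.]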
To: openstack-dev@lists.openstack.org
Date: 09/20/2015 09:02 PM
Subject: [openstack-dev] [magnum] Handling password for k8s
Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like to
run this by the team for feedback/suggestion.
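[Editorial note for readers less familiar with the feature in question: a k8s service of type LoadBalancer asks the cloud provider plugin running on the master to provision a load balancer via the cloud's APIs -- which is why the master needs OpenStack credentials at all. An illustrative manifest, with names not taken from the thread:]

```yaml
# Illustrative k8s service manifest. Setting type to LoadBalancer
# causes the OpenStack cloud provider on the master to call the
# OpenStack APIs, hence the credential requirement discussed here.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
```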
First let me give a little background for context. In the current k8s
cluster, all k8s pods