Vladimir,
What Andrew is saying is that we should copy specific keys only to
specific roles, and it's easy to do even now: just create several
role-specific tasks and copy the required keys.
A deployment engineer who knows which keys are required for which roles
can do that.
What you are saying is we
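A minimal sketch of that idea (role and key names here are invented for illustration; this is not Fuel's actual task format):

```python
# Hypothetical sketch: map roles to the keys they need, so a
# role-specific task copies only those keys to matching nodes.
# Role and key names are illustrative, not Fuel's real ones.

ROLE_KEYS = {
    "controller": {"ssl_cert", "ssh_host_key"},
    "ceph-osd": {"ceph_admin_key"},
    "mongo": {"mongo_key"},
}

def keys_for_node(roles):
    """Return the minimal set of keys a node with these roles needs."""
    needed = set()
    for role in roles:
        needed |= ROLE_KEYS.get(role, set())
    return needed

# A combined controller+mongo node gets only those two roles' keys:
print(sorted(keys_for_node(["controller", "mongo"])))
```

The point is that secrets are selected per role up front, so a node never receives a key its roles don't require.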
Andrew
+1 to it - I raised these concerns with the guys: we should not ship data
to tasks that do not need it. It will allow us to increase security for the
pluggable architecture.
On Fri, Feb 13, 2015 at 9:57 PM, Andrew Woodward xar...@gmail.com wrote:
+1 to Andrew
This is actually what we want to do with SSL keys.
On Wed, Feb 11, 2015 at 3:26 AM, Andrew Woodward xar...@gmail.com wrote:
Andrew,
It looks like what you've described is already done for ssh keys [1].
[1] https://review.openstack.org/#/c/149543/
On Fri, Feb 13, 2015 at 6:12 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:
Cool, You guys read my mind o.O
RE: the review. We need to avoid copying the secrets to nodes that don't
require them. I think it might be too soon to build granular tasks for
this, but we need to move that way.
Also, how are the astute tasks read into the environment? Same as
We need to be highly security-conscious here: doing this in an insecure
manner is a HUGE risk, so rsync over ssh (or scp) from the master node is
usually OK, but the bare rsync protocol from a node in the cluster would be
BAD (it leaves the certs exposed on a weak service).
I could see this being
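To make the contrast concrete, here is a minimal sketch that just builds the two command lines being compared; paths and hostnames are invented:

```python
# Hypothetical sketch of the two transports discussed above: rsync
# tunnelled over ssh from the master (encrypted, authenticated) versus
# the bare rsync:// daemon protocol (plaintext, weakly authenticated).
# Paths and hostnames are invented for illustration.

def rsync_over_ssh(src, node, dst):
    """The acceptable variant: ssh carries the data."""
    return ["rsync", "-a", "-e", "ssh", src, f"{node}:{dst}"]

def bare_rsync(src, node, dst):
    """The risky variant: certs transit a weak, unencrypted service."""
    return ["rsync", "-a", src, f"rsync://{node}{dst}"]

print(" ".join(rsync_over_ssh("/var/lib/fuel/keys/env-1/haproxy.pem",
                              "node-1", "/etc/haproxy/cert.pem")))
```

The only difference is the transport, which is exactly why the first form is acceptable and the second leaves the certs exposed.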
Dmitry, Evgeniy
This is exactly what I was talking about when I mentioned serializers for
tasks - taking data from 3rd-party sources if the user wants. In this case
the user will be able to generate some data somewhere and fetch it using
this code that we import.
On Thu, Jan 29, 2015 at 12:08 AM,
Evgeniy
This is not about layers - it is about how we get data. And we need to
separate data sources from the way we manipulate them. Thus, sources may be:
1) the Nailgun DB, 2) the user's inventory system, 3) open data, like a list
of Google DNS servers. Then all this data is aggregated and transformed
somehow.
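The separation Vladimir describes might be sketched as follows; only the source categories come from the mail, while every function and field name here is invented:

```python
# Hypothetical sketch: keep data *sources* separate from the code that
# aggregates and transforms them. The three sources mirror the examples
# above (Nailgun, a user's inventory system, open data); all names and
# values are made up for illustration.

def from_nailgun():
    return {"mgmt_vip": "10.20.0.2"}

def from_user_inventory():
    return {"dns_servers": ["172.16.0.5"]}

def from_open_data():
    return {"dns_servers": ["8.8.8.8", "8.8.4.4"]}

def aggregate(sources):
    """Merge source dicts; later sources extend list-valued keys."""
    merged = {}
    for source in sources:
        for key, value in source().items():
            if isinstance(value, list):
                merged.setdefault(key, []).extend(value)
            else:
                merged[key] = value
    return merged

print(aggregate([from_nailgun, from_user_inventory, from_open_data]))
```

Swapping a source in or out then touches nothing in the aggregation step, which is the point of keeping the two concerns apart.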
Vladimir,
It's not clear how that is going to help. You can generate keys with one
task and then upload them with another task, so why do we need
another layer/entity here?
Thanks,
On Thu, Jan 29, 2015 at 11:54 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:
Vladimir,
1) Nailgun DB
Just a small note: we should not provide access to the database, as this
approach has serious issues; what we can do is provide this information,
for example, via a REST API.
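That could look roughly like the sketch below; the endpoint path is invented for illustration and is not Nailgun's actual API:

```python
# Hypothetical sketch: a serializer asks a REST API for cluster data
# instead of querying the Nailgun DB directly. The endpoint path is
# made up; no real Nailgun API route is assumed.
import json
import urllib.request

def cluster_attributes_url(base_url, cluster_id):
    return f"{base_url}/api/clusters/{cluster_id}/attributes"

def fetch_cluster_attributes(base_url, cluster_id):
    """Fetch and decode the JSON the API serves for one cluster."""
    url = cluster_attributes_url(base_url, cluster_id)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode())

print(cluster_attributes_url("http://10.20.0.2:8000", 1))
```

The API then stays free to change its schema without breaking every consumer that would otherwise read the tables directly.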
What you are saying is already implemented in any deployment tool; for
example, let's take a look at
Evgeniy,
I am not suggesting going to the Nailgun DB directly. There obviously should
be some layer between a serializer and the DB.
On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:
Hi Dmitry,
I'm not sure we should use the approach where the task executor reads
some data from the file system; ideally Nailgun should push
all of the required data to Astute.
But that can be tricky to implement, so I vote for the 2nd approach.
Thanks,
On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko
Thank you guys for the quick response.
Then, if there is no better option, we will go with the second approach.
On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:
Hi.
I vote for the second option, because if we want to implement some unified
hierarchy (like Fuel as a CA for keys on controllers across different envs)
then it will fit better than the other options. If we implement the 3rd
option then we will reinvent the wheel with SSL in the future. Bare rsync
as storage
The 3rd option is about using rsyncd running under xinetd on the primary
controller. And yes, the main concern here is security.
On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:
Hi folks,
I want to discuss the way we are working with generated keys for
nova/ceph/mongo and some other services.
Right now we are generating keys on the master node itself, and then
distributing them to all nodes over the mcollective transport. As you may
know, we are in the process of making this process
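For concreteness, the generation step described above might look roughly like this; the directory layout, names, and key format are all invented for illustration:

```python
# Hypothetical sketch of the current flow the post describes: keys are
# generated on the master node, one per service per environment, before
# being distributed to the nodes. Layout and names are made up.
import os
import secrets
import tempfile

def generate_key(keys_dir, env_id, service):
    """Write a fresh random key under <keys_dir>/<env_id>/<service>.key."""
    path = os.path.join(keys_dir, str(env_id), f"{service}.key")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(secrets.token_hex(32))
    return path

# Demo against a throwaway directory instead of a real master node:
print(generate_key(tempfile.mkdtemp(), 1, "mongodb"))
```

The open question in the thread is not this generation step but how the resulting files get to the nodes securely.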