Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-18 Thread Evgeniy L
Vladimir,

What Andrew is saying is that we should copy specific keys to specific
roles, and that is easy to do even now: just create several role-specific
tasks that copy the required keys. A deployment engineer who knows which
keys are required for which roles can do that.

What you are saying is that we should have some way to restrict a task
from getting information it wants. That is a separate, huge topic, because
it requires policies which a plugin developer must declare to get access to
the data, as is done for iOS/Android. Also, sandboxing is not easy to do
when a task can execute any shell command on any node.
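As an illustration only (the schema below is an assumption, not the exact
granular-task format), such a role-specific task could look roughly like:

```yaml
# Hypothetical role-scoped task: only the listed roles receive these keys.
# Field names, the generate_keys dependency, and the {MASTER_IP}/{CLUSTER_ID}
# placeholders are illustrative assumptions.
- id: copy_ceph_keys
  type: shell
  role: ['controller', 'ceph-osd']
  requires: [generate_keys]
  parameters:
    cmd: rsync -a -e ssh root@{MASTER_IP}:/etc/fuel/keys/{CLUSTER_ID}/ceph/ /var/lib/astute/ceph/
    timeout: 180
```

Each role would get its own task like this, listing only the keys it
actually needs.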

Thanks,


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-18 Thread Vladimir Kuklin
Andrew

+1 to it - I raised these concerns with the guys: we should not ship data
to tasks that do not need it. That will let us increase security for the
pluggable architecture.

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Vladimir Kuklin
+1 to Andrew

This is actually what we want to do with SSL keys.

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Evgeniy L
Andrew,

It looks like what you've described is already done for ssh keys [1].

[1] https://review.openstack.org/#/c/149543/

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-13 Thread Andrew Woodward
Cool, you guys read my mind o.O

RE: the review. We need to avoid copying the secrets to nodes that don't
require them. It might be too soon to build granular tasks for this, but we
need to move that way.

Also, how are the Astute tasks read into the environment? The same as the
others?

 fuel rel --sync-deployment-tasks


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-02-10 Thread Andrew Woodward
We need to be highly security conscious here: doing this in an insecure
manner is a HUGE risk. rsync over ssh (or scp) from the master node is
usually OK, but the bare rsync protocol from a node in the cluster will be
BAD (it leaves the certs exposed via a weak service).

I could see this being implemented as an additional task type that can be
run on the Fuel master node instead of a target node. This could also be
useful for plugin writers that may need to access some external API as part
of their task graph. We'd need some way to make the generate task run once
per env, vs. the push-certs task, which runs for each role that has a cert
requirement.

we'd end up with something like
generate_certs:
  runs_from: master_once
  provider: whatever
push_certs:
  runs_from: master
  provider: bash
  role: [*]

On Thu, Jan 29, 2015 at 2:07 PM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Evgeniy,

 I am not suggesting going to the Nailgun DB directly. There obviously
 should be some layer between a serializer and the DB.

 On Thu, Jan 29, 2015 at 9:07 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

  1) Nailgun DB

 Just a small note: we should not provide access to the database. That
 approach has serious issues; what we can do instead is provide this
 information via, for example, the REST API.

 What you are describing is already implemented in deployment tools; for
 example, let's take a look at Ansible [1]. There you can create a task
 which stores the result of an executed shell command in a variable, and
 then reuse it in any other task. I think we should use this approach.

 [1] http://docs.ansible.com/playbooks_variables.html#registered-variables
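A minimal sketch of that registered-variable pattern in an Ansible
playbook (host group, command, and paths here are placeholders):

```yaml
# Store the output of a shell task and reuse it in a later task.
- hosts: controllers
  tasks:
    - name: Generate a key
      command: openssl rand -hex 32
      register: generated_key        # result saved into a variable

    - name: Write the key where another task expects it
      copy:
        content: "{{ generated_key.stdout }}"
        dest: /etc/myapp/app.key
        mode: "0600"
```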

 On Thu, Jan 29, 2015 at 2:47 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Evgeniy

 This is not about layers - it is about how we get data. And we need to
 separate data sources from the way we manipulate them. Thus, the sources
 may be: 1) the Nailgun DB, 2) a user's inventory system, 3) open data,
 like a list of Google DNS servers. Then all this data is aggregated and
 transformed somehow. After that it is shipped to the deployment layer.
 That's how I see it.

 On Thu, Jan 29, 2015 at 2:18 PM, Evgeniy L e...@mirantis.com wrote:

 Vladimir,

 It's not clear how this is going to help. You can generate keys with one
 task and then upload them with another task; why do we need another
 layer/entity here?

 Thanks,


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Dmitry, Evgeniy

This is exactly what I was talking about when I mentioned serializers for
tasks - taking data from 3rd-party sources if the user wants. In this case
the user will be able to generate some data somewhere and fetch it using
this code that we import.

On Thu, Jan 29, 2015 at 12:08 AM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Thank you guys for the quick response.
 Then, if there is no better option, we will go with the second approach.

 On Wed, Jan 28, 2015 at 7:08 PM, Evgeniy L e...@mirantis.com wrote:

 Hi Dmitry,

 I'm not sure if we should use the approach where the task executor reads
 some data from the file system; ideally, Nailgun should push
 all of the required data to Astute.
 But it can be tricky to implement, so I vote for the 2nd approach.

 Thanks,

 On Wed, Jan 28, 2015 at 7:08 PM, Aleksandr Didenko adide...@mirantis.com
  wrote:

 The 3rd option is about using the rsyncd that we run under xinetd on the
 primary controller. And yes, the main concern here is security.

 On Wed, Jan 28, 2015 at 6:04 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:

 Hi.
 I vote for the second option, because if we ever want to implement some
 unified hierarchy (like Fuel as a CA for keys on controllers for different
 envs), it will fit better than the other options. If we implement the 3rd
 option, we will reinvent the wheel with SSL in the future. Bare rsync as
 storage for private keys sounds pretty uncomfortable to me.

 On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
  wrote:

 Hi folks,

 I want to discuss the way we are working with generated keys for
 nova/ceph/mongo and something else.

 Right now we are generating keys on master itself, and then
 distributing them by mcollective
 transport to all nodes. As you may know we are in the process of
 making this process described as
 task.

 There is a couple of options:
 1. Expose keys in rsync server on master, in folder /etc/fuel/keys,
 and then copy them with rsync task (but it feels not very secure)
 2. Copy keys from /etc/fuel/keys on master, to /var/lib/astute on
 target nodes. It will require additional
 hook in astute, smth like copy_file, which will copy data from file on
 master and put it on the node.

 Also there is 3rd option to generate keys right on primary-controller
 and then distribute them on all other nodes, and i guess it will be
 responsibility of controller to store current keys that are valid for
 cluster. Alex please provide more details about 3rd approach.

 Maybe there is more options?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Evgeniy

This is not about layers - it is about how we get data. And we need to
separate data sources from the way we manipulate them. Thus, sources may be:
1) the Nailgun DB, 2) the user's inventory system, 3) open data, like a list
of Google DNS servers. Then all this data is aggregated and transformed
somehow. After that it is shipped to the deployment layer. That's how I see it.


 

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Evgeniy L
Vladimir,

It's not clear how this is going to help. You can generate keys with one
task and then upload them with another task, so why do we need
another layer/entity here?

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Evgeniy L
Vladimir,

 1) Nailgun DB

Just a small note: we should not provide direct access to the database -
this approach has serious issues. What we can do is provide this
information, for example, via a REST API.

What you are describing is already implemented in most deployment tools;
for example, let's take a look at Ansible [1].

What you can do there is create a task which stores the result of an
executed shell command in a variable, and then reuse that variable in any
other task. I think we should use this approach.

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables
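For illustration, a minimal Ansible sketch of the registered-variables pattern described above (the task names, the key path, and the use of `openssl rand` are hypothetical, not part of any Fuel code):

```yaml
# Generate data in one task, register the result, reuse it in another.
- hosts: controllers
  tasks:
    - name: generate key material on one node      # hypothetical task
      command: openssl rand -hex 32
      register: generated_key
      run_once: true       # registered result is made available to all hosts

    - name: distribute the registered value to every node
      copy:
        content: "{{ generated_key.stdout }}"
        dest: /var/lib/astute/example.key          # hypothetical path
        mode: "0600"
```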

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-29 Thread Vladimir Kuklin
Evgeniy,

I am not suggesting going to the Nailgun DB directly. There should obviously
be some layer between a serializer and the DB.





 

Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Evgeniy L
Hi Dmitry,

I'm not sure we should use an approach where the task executor reads
some data from the file system; ideally, Nailgun should push
all of the required data to Astute.
But that can be tricky to implement, so I vote for the 2nd approach.

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Dmitriy Shulyak
Thank you guys for the quick response.
Then, if there is no better option, we will go with the second approach.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Stanislaw Bogatkin
Hi.
I vote for the second option, because if we ever want to implement some
unified hierarchy (like Fuel as a CA for keys on controllers across
different envs), it will fit better than the other options. If we implement
the 3rd option, we will reinvent the SSL wheel in the future. Bare rsync as
storage for private keys sounds pretty uncomfortable to me.

On Wed, Jan 28, 2015 at 6:44 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:

 Hi folks,

 I want to discuss the way we are working with the generated keys for
 nova/ceph/mongo and other services.

 Right now we are generating keys on the master itself, and then
 distributing them to all nodes over the mcollective transport. As you may
 know, we are in the process of describing this process as a task.

 There are a couple of options:
 1. Expose the keys via an rsync server on the master, in the folder
 /etc/fuel/keys, and then copy them with an rsync task (but that feels not
 very secure).
 2. Copy the keys from /etc/fuel/keys on the master to /var/lib/astute on
 the target nodes. It will require an additional hook in Astute, something
 like copy_file, which will read the data from a file on the master and put
 it on the node.

 Also there is a 3rd option: generate the keys right on the
 primary controller and then distribute them to all the other nodes; I
 guess it will then be the responsibility of the controller to store the
 current keys that are valid for the cluster. Alex, please provide more
 details about the 3rd approach.

 Maybe there are more options?
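The proposed copy_file hook from option 2 could look roughly like the sketch below. This is a language-agnostic illustration in Python (Astute itself is Ruby), and the function name, paths, and permission mode are assumptions, not the actual implementation:

```python
import os
import shutil
import tempfile


def copy_file(src_path, dst_path, mode=0o600):
    """Hypothetical copy_file hook: take a key file generated on the
    master (e.g. under /etc/fuel/keys) and place it on the target node
    (e.g. under /var/lib/astute) with restrictive permissions."""
    dst_dir = os.path.dirname(dst_path)
    os.makedirs(dst_dir, exist_ok=True)
    # Write to a temp file first, then rename atomically, so a partially
    # copied key never becomes visible at the destination path.
    fd, tmp_path = tempfile.mkstemp(dir=dst_dir)
    try:
        with os.fdopen(fd, "wb") as tmp, open(src_path, "rb") as src:
            shutil.copyfileobj(src, tmp)
        os.chmod(tmp_path, mode)
        os.replace(tmp_path, dst_path)
    except Exception:
        os.unlink(tmp_path)
        raise
```

The key point of the sketch is that the key material arrives with 0600 permissions and never exists in a half-written state on the node.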





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Distribution of keys for environments

2015-01-28 Thread Aleksandr Didenko
The 3rd option is about using the rsyncd that we run under xinetd on the
primary controller. And yes, the main concern here is security.
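For context, an rsyncd-under-xinetd setup typically looks something like the fragment below (the values are illustrative, not the actual Fuel configuration). The security concern is that the bare rsync daemon protocol is unencrypted, and its optional secrets-file authentication is far weaker than SSH:

```
# /etc/xinetd.d/rsync - illustrative sketch only
service rsync
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}
```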




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev