Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-13 Thread Nachi Ueno
Hi Clint

2014/1/10 Clint Byrum cl...@fewbar.com:
 Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
 Hi Flavio, Clint

 I agree with you guys. Sorry, maybe I wasn't clear. My opinion is that
 we should remove all configuration from the nodes, and that all
 configuration should be done via API from a central resource manager
 (nova-api or neutron-server, etc.).

 This is how new hosts are added in CloudStack, vCenter, and OpenStack:

 CloudStack: go to the web UI, add host/ID/password.
 http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

 vCenter: go to the vSphere client, add host/ID/password.
 https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

 OpenStack:
 - Manual
    - set up the MySQL connection config, RabbitMQ/Qpid connection
      config, Keystone config, Neutron config, ...
      http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

 We have some deployment systems, including Chef/Puppet, Packstack, and
 TripleO:
 - Chef/Puppet
    - set up a Chef node
    - add the node / apply a role
 - Packstack
    - generate an answer file
      https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
    - packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
 - TripleO
    - undercloud: nova baremetal node add
    - overcloud: modify the Heat template

 For residents of this mailing list, Chef/Puppet or a third-party tool
 is easy to use. However, I believe they look like magical tools to many
 operators. Furthermore, these deployment systems tend to take time to
 support the newest release, so for most users an OpenStack release does
 not mean something they can use right away.

 IMO, the current way of managing configuration is the cause of this
 issue. If we manage everything via API, we can manage the cluster from
 Horizon. Then the user can just go to Horizon and add a host.

 It may take time to migrate configs to APIs, so one easy first step is
 to convert the existing configs into API resources. That is the purpose
 of this proposal.


 Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
 do not lag the release. We run CD and will be in the gate real soon
 now so that TripleO should be able to fully deploy Icehouse on Icehouse
 release day.

yeah, I'm a big fan of TripleO and Tuskar.
However, it may be difficult to keep TripleO/Tuskar up to date with the
newest releases.

Let's say Nova and Neutron add a new function in the third milestone
(I3 for Icehouse); there is no way to support it in TripleO/Tuskar
right away. This is natural, because TripleO/Tuskar is a third-party
tool from the perspective of Nova or Neutron (same as Chef/Puppet).
IMO, the Tuskar API and the existing projects (Nova, Neutron) should be
integrated at the design level.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-13 Thread Clint Byrum
Excerpts from Nachi Ueno's message of 2014-01-13 10:35:07 -0800:
 Hi Clint
 
 2014/1/10 Clint Byrum cl...@fewbar.com:
  Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
  Hi Flavio, Clint

  I agree with you guys. Sorry, maybe I wasn't clear. My opinion is that
  we should remove all configuration from the nodes, and that all
  configuration should be done via API from a central resource manager
  (nova-api or neutron-server, etc.).

  This is how new hosts are added in CloudStack, vCenter, and OpenStack:

  CloudStack: go to the web UI, add host/ID/password.
  http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

  vCenter: go to the vSphere client, add host/ID/password.
  https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

  OpenStack:
  - Manual
     - set up the MySQL connection config, RabbitMQ/Qpid connection
       config, Keystone config, Neutron config, ...
       http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

  We have some deployment systems, including Chef/Puppet, Packstack, and
  TripleO:
  - Chef/Puppet
     - set up a Chef node
     - add the node / apply a role
  - Packstack
     - generate an answer file
       https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
     - packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
  - TripleO
     - undercloud: nova baremetal node add
     - overcloud: modify the Heat template

  For residents of this mailing list, Chef/Puppet or a third-party tool
  is easy to use. However, I believe they look like magical tools to many
  operators. Furthermore, these deployment systems tend to take time to
  support the newest release, so for most users an OpenStack release does
  not mean something they can use right away.

  IMO, the current way of managing configuration is the cause of this
  issue. If we manage everything via API, we can manage the cluster from
  Horizon. Then the user can just go to Horizon and add a host.

  It may take time to migrate configs to APIs, so one easy first step is
  to convert the existing configs into API resources. That is the purpose
  of this proposal.
 
 
  Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
  do not lag the release. We run CD and will be in the gate real soon
  now so that TripleO should be able to fully deploy Icehouse on Icehouse
  release day.
 
 yeah, I'm a big fan of TripleO and Tuskar.
 However, it may be difficult to keep TripleO/Tuskar up to date with the
 newest releases.

 Let's say Nova and Neutron add a new function in the third milestone
 (I3 for Icehouse); there is no way to support it in TripleO/Tuskar
 right away. This is natural, because TripleO/Tuskar is a third-party
 tool from the perspective of Nova or Neutron (same as Chef/Puppet).
 IMO, the Tuskar API and the existing projects (Nova, Neutron) should be
 integrated at the design level.
 

This is false. TripleO is the official OpenStack deployment program. It
is not a 3rd party tool. Of course sometimes TripleO may lag the same
way Heat may lag other integrated release components. But that is one
reason we have release meetings, blueprints, and summits, so that projects
like Heat and TripleO can be aware of what is landing in i3 and at least
attempt to have some support in place ASAP.

Trying to make this happen inside the individual projects, instead of
in projects dedicated to working well in this space, is a recipe for
frustration, and I don't believe it would lead to any less lag. People
would just land features with "FIXME: support config api".



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-13 Thread Nachi Ueno
2014/1/13 Clint Byrum cl...@fewbar.com:
 Excerpts from Nachi Ueno's message of 2014-01-13 10:35:07 -0800:
 Hi Clint

 2014/1/10 Clint Byrum cl...@fewbar.com:
  Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
  Hi Flavio, Clint

  I agree with you guys. Sorry, maybe I wasn't clear. My opinion is that
  we should remove all configuration from the nodes, and that all
  configuration should be done via API from a central resource manager
  (nova-api or neutron-server, etc.).

  This is how new hosts are added in CloudStack, vCenter, and OpenStack:

  CloudStack: go to the web UI, add host/ID/password.
  http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

  vCenter: go to the vSphere client, add host/ID/password.
  https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

  OpenStack:
  - Manual
     - set up the MySQL connection config, RabbitMQ/Qpid connection
       config, Keystone config, Neutron config, ...
       http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

  We have some deployment systems, including Chef/Puppet, Packstack, and
  TripleO:
  - Chef/Puppet
     - set up a Chef node
     - add the node / apply a role
  - Packstack
     - generate an answer file
       https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
     - packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
  - TripleO
     - undercloud: nova baremetal node add
     - overcloud: modify the Heat template

  For residents of this mailing list, Chef/Puppet or a third-party tool
  is easy to use. However, I believe they look like magical tools to many
  operators. Furthermore, these deployment systems tend to take time to
  support the newest release, so for most users an OpenStack release does
  not mean something they can use right away.

  IMO, the current way of managing configuration is the cause of this
  issue. If we manage everything via API, we can manage the cluster from
  Horizon. Then the user can just go to Horizon and add a host.

  It may take time to migrate configs to APIs, so one easy first step is
  to convert the existing configs into API resources. That is the purpose
  of this proposal.
 
 
  Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
  do not lag the release. We run CD and will be in the gate real soon
  now so that TripleO should be able to fully deploy Icehouse on Icehouse
  release day.

 yeah, I'm a big fan of TripleO and Tuskar.
 However, it may be difficult to keep TripleO/Tuskar up to date with the
 newest releases.

 Let's say Nova and Neutron add a new function in the third milestone
 (I3 for Icehouse); there is no way to support it in TripleO/Tuskar
 right away. This is natural, because TripleO/Tuskar is a third-party
 tool from the perspective of Nova or Neutron (same as Chef/Puppet).
 IMO, the Tuskar API and the existing projects (Nova, Neutron) should be
 integrated at the design level.


 This is false. TripleO is the official OpenStack deployment program. It
 is not a 3rd party tool. Of course sometimes TripleO may lag the same
 way Heat may lag other integrated release components. But that is one
 reason we have release meetings, blueprints, and summits, so that projects
 like Heat and TripleO can be aware of what is landing in i3 and at least
 attempt to have some support in place ASAP.

It is a beautiful thing that we have TripleO as an official project.
However, that is a solution by management; we should also have an
architecture that supports this kind of resource management.

 Trying to make this happen inside the individual projects instead of
 in projects dedicated to working well in this space is a recipe for
 frustration, and I don't believe it would lead to any less lag.

I believe the existing situation is that resource management happens
inside the individual projects instead of in projects dedicated to
working well in this space.

Neutron and Nova have DB tables related to 'host' (scheduler, agents).
TripleO/Tuskar has a resource management table (I don't know the
details of the project, so please correct me if I'm wrong here).
Nova-scheduler monitors compute nodes, and Neutron also monitors
agents. It looks like TripleO/Tuskar is planning to monitor nodes too.

 People would just land features with "FIXME: support config api".

My original proposal won't change config api, so this won't happen.



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-11 Thread Clint Byrum
Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
 Hi Flavio, Clint

 I agree with you guys. Sorry, maybe I wasn't clear. My opinion is that
 we should remove all configuration from the nodes, and that all
 configuration should be done via API from a central resource manager
 (nova-api or neutron-server, etc.).

 This is how new hosts are added in CloudStack, vCenter, and OpenStack:

 CloudStack: go to the web UI, add host/ID/password.
 http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

 vCenter: go to the vSphere client, add host/ID/password.
 https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

 OpenStack:
 - Manual
    - set up the MySQL connection config, RabbitMQ/Qpid connection
      config, Keystone config, Neutron config, ...
      http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

 We have some deployment systems, including Chef/Puppet, Packstack, and
 TripleO:
 - Chef/Puppet
    - set up a Chef node
    - add the node / apply a role
 - Packstack
    - generate an answer file
      https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
    - packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
 - TripleO
    - undercloud: nova baremetal node add
    - overcloud: modify the Heat template

 For residents of this mailing list, Chef/Puppet or a third-party tool
 is easy to use. However, I believe they look like magical tools to many
 operators. Furthermore, these deployment systems tend to take time to
 support the newest release, so for most users an OpenStack release does
 not mean something they can use right away.

 IMO, the current way of managing configuration is the cause of this
 issue. If we manage everything via API, we can manage the cluster from
 Horizon. Then the user can just go to Horizon and add a host.

 It may take time to migrate configs to APIs, so one easy first step is
 to convert the existing configs into API resources. That is the purpose
 of this proposal.
 

Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
do not lag the release. We run CD and will be in the gate real soon
now so that TripleO should be able to fully deploy Icehouse on Icehouse
release day.



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Flavio Percoco

On 09/01/14 13:28 -0500, Jay Pipes wrote:

On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:

On 08/01/14 17:13 -0800, Nachi Ueno wrote:
Hi folks

OpenStack processes tend to have many config options, and many hosts.
It is a pain to manage these tons of config options, and centralizing
this management helps operation.

We can use Chef- or Puppet-like tools; however, sometimes each process
depends on another process's configuration. For example, Nova depends
on Neutron's configuration, etc.

My idea is to have a config server in oslo.config, and let cfg.CONF get
its config from the server. This approach has several benefits:

- We can get centralized management without modifying each project
  (Nova, Neutron, etc.)
- We can provide a Horizon UI for configuration

This is the bp for this proposal:
https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

I'd very much appreciate any comments on this.

I've thought about this as well. I like the overall idea of having a
config server. However, I don't like the idea of having it within
oslo.config. I'd prefer oslo.config to remain a library.

Also, I think it would be more complex than just having a server that
provides the configs. It'll need authentication like all other
services in OpenStack, and perhaps even support for encryption.

I like the idea of a config registry but, as mentioned above, IMHO it
should live under its own project.


Hi Nati and Flavio!

So, I'm -1 on this idea, just because I think it belongs in the realm of
configuration management tooling (Chef/Puppet/Salt/Ansible/etc.). Those
tools are built to manage multiple configuration files and changes in
them. Adding a config server would dramatically change the way that
configuration management tools interface with OpenStack services.
Instead of managing the config file templates as all of the tools
currently do, the tools would essentially need to forego the
tried-and-true INI files and instead write a bunch of code in order to
deal with REST API set/get operations for changing configuration data.

In summary, while I agree that OpenStack services have an absolute TON
of configurability -- for good and bad -- there are ways to improve the
usability of configuration without changing the paradigm that most
configuration management tools expect. One such example is having
include.d/ support -- similar to the existing oslo.config module's
support for a --config-dir, but more flexible and more like what other
open source programs (like Apache) have done for years.
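The include.d/ merge semantics Jay describes can be illustrated with the
stdlib configparser. This is only a sketch of the idea, not oslo.config
itself; the file layout, names, and "later files win" order are
assumptions.

```python
# Sketch of include.d-style config merging: load a base INI file, then
# overlay *.conf snippets from a directory in sorted order, so that a
# later snippet overrides earlier values (mirroring --config-dir's idea).
import configparser
import glob
import os


def load_with_include_dir(main_file, include_dir):
    """Return a ConfigParser with main_file overlaid by include_dir/*.conf."""
    parser = configparser.ConfigParser()
    parser.read(main_file)  # base configuration
    for snippet in sorted(glob.glob(os.path.join(include_dir, "*.conf"))):
        parser.read(snippet)  # each read() merges; later files win
    return parser
```

An operator could then drop a `01-debug.conf` snippet into the directory
to flip one option without templating the whole main file.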


FWIW, this is the exact reason why I didn't propose the idea. Although
I like the idea, I'm not fully convinced.

I don't want to reinvent existing configuration management tools, nor
tie OpenStack services to this server. In my head I thought about it
as an optional thing that could help deployments that are not already
using other tools, but let's be realistic: who isn't using configuration
tools nowadays? It'd be very painful to manage the whole thing without
these tools.

Anyway, all this to say: I agree with you, and I think
implementing this service would be more complex than just serving
configurations. :)

Cheers,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2014-01-09 12:21:05 -0700:
 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks

  Thank you for your input.

  The key difference from an external configuration system (Chef,
  Puppet, etc.) is integration with OpenStack services. There are
  cases where a process needs to know a config value on another host.
  If we had a centralized config storage API, we could solve this issue.

  One example of such a case is the Neutron + Nova VIF parameter
  configuration regarding security groups. The workflow is something
  like this:

  Nova asks Neutron for VIF configuration information. The Neutron
  server asks for the configuration of the Neutron L2 agent on the
  same host as nova-compute.
 
 
 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.
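Doug's rule of thumb -- Nova consumes the answer through the API, and
never reads Neutron's option directly -- might be sketched like this.
The class and field names below are illustrative stand-ins, not the real
Nova/Neutron code.

```python
# Sketch: the config option stays private to Neutron; Nova only ever
# sees the derived answer ("how do I plug this VIF?") via an API call.

class NeutronService:
    """Stands in for neutron-server plus its private configuration."""

    def __init__(self, l2_agent_driver):
        # Implementation detail of Neutron; Nova must never read this.
        self._l2_agent_driver = l2_agent_driver

    def get_vif_binding(self, port_id):
        # The API answers Nova's actual question, not "what is your
        # config value?". The mapping here is a made-up example.
        vif_type = "ovs" if self._l2_agent_driver == "openvswitch" else "bridge"
        return {"port_id": port_id, "binding:vif_type": vif_type}


class NovaCompute:
    def plug_vif(self, neutron, port_id):
        binding = neutron.get_vif_binding(port_id)
        return f"plugging {port_id} as {binding['binding:vif_type']}"
```

If Neutron later renames or restructures its option, Nova is unaffected,
because only the API response shape is shared.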
 

That is where I think my resistance to such a change starts. If Nova and
Neutron need to share a value, they should just do that via their APIs.
There is no need for a config server in the middle. If it is networking
related, it lives in Neutron's configs, and if it is compute related, in
Nova's configs.

Is there any example where values need to be in sync but are not
sharable via normal API chatter?

 Running a configuration service also introduces what could be a single
 point of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.
 

Configuration shouldn't ever have a rapid pattern of change, so even if
this service existed I'd suggest that it would be used just like current
config management solutions: scrape values out, write to config files.
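The scrape-and-write pattern Clint describes could look roughly like
this; the `fetched` dict stands in for whatever a hypothetical config
API would return.

```python
# Sketch: a deployment tool periodically pulls values from a (assumed)
# central config service and renders ordinary INI files, so services
# keep reading plain oslo.config-style files and the service being
# offline does not take the cloud down.
import configparser
import io


def render_ini(fetched):
    """Render {section: {option: value}} into INI text."""
    parser = configparser.ConfigParser()
    for section, options in fetched.items():
        parser[section] = {k: str(v) for k, v in options.items()}
    buf = io.StringIO()
    parser.write(buf)  # standard "option = value" INI output
    return buf.getvalue()
```

The rendered text would then be written to e.g. /etc/nova/nova.conf and
the service restarted or SIGHUPed, exactly as Chef/Puppet do today.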



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-10 Thread Nachi Ueno
Hi Flavio, Clint

I agree with you guys. Sorry, maybe I wasn't clear. My opinion is that
we should remove all configuration from the nodes, and that all
configuration should be done via API from a central resource manager
(nova-api or neutron-server, etc.).

This is how new hosts are added in CloudStack, vCenter, and OpenStack:

CloudStack: go to the web UI, add host/ID/password.
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

vCenter: go to the vSphere client, add host/ID/password.
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

OpenStack:
- Manual
   - set up the MySQL connection config, RabbitMQ/Qpid connection
     config, Keystone config, Neutron config, ...
     http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

We have some deployment systems, including Chef/Puppet, Packstack, and
TripleO:
- Chef/Puppet
   - set up a Chef node
   - add the node / apply a role
- Packstack
   - generate an answer file
     https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
   - packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
- TripleO
   - undercloud: nova baremetal node add
   - overcloud: modify the Heat template

For residents of this mailing list, Chef/Puppet or a third-party tool
is easy to use. However, I believe they look like magical tools to many
operators. Furthermore, these deployment systems tend to take time to
support the newest release, so for most users an OpenStack release does
not mean something they can use right away.

IMO, the current way of managing configuration is the cause of this
issue. If we manage everything via API, we can manage the cluster from
Horizon. Then the user can just go to Horizon and add a host.

It may take time to migrate configs to APIs, so one easy first step is
to convert the existing configs into API resources. That is the purpose
of this proposal.

Best
Nachi


2014/1/10 Clint Byrum cl...@fewbar.com:
 Excerpts from Doug Hellmann's message of 2014-01-09 12:21:05 -0700:
 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

  Hi folks

  Thank you for your input.

  The key difference from an external configuration system (Chef,
  Puppet, etc.) is integration with OpenStack services. There are
  cases where a process needs to know a config value on another host.
  If we had a centralized config storage API, we could solve this issue.

  One example of such a case is the Neutron + Nova VIF parameter
  configuration regarding security groups. The workflow is something
  like this:

  Nova asks Neutron for VIF configuration information. The Neutron
  server asks for the configuration of the Neutron L2 agent on the
  same host as nova-compute.
 

 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.


 That is where I think my resistance to such a change starts. If Nova and
 Neutron need to share a value, they should just do that via their APIs.
 There is no need for a config server in the middle. If it is networking
 related, it lives in Neutron's configs, and if it is compute related, in
 Nova's configs.

 Is there any example where values need to be in sync but are not
 sharable via normal API chatter?

 Running a configuration service also introduces what could be a single
 point of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.


 Configuration shouldn't ever have a rapid pattern of change, so even if
 this service existed I'd suggest that it would be used just like current
 config management solutions: scrape values out, write to config files.



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Flavio Percoco

On 08/01/14 17:13 -0800, Nachi Ueno wrote:

Hi folks

OpenStack processes tend to have many config options, and many hosts.
It is a pain to manage these tons of config options, and centralizing
this management helps operation.

We can use Chef- or Puppet-like tools; however, sometimes each process
depends on another process's configuration. For example, Nova depends
on Neutron's configuration, etc.

My idea is to have a config server in oslo.config, and let cfg.CONF get
its config from the server. This approach has several benefits:

- We can get centralized management without modifying each project
  (Nova, Neutron, etc.)
- We can provide a Horizon UI for configuration

This is the bp for this proposal:
https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

I'd very much appreciate any comments on this.
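A rough sketch of how such a lookup might behave -- central value first,
local file value as a fallback -- with in-memory dicts standing in for
both the server and the parsed files. ConfigClient and its stores are
hypothetical names, not real oslo.config API.

```python
# Sketch of the proposal: the config loader consults a central store
# first and falls back to locally parsed values, so a node can still
# boot with sane defaults when the server has no override (or is empty).

class ConfigClient:
    def __init__(self, remote_store, local_defaults):
        self._remote = remote_store   # stands in for the config server
        self._local = local_defaults  # values parsed from local files

    def get(self, group, name):
        # Central value wins; local file value is the fallback.
        try:
            return self._remote[(group, name)]
        except KeyError:
            return self._local[(group, name)]
```

A real implementation would also need caching and a policy for server
unavailability, which is where the single-point-of-failure concern
raised later in the thread comes in.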



I've thought about this as well. I like the overall idea of having a
config server. However, I don't like the idea of having it within
oslo.config. I'd prefer oslo.config to remain a library.

Also, I think it would be more complex than just having a server that
provides the configs. It'll need authentication like all other
services in OpenStack, and perhaps even support for encryption.

I like the idea of a config registry but, as mentioned above, IMHO it
should live under its own project.

That's all I've got for now,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Flavio

Thank you for your input.
I agree with you. oslo.config isn't the right place for server-side code.

How about oslo.configserver?
For authentication, we could reuse Keystone auth and oslo.rpc.

Best
Nachi


2014/1/9 Flavio Percoco fla...@redhat.com:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack processes tend to have many config options, and many hosts.
 It is a pain to manage these tons of config options, and centralizing
 this management helps operation.

 We can use Chef- or Puppet-like tools; however, sometimes each process
 depends on another process's configuration. For example, Nova depends
 on Neutron's configuration, etc.

 My idea is to have a config server in oslo.config, and let cfg.CONF get
 its config from the server. This approach has several benefits:

 - We can get centralized management without modifying each project
   (Nova, Neutron, etc.)
 - We can provide a Horizon UI for configuration

 This is the bp for this proposal:
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'd very much appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack, and perhaps even support for encryption.

 I like the idea of a config registry but, as mentioned above, IMHO it
 should live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
What capabilities would this new service give us that existing, proven,
configuration management tools like chef and puppet don't have?


On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Flavio

 Thank you for your input.
 I agree with you. oslo.config isn't the right place for server-side code.

 How about oslo.configserver?
 For authentication, we could reuse Keystone auth and oslo.rpc.

 Best
 Nachi


 2014/1/9 Flavio Percoco fla...@redhat.com:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 
  Hi folks

  OpenStack processes tend to have many config options, and many hosts.
  It is a pain to manage these tons of config options, and centralizing
  this management helps operation.

  We can use Chef- or Puppet-like tools; however, sometimes each process
  depends on another process's configuration. For example, Nova depends
  on Neutron's configuration, etc.

  My idea is to have a config server in oslo.config, and let cfg.CONF get
  its config from the server. This approach has several benefits:

  - We can get centralized management without modifying each project
    (Nova, Neutron, etc.)
  - We can provide a Horizon UI for configuration

  This is the bp for this proposal:
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

  I'd very much appreciate any comments on this.
 
 
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack, and perhaps even support for encryption.

  I like the idea of a config registry but, as mentioned above, IMHO it
  should live under its own project.
 
  That's all I've got for now,
  FF
 
  --
  @flaper87
  Flavio Percoco
 


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
Nachi,

Thanks for bringing this up. We've been thinking a lot about the
handling of configurations while working on Rubick.

In my understanding, oslo.config could provide an interface to different
back ends for storing configuration parameters. It could be a simple
centralized alternative to configuration files, like a k-v store or an
SQL database. It could also be something more complicated, like a
service of its own (Configuration-as-a-Service), with cross-service
validation capabilities, etc.

By the way, configuration as a service was mentioned in the Solum
session at the last summit, which implies that such a service could have
more than one application.

The first step toward this could be abstracting the back end in
oslo.config and implementing some simplistic driver, SQL or k-v storage.
This could help outline the requirements for a future configuration
service.
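A minimal sketch of such a back-end abstraction: a small driver
interface plus two simplistic drivers (in-memory mapping and INI file).
Backend, DictBackend, and IniFileBackend are hypothetical names, not
anything that exists in oslo.config.

```python
# Sketch of a pluggable value-store interface for a config library:
# the library resolves options through a driver, so files, a k-v store,
# or SQL could all sit behind the same get(group, option) call.
import abc
import configparser


class Backend(abc.ABC):
    @abc.abstractmethod
    def get(self, group, option):
        """Return the raw string value, or raise KeyError."""


class DictBackend(Backend):
    """Simplest driver: an in-memory mapping (stand-in for a k-v store)."""

    def __init__(self, data):
        self._data = data

    def get(self, group, option):
        return self._data[group][option]


class IniFileBackend(Backend):
    """Driver over a plain INI file, the current file-based behavior."""

    def __init__(self, path):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)

    def get(self, group, option):
        try:
            return self._parser.get(group, option)
        except (configparser.NoSectionError, configparser.NoOptionError):
            raise KeyError((group, option))
```

Writing two trivial drivers like this is exactly the kind of exercise
that would surface the requirements Oleg mentions.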

--
Best regards,
Oleg Gelbukh


On Thu, Jan 9, 2014 at 1:23 PM, Flavio Percoco fla...@redhat.com wrote:

 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack processes tend to have many config options, and deployments have many hosts.
 It is a pain to manage these tons of config options.
 Centralizing this management helps operations.

 We can use Chef- or Puppet-like tools; however,
 sometimes each process depends on another process's configuration.
 For example, nova depends on neutron configuration, etc.

 My idea is to have a config server in oslo.config, and let cfg.CONF get
 config from the server.
 This approach has several benefits.

 - We can get centralized management without modifications to each
 project (nova, neutron, etc.)
 - We can provide a Horizon panel for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I would very much appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it
 ought to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.
 
 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.
 
 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.
 
 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

Hi Nati and Flavio!

So, I'm -1 on this idea, just because I think it belongs in the realm of
configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
tools are built to manage multiple configuration files and changes in
them. Adding a config server would dramatically change the way that
configuration management tools would interface with OpenStack services.
Instead of managing the config file templates as all of the tools
currently do, the tools would essentially need to forego the
tried-and-true INI files and instead write a bunch of code in order to
deal with REST API set/get operations for changing configuration data.

In summary, while I agree that OpenStack services have an absolute TON
of configurability -- for good and bad -- there are ways to improve the
usability of configuration without changing the paradigm that most
configuration management tools expect. One such example is having
include.d/ support -- similar to the existing oslo.cfg module's support
for a --config-dir, but more flexible and more like what other open
source programs (like Apache) have done for years.
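To make Jay's include.d/ suggestion concrete, here is a minimal sketch of that
merging behaviour using only the Python standard library's configparser (this
is illustrative, not oslo.config code; file names and option names are made up):

```python
import configparser
import glob
import os
import tempfile

def load_config_dir(confdir):
    """Merge every *.conf file in confdir in lexical order; settings in
    later files override earlier ones (include.d/-style semantics)."""
    parser = configparser.ConfigParser()
    for path in sorted(glob.glob(os.path.join(confdir, "*.conf"))):
        parser.read(path)
    return parser

# Demonstration with two drop-in files in a scratch directory.
confdir = tempfile.mkdtemp()
with open(os.path.join(confdir, "00-base.conf"), "w") as f:
    f.write("[DEFAULT]\nverbose = false\nworkers = 4\n")
with open(os.path.join(confdir, "50-override.conf"), "w") as f:
    f.write("[DEFAULT]\nverbose = true\n")

conf = load_config_dir(confdir)
print(conf.get("DEFAULT", "verbose"))  # -> true (later file wins)
print(conf.get("DEFAULT", "workers"))  # -> 4    (untouched by override)
```

Packages and deployment tools can then drop fragments into the directory
without rewriting a monolithic config file, which is the usability win Jay is
describing.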

All the best,
-jay




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Morgan Fainberg
I agree with Doug’s question, but also would extend the train of thought to ask 
why not help to make Chef or Puppet better and cover the more OpenStack 
use-cases rather than add yet another competing system?

Cheers,
Morgan
On January 9, 2014 at 10:24:06, Doug Hellmann (doug.hellm...@dreamhost.com) 
wrote:

What capabilities would this new service give us that existing, proven, 
configuration management tools like chef and puppet don't have?


On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
Hi Flavio

Thank you for your input.
I agree with you. oslo.config isn't the right place to have server-side code.

How about oslo.configserver?
For authentication, we can reuse keystone auth and oslo.rpc.

Best
Nachi


2014/1/9 Flavio Percoco fla...@redhat.com:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:

 Hi folks

 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.

 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc

 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.

 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration

 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized

 I'm very appreciate any comments on this.



 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 That's all I've got for now,
 FF

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jeremy Hanmer
+1 to Jay.  Existing tools are both better suited to the job and work
quite well in their current state.  To address Nachi's first example,
there's nothing preventing a Nova node in Chef from reading Neutron's
configuration (either by using a (partial) search or storing the
necessary information in the environment rather than in roles).  I
assume Puppet offers the same.  Please don't re-invent this hugely
complicated wheel.

On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.

 All the best,
 -jay




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi folks

Thank you for your input.

The key difference from external configuration systems (Chef, Puppet,
etc.) is integration with OpenStack services.
There are cases where a process needs to know a config value on another host.
If we had a centralized config storage API, we could solve this issue.

One example of such a case is neutron + nova VIF parameter configuration
for security groups.
The workflow is something like this.

nova asks the neutron server for VIF configuration information.
The neutron server asks for the configuration of the neutron l2-agent on
the same host as nova-compute.

host1
  neutron server
  nova-api

host2
  neutron l2-agent
  nova-compute

In this case, a process needs to know config values on other hosts.

Replying some questions

 Adding a config server would dramatically change the way that
configuration management tools would interface with OpenStack services. [Jay]

Since this bp just adds a new mode, we can still use existing config files.

 why not help to make Chef or Puppet better and cover the more OpenStack 
 use-cases rather than add yet another competing system [Doug, Morgan]

I believe this is not a competing system.
The key point is that we should have a standard API for accessing such services.
As Oleg suggested, we can use an SQL server, a kv-store, or Chef or Puppet
as the backend system.

Best
Nachi


2014/1/9 Morgan Fainberg m...@metacloud.com:
 I agree with Doug’s question, but also would extend the train of thought to
 ask why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system?

 Cheers,
 Morgan

 On January 9, 2014 at 10:24:06, Doug Hellmann (doug.hellm...@dreamhost.com)
 wrote:

 What capabilities would this new service give us that existing, proven,
 configuration management tools like chef and puppet don't have?


 On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Flavio

 Thank you for your input.
 I agree with you. oslo.config isn't right place to have server side code.

 How about oslo.configserver ?
 For authentication, we can reuse keystone auth and oslo.rpc.

 Best
 Nachi


 2014/1/9 Flavio Percoco fla...@redhat.com:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 
  Hi folks
 
  OpenStack process tend to have many config options, and many hosts.
  It is a pain to manage this tons of config options.
  To centralize this management helps operation.
 
  We can use chef or puppet kind of tools, however
  sometimes each process depends on the other processes configuration.
  For example, nova depends on neutron configuration etc
 
  My idea is to have config server in oslo.config, and let cfg.CONF get
  config from the server.
  This way has several benefits.
 
  - We can get centralized management without modification on each
  projects ( nova, neutron, etc)
  - We can provide horizon for configuration
 
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
  I'm very appreciate any comments on this.
 
 
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack and perhaps even support of encryption.
 
  I like the idea of a config registry but as mentioned above, IMHO it's
  to live under its own project.
 
  That's all I've got for now,
  FF
 
  --
  @flaper87
  Flavio Percoco
 


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Jeremy

Don't you think it is a burden for operators to choose the correct
combination of configs for multiple nodes, even with Chef and
Puppet?

If there are constraints or dependencies among configurations, such logic
should live in the OpenStack source code.
We can solve this issue if we have a standard way to learn the config
value of another process on another host.

Something like this.
self.conf.host('host1').firewall_driver

Then we can have, for example, a Chef- or file-based config backend for this.
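A rough sketch of what the proposed `self.conf.host('host1').firewall_driver`
API could look like with a file-based backend (all class and path names here
are hypothetical; this is not oslo.config code):

```python
import configparser
import os
import tempfile

class _HostView:
    """Attribute-style access to one host's parsed agent.conf."""
    def __init__(self, parser):
        self._parser = parser

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. config keys.
        return self._parser.get("DEFAULT", name)

class HostConf:
    """File-based backend for the proposed API: per-host config lives
    under {root}/{hostname}/agent.conf.  Other backends (Chef, a KV
    store, ...) would implement host() against their own storage."""
    def __init__(self, root):
        self._root = root

    def host(self, hostname):
        parser = configparser.ConfigParser()
        parser.read(os.path.join(self._root, hostname, "agent.conf"))
        return _HostView(parser)

# Demonstration: look up host1's firewall_driver.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "host1"))
with open(os.path.join(root, "host1", "agent.conf"), "w") as f:
    f.write("[DEFAULT]\nfirewall_driver = iptables_hybrid\n")

conf = HostConf(root)
print(conf.host("host1").firewall_driver)  # -> iptables_hybrid
```

The caller never cares whether the answer came from a local file tree, Chef,
or a central store; only the backend changes.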

Best
Nachi


2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack process tend to have many config options, and many hosts.
 It is a pain to manage this tons of config options.
 To centralize this management helps operation.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I'm very appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support of encryption.

 I like the idea of a config registry but as mentioned above, IMHO it's
 to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would need to essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.

 All the best,
 -jay




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


That extra round trip does sound like a potential performance bottleneck,
but sharing the configuration data directly is not the right solution. If
the configuration setting names are shared, they become part of the
integration API between the two services. Nova should ask neutron how to
connect the VIF, and it shouldn't care how neutron decides to answer that
question. The configuration setting is an implementation detail of neutron
that shouldn't be exposed directly to nova.

Running a configuration service also introduces what could be a single
point of failure for all of the other distributed services in OpenStack. An
out-of-band tool like chef or puppet doesn't result in the same sort of
situation, because the tool does not have to be online in order for the
cloud to be online.

Doug




 host1
   neutron server
   nova-api

 host2
   neturon l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

  Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding new mode, we can still use existing config
 files.

  why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not competing system.
 The key point is we should have some standard api to access such services.
 As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
  I agree with Doug’s question, but also would extend the train of thought
 to
  ask why not help to make Chef or Puppet better and cover the more
 OpenStack
  use-cases rather than add yet another competing system?
 
  Cheers,
  Morgan
 
  On January 9, 2014 at 10:24:06, Doug Hellmann (
 doug.hellm...@dreamhost.com)
  wrote:
 
  What capabilities would this new service give us that existing, proven,
  configuration management tools like chef and puppet don't have?
 
 
  On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi Flavio
 
  Thank you for your input.
  I agree with you. oslo.config isn't right place to have server side
 code.
 
  How about oslo.configserver ?
  For authentication, we can reuse keystone auth and oslo.rpc.
 
  Best
  Nachi
 
 
  2014/1/9 Flavio Percoco fla...@redhat.com:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  
   Hi folks
  
   OpenStack process tend to have many config options, and many hosts.
   It is a pain to manage this tons of config options.
   To centralize this management helps operation.
  
   We can use chef or puppet kind of tools, however
   sometimes each process depends on the other processes configuration.
   For example, nova depends on neutron configuration etc
  
   My idea is to have config server in oslo.config, and let cfg.CONF get
   config from the server.
   This way has several benefits.
  
   - We can get centralized management without modification on each
   projects ( nova, neutron, etc)
   - We can provide horizon for configuration
  
   This is bp for this proposal.
   https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I'm very appreciate any comments on this.
  
  
  
   I've thought about this as well. I like the overall idea of having a
   config server. However, I don't like the idea of having it within
   oslo.config. I'd prefer oslo.config to remain a library.
  
   Also, I think it would be more complex than just having a server that
   provides the configs. It'll need authentication like all other
   services in OpenStack and perhaps even support of encryption.
  
   I like the idea of a config registry but as mentioned above, IMHO it's
   to live under its own project.
  
   That's all I've got for now,
   FF
  
   --
   @flaper87
   Flavio Percoco
  
 
 
  

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Doug

2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration system (Chef, puppet
 etc) is integration with
 openstack services.
 There are cases a process should know the config value in the other hosts.
 If we could have centralized config storage api, we can solve this issue.

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.

I agree for the nova-neutron interface.
However, the neutron server and neutron l2-agent configurations depend on
each other.

 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.

We can choose the same implementation (copy information into a local cache, etc.).

Thank you for your input; it helped me organize my thoughts.
My proposal can be split into two bps.

[BP1] conf API for other processes
Provide a standard way to learn the config value of another process on
the same host or on another host.

- API Example:
conf.host('host1').firewall_driver

- Conf file based implementation:
config for each host will be placed here:
 /etc/project/conf.d/{hostname}/agent.conf

[BP2] Multiple backends for storing config files

Currently, we have only file-based configuration.
This bp extends support to other config storage backends:
- KVS
- SQL
- Chef - Ohai
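BP2's pluggable storage could be a small driver interface, sketched below with
an in-memory stand-in for a KV store (interface and class names are
hypothetical, just to make the idea concrete):

```python
import abc

class ConfigBackend(abc.ABC):
    """Hypothetical BP2 driver interface: every storage backend (file
    tree, KV store, SQL, Chef/Ohai) answers the same lookup."""

    @abc.abstractmethod
    def get(self, hostname, key):
        """Return the value of `key` for `hostname`."""

class DictBackend(ConfigBackend):
    """Stand-in for a KV store, shaped as {hostname: {key: value}}."""

    def __init__(self, data):
        self._data = data

    def get(self, hostname, key):
        return self._data[hostname][key]

backend = DictBackend({"host1": {"firewall_driver": "iptables_hybrid"}})
print(backend.get("host1", "firewall_driver"))  # -> iptables_hybrid
```

Swapping backends then means swapping one driver class, while every consumer
keeps calling the same `get()`.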

Best
Nachi

 Doug




 host1
   neutron server
   nova-api

 host2
   neturon l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

  Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding new mode, we can still use existing config
 files.

  why not help to make Chef or Puppet better and cover the more OpenStack
  use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not competing system.
 The key point is we should have some standard api to access such services.
 As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
  I agree with Doug’s question, but also would extend the train of thought
  to
  ask why not help to make Chef or Puppet better and cover the more
  OpenStack
  use-cases rather than add yet another competing system?
 
  Cheers,
  Morgan
 
  On January 9, 2014 at 10:24:06, Doug Hellmann
  (doug.hellm...@dreamhost.com)
  wrote:
 
  What capabilities would this new service give us that existing, proven,
  configuration management tools like chef and puppet don't have?
 
 
  On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi Flavio
 
  Thank you for your input.
  I agree with you. oslo.config isn't right place to have server side
  code.
 
  How about oslo.configserver ?
  For authentication, we can reuse keystone auth and oslo.rpc.
 
  Best
  Nachi
 
 
  2014/1/9 Flavio Percoco fla...@redhat.com:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  
   Hi folks
  
   OpenStack process tend to have many config options, and many hosts.
   It is a pain to manage this tons of config options.
   To centralize this management helps operation.
  
   We can use chef or puppet kind of tools, however
   sometimes each process depends on the other processes configuration.
   For example, nova depends on neutron configuration etc
  
   My idea is to have config server in oslo.config, and let cfg.CONF
   get
   config from the server.
   This way has several benefits.
  
   - We can get centralized management without modification on each
   projects ( nova, neutron, etc)
   - We can provide horizon for configuration
  
   This is bp for this proposal.
   https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I'm very appreciate any comments on this.
  
  
  
   I've thought about this as well. I like the overall idea of having a
   config server. However, I don't like 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Chmouel Boudjnah
On Thu, Jan 9, 2014 at 7:53 PM, Nachi Ueno na...@ntti3.com wrote:

 One example of such case is neuron + nova vif parameter configuration
 regarding to security group.
 The workflow is something like this.

 nova asks vif configuration information for neutron server.
 Neutron server ask configuration in neutron l2-agent on the same host
 of nova-compute.


What about using something like the discoverability middleware that was
added in swift[1], and extending it so it can be integrated into oslo?

Chmouel.

[1] http://techs.enovance.com/6509/swift-discoverable-capabilities
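For readers unfamiliar with swift's approach: the idea is that a service
publishes a capability document (swift serves it at GET /info) instead of
exposing raw config options to its peers. A minimal WSGI sketch, with an
entirely made-up capability payload, exercised in-process via wsgiref helpers:

```python
import json
from wsgiref.util import setup_testing_defaults

# Hypothetical capability document in the spirit of swift's GET /info.
CAPABILITIES = {
    "neutron": {
        "l2_agent": {"firewall": "iptables_hybrid", "tunnel_types": ["vxlan"]},
    }
}

def info_app(environ, start_response):
    """Minimal WSGI app answering GET /info with the capability JSON."""
    body = json.dumps(CAPABILITIES).encode()
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app in-process, the way a client such as nova would
# consume it over HTTP.
environ = {}
setup_testing_defaults(environ)
result = {}
def start_response(status, headers):
    result["status"] = status

caps = json.loads(b"".join(info_app(environ, start_response)))
print(result["status"], caps["neutron"]["l2_agent"]["firewall"])
```

This keeps the consumer asking "what can you do?" rather than "what is your
config value?", which lines up with Doug's objection earlier in the thread.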


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 2:34 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:
 
 
 
  On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration system (Chef, puppet
  etc) is integration with
  openstack services.
  There are cases a process should know the config value in the other
 hosts.
  If we could have centralized config storage api, we can solve this
 issue.
 
  One example of such case is neuron + nova vif parameter configuration
  regarding to security group.
  The workflow is something like this.
 
  nova asks vif configuration information for neutron server.
  Neutron server ask configuration in neutron l2-agent on the same host
  of nova-compute.
 
 
  That extra round trip does sound like a potential performance bottleneck,
  but sharing the configuration data directly is not the right solution. If
  the configuration setting names are shared, they become part of the
  integration API between the two services. Nova should ask neutron how to
  connect the VIF, and it shouldn't care how neutron decides to answer that
  question. The configuration setting is an implementation detail of
 neutron
  that shouldn't be exposed directly to nova.

 I agree for nova - neutron if.
 However, neutron server and neutron l2 agent configuration depends on
 each other.


  Running a configuration service also introduces what could be a single
 point
  of failure for all of the other distributed services in OpenStack. An
  out-of-band tool like chef or puppet doesn't result in the same sort of
  situation, because the tool does not have to be online in order for the
  cloud to be online.

 We can choose same implementation. ( Copy information in local cache etc)

 Thank you for your input; it helped me organize my thoughts.
 My proposal can be split into two bps.

 [BP1] Conf API for other processes
 Provide a standard way to know the config value of another process on
 the same host or another host.


Please don't do this. It's just a bad idea to expose the configuration
settings between apps this way, because it couples the applications tightly
at a low level, instead of letting the applications have APIs for sharing
logical information at a high level. It's the difference between asking
what is the value of a specific configuration setting on a particular
hypervisor and asking how do I connect a VIF for this instance. The
latter lets you provide different answers based on context. The former
doesn't.

Doug




 - API Example:
 conf.host('host1').firewall_driver

 - Conf file based implementation:
 config for each host will be placed here:
  /etc/project/conf.d/{hostname}/agent.conf
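The proposed lookup could be sketched roughly as below. This is a hypothetical stand-in, not real oslo.config code: the class names, the [DEFAULT] section, and the iptables_hybrid value are all illustrative.

```python
# Hypothetical sketch of the BP1 accessor -- none of these names exist in
# oslo.config; it only illustrates reading another host's config from the
# proposed /etc/<project>/conf.d/<hostname>/agent.conf layout.
import configparser
import os
import tempfile


class HostConf:
    """Read-only view of one host's agent.conf."""

    def __init__(self, path):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)

    def __getattr__(self, name):
        # Expose [DEFAULT] options as attributes, e.g. firewall_driver.
        return self._parser.get("DEFAULT", name)


class Conf:
    """Supports the conf.host('host1').firewall_driver style of lookup."""

    def __init__(self, conf_dir):
        self._conf_dir = conf_dir

    def host(self, hostname):
        return HostConf(os.path.join(self._conf_dir, hostname, "agent.conf"))


# Demo with a throwaway conf.d tree (the option value is made up).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "host1"))
with open(os.path.join(root, "host1", "agent.conf"), "w") as f:
    f.write("[DEFAULT]\nfirewall_driver = iptables_hybrid\n")

conf = Conf(root)
print(conf.host("host1").firewall_driver)  # -> iptables_hybrid
```
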

 [BP2] Multiple backends for storing config files

 Currently, we have only file-based configuration.
 In this bp, we are extending support for config storage:
 - KVS
 - SQL
 - Chef - Ohai
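The BP2 idea could be sketched as a small driver interface behind the config loader; the interface and driver names here are invented for illustration and are not an actual oslo.config design.

```python
# Hypothetical sketch of BP2: the config loader delegates fetching raw
# config text to a pluggable storage driver. Names are illustrative only.
import abc
import configparser


class ConfigBackend(abc.ABC):
    @abc.abstractmethod
    def fetch(self, hostname):
        """Return raw INI text holding one host's configuration."""


class FileBackend(ConfigBackend):
    """Stand-in for the existing conf.d/ file layout."""

    def __init__(self, files):
        self._files = files  # {hostname: ini text}

    def fetch(self, hostname):
        return self._files[hostname]


class KVSBackend(ConfigBackend):
    """Stand-in for a key-value store (the KVS bullet above)."""

    def __init__(self, store):
        self._store = store  # would be a KVS client in practice

    def fetch(self, hostname):
        return self._store["config/%s" % hostname]


def load(backend, hostname):
    """Parse via whichever backend was configured; callers never know which."""
    parser = configparser.ConfigParser()
    parser.read_string(backend.fetch(hostname))
    return parser


kvs = KVSBackend({"config/host1": "[DEFAULT]\nfirewall_driver = noop\n"})
print(load(kvs, "host1").get("DEFAULT", "firewall_driver"))  # -> noop
```
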


 Best
 Nachi

  Doug
 
 
 
 
  host1
neutron server
nova-api
 
  host2
neutron l2-agent
nova-compute
 
  In this case, a process should know the config value in the other hosts.
 
  Replying some questions
 
   Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack services.
  [Jay]
 
  Since this bp just adds a new mode, we can still use existing
 config
  files.
 
   why not help to make Chef or Puppet better and cover the more
 OpenStack
   use-cases rather than add yet another competing system [Doug, Morgan]
 
  I believe this system is not a competing system.
  The key point is that we should have a standard API to access such
 services.
  As Oleg suggested, we can use an SQL server, a KV store, or Chef or
  Puppet as a backend system.
 
  Best
  Nachi
 
 
  2014/1/9 Morgan Fainberg m...@metacloud.com:
   I agree with Doug’s question, but also would extend the train of
 thought
   to
   ask why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system?
  
   Cheers,
   Morgan
  
   On January 9, 2014 at 10:24:06, Doug Hellmann
   (doug.hellm...@dreamhost.com)
   wrote:
  
   What capabilities would this new service give us that existing,
 proven,
   configuration management tools like chef and puppet don't have?
  
  
   On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
  
   Hi Flavio
  
   Thank you for your input.
   I agree with you. oslo.config isn't the right place to have server-side
   code.
  
   How about oslo.configserver ?
   For authentication, we can reuse keystone auth and oslo.rpc.
  
   Best
   Nachi
  
  
   2014/1/9 Flavio Percoco fla...@redhat.com:
On 08/01/14 17:13 -0800, Nachi Ueno wrote:
   
Hi folks
   
 OpenStack processes tend to have many config options, and many
 hosts.
 It is a pain to manage these tons of config options.
 Centralizing this management helps operations.
   
We can 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jeremy Hanmer
Having run openstack clusters for ~2 years, I can't say that I've ever
desired such functionality.

How do you see these interactions defined?  For instance, if I deploy
a custom driver for Neutron, does that mean I also have to patch
everything that will be talking to it (Nova, for instance) so they can
agree on compatibility?

Also, I know that I run what is probably a more complicated cluster
than most production clusters, but I can't think of very many
configuration options that are globally in sync across the cluster.
Hypervisors, network drivers, mysql servers, API endpoints...they all
might vary between hosts/racks/etc.

On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Jeremy

 Don't you think it is a burden for operators to have to choose the correct
 combination of configs for multiple nodes, even if we have chef and
 puppet?

 If we have some constraint or dependency in configurations, such logic
 should be in the openstack source code.
 We can solve this issue if we have a standard way to know the config
 value of another process on another host.

 Something like this.
 self.conf.host('host1').firewall_driver

 Then we can have a Chef- or file-based config backend for this, for example.

 Best
 Nachi


 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack processes tend to have many config options, and many hosts.
 It is a pain to manage these tons of config options.
 Centralizing this management helps operations.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I would very much appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support for encryption.

 I like the idea of a config registry, but as mentioned above, IMHO it
 ought to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing oslo.cfg module's support
 for a --config-dir, but more flexible and more like what other open
 source programs (like Apache) have done for years.
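The include.d/ idea Jay describes boils down to "read every file in a directory in a deterministic order, with later files overriding earlier ones." A rough stdlib emulation of those semantics is below; oslo.config's existing --config-dir behaves along similar lines, but this sketch is not oslo.config itself and the file names are made up.

```python
# Rough stdlib emulation of include.d / --config-dir semantics: read every
# *.conf in lexical order, with later files overriding earlier ones.
import configparser
import glob
import os
import tempfile


def read_config_dir(path):
    parser = configparser.ConfigParser()
    # ConfigParser.read() applies the files in the order given, so the
    # last file to set an option wins.
    parser.read(sorted(glob.glob(os.path.join(path, "*.conf"))))
    return parser


# Demo: a base file plus a site-local override.
d = tempfile.mkdtemp()
with open(os.path.join(d, "10-base.conf"), "w") as f:
    f.write("[DEFAULT]\ndebug = false\nverbose = false\n")
with open(os.path.join(d, "20-site.conf"), "w") as f:
    f.write("[DEFAULT]\ndebug = true\n")

conf = read_config_dir(d)
print(conf.get("DEFAULT", "debug"))    # -> true  (20-site.conf wins)
print(conf.get("DEFAULT", "verbose"))  # -> false (only set in 10-base.conf)
```

This keeps the tried-and-true INI files as the interface, so configuration management tools can keep dropping templated files into the directory.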

 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Doug

Thank you for your input.

2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 2:34 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:
 
 
 
  On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:
 
  Hi folks
 
  Thank you for your input.
 
  The key difference from external configuration systems (Chef, Puppet,
  etc.) is integration with OpenStack services.
  There are cases where a process should know the config value on other
  hosts.
  If we had a centralized config storage API, we could solve this
  issue.
 
  One example of such a case is the neutron + nova VIF parameter
  configuration regarding security groups.
  The workflow is something like this:
 
  nova asks the neutron server for VIF configuration information.
  The neutron server asks for configuration from the neutron l2-agent
  on the same host as nova-compute.
 
 
  That extra round trip does sound like a potential performance
  bottleneck,
  but sharing the configuration data directly is not the right solution.
  If
  the configuration setting names are shared, they become part of the
  integration API between the two services. Nova should ask neutron how to
  connect the VIF, and it shouldn't care how neutron decides to answer
  that
  question. The configuration setting is an implementation detail of
  neutron
  that shouldn't be exposed directly to nova.

 I agree for the nova - neutron interface.
 However, the neutron server and neutron l2-agent configurations depend on
 each other.


  Running a configuration service also introduces what could be a single
  point
  of failure for all of the other distributed services in OpenStack. An
  out-of-band tool like chef or puppet doesn't result in the same sort of
  situation, because the tool does not have to be online in order for the
  cloud to be online.

 We can choose the same implementation (copy information to a local cache, etc.).

 Thank you for your input; it helped me organize my thoughts.
 My proposal can be split into two bps.

 [BP1] Conf API for other processes
 Provide a standard way to know the config value of another process on
 the same host or another host.


 Please don't do this. It's just a bad idea to expose the configuration
 settings between apps this way, because it couples the applications tightly
 at a low level, instead of letting the applications have APIs for sharing
 logical information at a high level. It's the difference between asking
 what is the value of a specific configuration setting on a particular
 hypervisor and asking how do I connect a VIF for this instance. The
 latter lets you provide different answers based on context. The former
 doesn't.

Essentially, a configuration is an API.
I don't think every configuration option is a kind of low-level
configuration (timeouts, etc.).
Some configuration should tell how do I connect a VIF for this instance,
and we should select such high-level design configuration parameters.

 Doug




 - API Example:
 conf.host('host1').firewall_driver

 - Conf file based implementation:
 config for each host will be placed here:
  /etc/project/conf.d/{hostname}/agent.conf

 [BP2] Multiple backends for storing config files

 Currently, we have only file-based configuration.
 In this bp, we are extending support for config storage:
 - KVS
 - SQL
 - Chef - Ohai


 Best
 Nachi

  Doug
 
 
 
 
  host1
neutron server
nova-api
 
  host2
neutron l2-agent
nova-compute
 
  In this case, a process should know the config value in the other
  hosts.
 
  Replying some questions
 
   Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack services.
  [Jay]
 
  Since this bp just adds a new mode, we can still use existing
  config
  files.
 
   why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system [Doug, Morgan]
 
  I believe this system is not a competing system.
  The key point is that we should have a standard API to access such
  services.
  As Oleg suggested, we can use an SQL server, a KV store, or Chef or
  Puppet as a backend system.
 
  Best
  Nachi
 
 
  2014/1/9 Morgan Fainberg m...@metacloud.com:
   I agree with Doug’s question, but also would extend the train of
   thought
   to
   ask why not help to make Chef or Puppet better and cover the more
   OpenStack
   use-cases rather than add yet another competing system?
  
   Cheers,
   Morgan
  
   On January 9, 2014 at 10:24:06, Doug Hellmann
   (doug.hellm...@dreamhost.com)
   wrote:
  
   What capabilities would this new service give us that existing,
   proven,
   configuration management tools like chef and puppet don't have?
  
  
   On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno na...@ntti3.com wrote:
  
   Hi Flavio
  
   Thank you for your input.
   I agree with you. oslo.config isn't the right place to have server-side
   code.
  
   How about oslo.configserver ?
   For 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 Having run openstack clusters for ~2 years, I can't say that I've ever
 desired such functionality.

My proposal adds functionality; it does not remove any.
So if you are satisfied with file-based configuration via chef or puppet,
this change won't affect you.

 How do you see these interactions defined?  For instance, if I deploy
 a custom driver for Neutron, does that mean I also have to patch
 everything that will be talking to it (Nova, for instance) so they can
 agree on compatibility?

Nova / Neutron talk via the neutron API, so it should be OK because we
are taking care of
backward compatibility in the REST API.

The point in my example is neutron server + neutron l2-agent sync.

 Also, I know that I run what is probably a more complicated cluster
 than most production clusters, but I can't think of very many
 configuration options that are globally in sync across the cluster.
 Hypervisors, network drivers, mysql servers, API endpoints...they all
 might vary between hosts/racks/etc.

Supporting such heterogeneous environments is a purpose of this bp.
Configuration dependency is a pain point for me, and it gets worse
if the env is heterogeneous.

I also have some experience running openstack clusters, but it is still
a pain for me..

My experience is something like this:
# Wow, new release! ohh, this chef repo doesn't support it yet..
# hmm, I should modify the chef recipe.. hmm debug.. debug..


 On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
 Hi Jeremy

 Don't you think it is a burden for operators to have to choose the correct
 combination of configs for multiple nodes, even if we have chef and
 puppet?

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.
 We can solve this issue if we have a standard way to know the config
 value of other process in the other host.

 Something like this.
 self.conf.host('host1').firewall_driver

 Then we can have a Chef- or file-based config backend for this, for
 example.

 Best
 Nachi


 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 +1 to Jay.  Existing tools are both better suited to the job and work
 quite well in their current state.  To address Nachi's first example,
 there's nothing preventing a Nova node in Chef from reading Neutron's
 configuration (either by using a (partial) search or storing the
 necessary information in the environment rather than in roles).  I
 assume Puppet offers the same.  Please don't re-invent this hugely
 complicated wheel.

 On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
 On 08/01/14 17:13 -0800, Nachi Ueno wrote:
 Hi folks
 
 OpenStack processes tend to have many config options, and many hosts.
 It is a pain to manage these tons of config options.
 Centralizing this management helps operations.
 
 We can use chef or puppet kind of tools, however
 sometimes each process depends on the other processes configuration.
 For example, nova depends on neutron configuration etc
 
 My idea is to have config server in oslo.config, and let cfg.CONF get
 config from the server.
 This way has several benefits.
 
 - We can get centralized management without modification on each
 projects ( nova, neutron, etc)
 - We can provide horizon for configuration
 
 This is bp for this proposal.
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
 
 I would very much appreciate any comments on this.

 I've thought about this as well. I like the overall idea of having a
 config server. However, I don't like the idea of having it within
 oslo.config. I'd prefer oslo.config to remain a library.

 Also, I think it would be more complex than just having a server that
 provides the configs. It'll need authentication like all other
 services in OpenStack and perhaps even support for encryption.

 I like the idea of a config registry, but as mentioned above, IMHO it
 ought to live under its own project.

 Hi Nati and Flavio!

 So, I'm -1 on this idea, just because I think it belongs in the realm of
 configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
 tools are built to manage multiple configuration files and changes in
 them. Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 Instead of managing the config file templates as all of the tools
 currently do, the tools would essentially need to forego the
 tried-and-true INI files and instead write a bunch of code in order to
 deal with REST API set/get operations for changing configuration data.

 In summary, while I agree that OpenStack services have an absolute TON
 of configurability -- for good and bad -- there are ways to improve the
 usability of configuration without changing the paradigm that most
 configuration management tools expect. One such example is having
 include.d/ support -- similar to the existing 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Robert Kukura
On 01/09/2014 02:34 PM, Nachi Ueno wrote:
 Hi Doug
 
 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration systems (Chef, Puppet,
 etc.) is integration with
 OpenStack services.
 There are cases where a process should know the config value on the other hosts.
 If we had a centralized config storage API, we could solve this issue.

 One example of such a case is the neutron + nova VIF parameter
 configuration regarding security groups.
 The workflow is something like this:

 nova asks the neutron server for VIF configuration information.
 The neutron server asks for configuration from the neutron l2-agent on the same host
 as nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.
 
 I agree for the nova - neutron interface.
 However, the neutron server and neutron l2-agent configurations depend on
 each other.
 
 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.
 
 We can choose the same implementation (copy information to a local cache, etc.).
 
 Thank you for your input; it helped me organize my thoughts.
 My proposal can be split into two bps.
 
 [BP1] Conf API for other processes
 Provide a standard way to know the config value of another process on
 the same host or another host.
 
 - API Example:
 conf.host('host1').firewall_driver
 
 - Conf file based implementation:
 config for each host will be placed here:
  /etc/project/conf.d/{hostname}/agent.conf
 
 [BP2] Multiple backend for storing config files
 
 Currently, we have only file-based configuration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai

I'm not opposed to making oslo.config support pluggable back ends, but I
don't think BP2 could be depended upon to satisfy a requirement for a
global view of arbitrary config information, since this wouldn't be
available if a file-based backend were selected.

As far as the neutron server getting info it needs about running L2
agents, this is currently done via the agents_db RPC, where each agent
periodically sends certain info to the server and the server stores it
in the DB for subsequent use. The same mechanism is also used for L3 and
DHCP agents, and probably for *aaS agents. Some agent config information
is included, as well as some stats, etc.. This mechanism does the job,
but could be generalized and improved a bit. But I think this flow of
information is really for specialized purposes - only a small subset of
the config info is passed, and other info is passed that doesn't come
from config.

My only real concern with using this current mechanism is that some of
the information (stats and liveness) is very dynamic, while other
information (config) is relatively static. It's a bit wasteful to send
all of it every couple seconds, but at least liveness (heartbeat) info
does need to be sent frequently. BP1 sounds like it could address the
static part, but I'm still not sure config file info is the only
relatively static info that might need to be shared. I think neutron can
stick with its agents_db RPC, DB, and API extension for now, and improve
it as needed.
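The agents_db flow described above can be illustrated with a minimal stand-in: each agent periodically reports mostly-static config plus dynamic stats, and the server records the report keyed by (host, agent type). Real neutron does this over RPC into its database; this dict-based sketch only shows the shape of the data, with made-up field names and values.

```python
# Dict-based stand-in for neutron's agents_db state reports: config is
# relatively static, stats change per report, heartbeat tracks liveness.
import time


class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def report_state(self, host, agent_type, config, stats):
        self._agents[(host, agent_type)] = {
            "config": config,          # relatively static
            "stats": stats,            # changes on every report
            "heartbeat": time.time(),  # liveness
        }

    def get_agent(self, host, agent_type):
        return self._agents[(host, agent_type)]

    def is_alive(self, host, agent_type, timeout=75):
        rec = self._agents.get((host, agent_type))
        return rec is not None and time.time() - rec["heartbeat"] < timeout


registry = AgentRegistry()
registry.report_state("host2", "l2-agent",
                      config={"firewall_driver": "iptables_hybrid"},
                      stats={"devices": 12})
print(registry.get_agent("host2", "l2-agent")["config"]["firewall_driver"])
print(registry.is_alive("host2", "l2-agent"))  # -> True
```

Splitting the static config out of the per-heartbeat payload, as Bob suggests, would mean sending the "config" part only when it changes.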

-Bob

 
 Best
 Nachi
 
 Doug




 host1
   neutron server
   nova-api

 host2
   neutron l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

 Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding new mode, we can still use existing config
 files.

 why not help to make Chef or Puppet better and cover the more OpenStack
 use-cases rather than add yet another competing system [Doug, Morgan]

 I believe this system is not competing system.
 The key point is we should have some standard api to access such services.
 As Oleg suggested, we can use sql-server, kv-store, or chef or puppet
 as a backend system.

 Best
 Nachi


 2014/1/9 Morgan Fainberg m...@metacloud.com:
 I agree with Doug’s question, but also would extend the train of thought
 to
 ask why not help to make Chef or Puppet better and cover the more
 OpenStack
 use-cases rather than add yet another competing system?

 Cheers,
 Morgan

 On January 9, 2014 at 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Thu, Jan 9, 2014 at 10:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration systems (Chef, Puppet,
 etc.) is integration with
 OpenStack services.
 There are cases where a process should know the config value on the other hosts.
 If we had a centralized config storage API, we could solve this issue.


Technically, that is already implemented in TripleO: configuration params
are stored in Heat template metadata, and os-*-config scripts apply
changes to those parameters on the nodes. I'm not sure if that could help
solve the use case you describe, as overcloud nodes probably won't have
access to the undercloud Heat server. But that counts as a centralized storage
of configuration information, from my standpoint.

--
Best regards,
Oleg Gelbukh


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Bob

2014/1/9 Robert Kukura rkuk...@redhat.com:
 On 01/09/2014 02:34 PM, Nachi Ueno wrote:
 Hi Doug

 2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 Thank you for your input.

 The key difference from external configuration systems (Chef, Puppet,
 etc.) is integration with
 OpenStack services.
 There are cases where a process should know the config value on the other hosts.
 If we had a centralized config storage API, we could solve this issue.

 One example of such a case is the neutron + nova VIF parameter
 configuration regarding security groups.
 The workflow is something like this:

 nova asks the neutron server for VIF configuration information.
 The neutron server asks for configuration from the neutron l2-agent on the same host
 as nova-compute.


 That extra round trip does sound like a potential performance bottleneck,
 but sharing the configuration data directly is not the right solution. If
 the configuration setting names are shared, they become part of the
 integration API between the two services. Nova should ask neutron how to
 connect the VIF, and it shouldn't care how neutron decides to answer that
 question. The configuration setting is an implementation detail of neutron
 that shouldn't be exposed directly to nova.

 I agree for the nova - neutron interface.
 However, the neutron server and neutron l2-agent configurations depend on
 each other.

 Running a configuration service also introduces what could be a single point
 of failure for all of the other distributed services in OpenStack. An
 out-of-band tool like chef or puppet doesn't result in the same sort of
 situation, because the tool does not have to be online in order for the
 cloud to be online.

 We can choose the same implementation (copy information to a local cache, etc.).

 Thank you for your input; it helped me organize my thoughts.
 My proposal can be split into two bps.

 [BP1] Conf API for other processes
 Provide a standard way to know the config value of another process on
 the same host or another host.

 - API Example:
 conf.host('host1').firewall_driver

 - Conf file based implementation:
 config for each host will be placed here:
  /etc/project/conf.d/{hostname}/agent.conf

 [BP2] Multiple backend for storing config files

 Currently, we have only file-based configuration.
 In this bp, we are extending support for config storage.
 - KVS
 - SQL
 - Chef - Ohai

 I'm not opposed to making oslo.config support pluggable back ends, but I
 don't think BP2 could be depended upon to satisfy a requirement for a
 global view of arbitrary config information, since this wouldn't be
 available if a file-based backend were selected.

We can do it even if it's a file-based backend.
Chef or puppet will copy some configuration to the server side and the agent side.
The server reads the agent configuration stored on the server.

 As far as the neutron server getting info it needs about running L2
 agents, this is currently done via the agents_db RPC, where each agent
 periodically sends certain info to the server and the server stores it
 in the DB for subsequent use. The same mechanism is also used for L3 and
 DHCP agents, and probably for *aaS agents. Some agent config information
 is included, as well as some stats, etc.. This mechanism does the job,
 but could be generalized and improved a bit. But I think this flow of
 information is really for specialized purposes - only a small subset of
 the config info is passed, and other info is passed that doesn't come
 from config.

I agree here.
We need a generic framework for:

- static config for server and agent
- dynamic resource information and updates
- stats or liveness updates

Today, we are re-inventing these frameworks in different processes.

 My only real concern with using this current mechanism is that some of
 the information (stats and liveness) is very dynamic, while other
 information (config) is relatively static. It's a bit wasteful to send
 all of it every couple seconds, but at least liveness (heartbeat) info
 does need to be sent frequently. BP1 sounds like it could address the
 static part, but I'm still not sure config file info is the only
 relatively static info that might need to be shared. I think neutron can
 stick with its agents_db RPC, DB, and API extension for now, and improve
 it as needed.

I got it.
It looks like the community tends not to like this idea, so it's not
good timing
to do this in a generic way.
Let's work on this in neutron for now.

Doug, Jeremy, Jay, Greg:
Thank you for your input! I'll obsolete this bp.

Nachi

 -Bob


 Best
 Nachi

 Doug




 host1
   neutron server
   nova-api

 host2
   neutron l2-agent
   nova-compute

 In this case, a process should know the config value in the other hosts.

 Replying some questions

 Adding a config server would dramatically change the way that
 configuration management tools would interface with OpenStack services.
 [Jay]

 Since this bp is just adding 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Jay Pipes
Hope you don't mind, I'll jump in here :)

On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy
 
 Don't you think it is burden for operators if we should choose correct
 combination of config for multiple nodes even if we have chef and
 puppet?

It's more of a burden for operators to have to configure OpenStack in
multiple ways.

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.

Could you explain this a bit more? I generally view packages and things
like requirements.txt and setup.py [extra] sections as the canonical way
of resolving dependencies. An example here would be great.

 We can solve this issue if we have a standard way to know the config
 value of other process in the other host.
 
 Something like this.
 self.conf.host('host1').firewall_driver

This is already in every configuration management system I can think of.

In Chef, a cookbook can call out to search (or partial search) for the
node in question and retrieve such information (called attributes in
Chef-world).

In Puppet, one would use Hiera to look up another node's configuration.

In Ansible, one would use a Dynamic Inventory.

In Salt, you'd use Salt Mine.

 Then we can have a Chef- or file-based config backend for this, for example.

I actually think you're thinking about this in the reverse way to the
way operators think about things. Operators want all configuration data
managed by a singular system -- their configuration management system.
Adding a different configuration data manager into the mix is the
opposite of what most operators would like, at least, that's just in my
experience.

All the best,
-jay




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Oleg Gelbukh
On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:

 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  How do you see these interactions defined?  For instance, if I deploy
  a custom driver for Neutron, does that mean I also have to patch
  everything that will be talking to it (Nova, for instance) so they can
  agree on compatibility?

 Nova / Neutron talk via the neutron API, so it should be OK because we
 are taking care of
 backward compatibility in the REST API.

 The point in my example is neutron server + neutron l2 agent sync.


What about doing it the other way round, i.e. allowing one server to query
certain configuration parameter(s) from the other via RPC? I believe I've
seen such a proposal quite some time ago in Nova blueprints, but with no
actual implementation.

--
Best regards,
Oleg Gelbukh



  Also, I know that I run what is probably a more complicated cluster
  than most production clusters, but I can't think of very many
  configuration options that are globally in sync across the cluster.
  Hypervisors, network drivers, mysql servers, API endpoints...they all
  might vary between hosts/racks/etc.

 Supporting such a heterogeneous environment is a purpose of this bp.
 Configuration dependency is a pain point for me, and it gets even worse
 if the environment is heterogeneous.

 I also have some experience running OpenStack clusters, but it is still
 a pain for me..

 My experience is something like this
 # Wow, new release! ohh this chef repo didn't supported..
 # hmm i should modify chef recipe.. hmm debug.. debug..


  On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
  Hi Jeremy
 
  Don't you think it is a burden for operators to choose the correct
  combination of configs for multiple nodes, even if we have Chef and
  Puppet?
 
  If we have some constraint or dependency in configurations, such logic
  should be in openstack source code.
  We can solve this issue if we have a standard way to know the config
  value of another process on another host.
 
  Something like this.
  self.conf.host('host1').firewall_driver
 
  Then we can have a Chef- or file-based config backend for this, for
  example.
 
  Best
  Nachi
 
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  +1 to Jay.  Existing tools are both better suited to the job and work
  quite well in their current state.  To address Nachi's first example,
  there's nothing preventing a Nova node in Chef from reading Neutron's
  configuration (either by using a (partial) search or storing the
  necessary information in the environment rather than in roles).  I
  assume Puppet offers the same.  Please don't re-invent this hugely
  complicated wheel.
 
  On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  Hi folks
  
   OpenStack processes tend to have many config options, and many hosts.
   It is a pain to manage these tons of config options.
   Centralizing this management helps operation.
  
   We can use Chef- or Puppet-like tools; however,
   sometimes each process depends on another process's configuration.
   For example, nova depends on neutron configuration, etc.
  
   My idea is to have a config server in oslo.config, and let cfg.CONF
   get its config from the server.
  This way has several benefits.
  
   - We can get centralized management without modification to each
   project (nova, neutron, etc.)
   - We can provide a Horizon UI for configuration
  
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I would very much appreciate any comments on this.
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
  Also, I think it would be more complex than just having a server that
  provides the configs. It'll need authentication like all other
  services in OpenStack, and perhaps even support for encryption.
 
  I like the idea of a config registry but, as mentioned above, IMHO it
  ought to live under its own project.
 
  Hi Nati and Flavio!
 
  So, I'm -1 on this idea, just because I think it belongs in the realm
 of
  configuration management tooling (Chef/Puppet/Salt/Ansible/etc). Those
  tools are built to manage multiple configuration files and changes in
  them. Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack
 services.
  Instead of managing the config file templates as all of the tools
  currently do, the tools would essentially need to forgo the
  tried-and-true INI files and instead write a bunch of code in order to
  deal with REST API set/get operations for changing configuration data.
 
  In summary, while I agree that OpenStack services have an absolute TON
  of configurability -- for good and bad -- there are ways to improve
 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Jay

2014/1/9 Jay Pipes jaypi...@gmail.com:
 Hope you don't mind, I'll jump in here :)
I'll never mind to discuss with you :)

 On Thu, 2014-01-09 at 11:08 -0800, Nachi Ueno wrote:
 Hi Jeremy

 Don't you think it is a burden for operators to choose the correct
 combination of configs for multiple nodes, even if we have Chef and
 Puppet?

 It's more of a burden for operators to have to configure OpenStack in
 multiple ways.

This is a separate discussion from the pain of dependent configuration
across multiple nodes.

 If we have some constraint or dependency in configurations, such logic
 should be in openstack source code.

 Could you explain this a bit more? I generally view packages and things
 like requirements.txt and setup.py [extra] sections as the canonical way
 of resolving dependencies. An example here would be great.

That's package dependencies. I'm talking about configuration
dependencies or constraints.
For example, if we want to use VLAN with Neutron,
we need consistent configuration in the neutron server, nova-compute,
and the L2 agent.

We get feedback that this is a burden for operators.

Then the Neutron team started working on the port binding extension to
reduce this burden.
This extension lets Nova ask Neutron for VIF configuration, so we can
remove redundant network configuration from nova.conf.
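The port-binding flow described above can be sketched roughly as follows. The client class is a stand-in for python-neutronclient so the sketch is self-contained; the binding:vif_type attribute mirrors Neutron's real binding extension, but the function name and port data are illustrative:

```python
# Sketch of the port-binding idea: at VIF plug time Nova asks the Neutron
# API how to plug the port, instead of duplicating network driver settings
# in nova.conf. FakeNeutronClient stands in for python-neutronclient.

class FakeNeutronClient(object):
    def show_port(self, port_id):
        # A real client would issue GET /v2.0/ports/{port_id} here.
        return {'port': {'id': port_id, 'binding:vif_type': 'ovs'}}

def plug_vif(neutron, port_id):
    port = neutron.show_port(port_id)['port']
    # Neutron, not a nova.conf option, decides how the VIF is plugged.
    return port['binding:vif_type']

vif_type = plug_vif(FakeNeutronClient(), 'some-port-uuid')
print(vif_type)  # -> ovs
```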


 We can solve this issue if we have a standard way to know the config
 value of another process on another host.

 Something like this.
 self.conf.host('host1').firewall_driver

 This is already in every configuration management system I can think of.

Yes, I agree. But we can't access it from inside the OpenStack code.
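For illustration, the accessor proposed earlier (self.conf.host('host1').firewall_driver) might look like the following shim. The class names are hypothetical and nothing like this exists in oslo.config today; a real backend would read a Chef, Hiera, or file source instead of the in-memory dict used here:

```python
# Hypothetical sketch of the proposed host-scoped config accessor.
# The dict backend stands in for a file or a CM tool's data store.

class HostConf(object):
    """Read-only attribute view of one host's option values."""
    def __init__(self, options):
        self._options = options

    def __getattr__(self, name):
        try:
            return self._options[name]
        except KeyError:
            raise AttributeError(name)

class ClusterConf(object):
    """Cluster-wide registry keyed by hostname."""
    def __init__(self, backend):
        self._backend = backend

    def host(self, hostname):
        return HostConf(self._backend[hostname])

conf = ClusterConf({'host1': {'firewall_driver': 'iptables_hybrid'}})
print(conf.host('host1').firewall_driver)  # -> iptables_hybrid
```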

 In Chef, a cookbook can call out to search (or partial search) for the
 node in question and retrieve such information (called attributes in
 Chef-world).

 In Puppet, one would use Hiera to look up another node's configuration.

 In Ansible, one would use a Dynamic Inventory.

 In Salt, you'd use Salt Mine.

 Then we can have a Chef- or file-based config backend for this, for
 example.

 I actually think you're approaching this the reverse of the way
 operators think about things. Operators want all configuration data
 managed by a singular system -- their configuration management system.
 Adding a different configuration data manager into the mix is the
 opposite of what most operators would like; at least, that's been my
 experience.

My point is to let OpenStack access that single configuration management
system.
Also, I want to reduce redundant configuration between multiple nodes,
and hopefully we could have a generic framework to do this.

Nachi

 All the best,
 -jay




Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
Hi Oleg

2014/1/9 Oleg Gelbukh ogelb...@mirantis.com:
 On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:

 2014/1/9 Jeremy Hanmer jer...@dreamhost.com:

  How do you see these interactions defined?  For instance, if I deploy
  a custom driver for Neutron, does that mean I also have to patch
  everything that will be talking to it (Nova, for instance) so they can
  agree on compatibility?

 Nova talks with the Neutron API, so it should be OK because we are
 taking care of backward compatibility in the REST API.

 The point in my example is neutron server + neutron l2 agent sync.


 What about doing it the other way round, i.e. allow one server to query
 certain configuration parameter(s) from the other via RPC? I believe I've
 seen such proposal quite some time ago in Nova blueprints, but with no
 actual implementation.

I agree. This is my current choice.

 --
 Best regards,
 Oleg Gelbukh



  Also, I know that I run what is probably a more complicated cluster
  than most production clusters, but I can't think of very many
  configuration options that are globally in sync across the cluster.
  Hypervisors, network drivers, mysql servers, API endpoints...they all
  might vary between hosts/racks/etc.

 Supporting such a heterogeneous environment is a purpose of this bp.
 Configuration dependency is a pain point for me, and it gets even worse
 if the environment is heterogeneous.

 I also have some experience running OpenStack clusters, but it is still
 a pain for me..

 My experience is something like this
 # Wow, new release! ohh this chef repo didn't supported..
 # hmm i should modify chef recipe.. hmm debug.. debug..


  On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
  Hi Jeremy
 
  Don't you think it is a burden for operators to choose the correct
  combination of configs for multiple nodes, even if we have Chef and
  Puppet?
 
  If we have some constraint or dependency in configurations, such logic
  should be in openstack source code.
  We can solve this issue if we have a standard way to know the config
  value of another process on another host.
 
  Something like this.
  self.conf.host('host1').firewall_driver
 
  Then we can have a Chef- or file-based config backend for this, for
  example.
 
  Best
  Nachi
 
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
  +1 to Jay.  Existing tools are both better suited to the job and work
  quite well in their current state.  To address Nachi's first example,
  there's nothing preventing a Nova node in Chef from reading Neutron's
  configuration (either by using a (partial) search or storing the
  necessary information in the environment rather than in roles).  I
  assume Puppet offers the same.  Please don't re-invent this hugely
  complicated wheel.
 
  On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
  On 08/01/14 17:13 -0800, Nachi Ueno wrote:
  Hi folks
  
   OpenStack processes tend to have many config options, and many hosts.
   It is a pain to manage these tons of config options.
   Centralizing this management helps operation.
  
   We can use Chef- or Puppet-like tools; however,
   sometimes each process depends on another process's configuration.
   For example, nova depends on neutron configuration, etc.
  
   My idea is to have a config server in oslo.config, and let cfg.CONF
   get its config from the server.
  This way has several benefits.
  
   - We can get centralized management without modification to each
   project (nova, neutron, etc.)
  - We can provide horizon for configuration
  
  This is bp for this proposal.
  https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
  
   I would very much appreciate any comments on this.
 
  I've thought about this as well. I like the overall idea of having a
  config server. However, I don't like the idea of having it within
  oslo.config. I'd prefer oslo.config to remain a library.
 
   Also, I think it would be more complex than just having a server that
   provides the configs. It'll need authentication like all other
   services in OpenStack, and perhaps even support for encryption.
 
   I like the idea of a config registry but, as mentioned above, IMHO it
   ought to live under its own project.
 
  Hi Nati and Flavio!
 
  So, I'm -1 on this idea, just because I think it belongs in the realm
  of
  configuration management tooling (Chef/Puppet/Salt/Ansible/etc).
  Those
  tools are built to manage multiple configuration files and changes in
  them. Adding a config server would dramatically change the way that
  configuration management tools would interface with OpenStack
  services.
   Instead of managing the config file templates as all of the tools
   currently do, the tools would essentially need to forgo the
   tried-and-true INI files and instead write a bunch of code in order to
   deal with REST API set/get operations for changing configuration data.
 
  In summary, while I 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Doug Hellmann
On Thu, Jan 9, 2014 at 3:56 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Oleg

 2014/1/9 Oleg Gelbukh ogelb...@mirantis.com:
  On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 
   How do you see these interactions defined?  For instance, if I deploy
   a custom driver for Neutron, does that mean I also have to patch
   everything that will be talking to it (Nova, for instance) so they can
   agree on compatibility?
 
  Nova talks with the Neutron API, so it should be OK because we are
  taking care of backward compatibility in the REST API.
 
  The point in my example is neutron server + neutron l2 agent sync.
 
 
  What about doing it the other way round, i.e. allow one server to query
  certain configuration parameter(s) from the other via RPC? I believe I've
  seen such proposal quite some time ago in Nova blueprints, but with no
  actual implementation.

 I agree. This is my current choice.


But my point is that you shouldn't be thinking about this as querying
configuration settings. The fact that a piece of information one service
needs is stored in the configuration file of another service is an
implementation detail. It might move. The name of the option could change.
The way the value is determined might change.

So don't tie yourself to the configuration setting location and name of
another service. Ask the service the question you have, and let it provide
an answer. Make it a specific RPC call, so the input parameters can be
versioned and the response type can be versioned.
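Doug's advice might look roughly like the client-side proxy pattern OpenStack services use for RPC. The transport below is a local stub so the sketch stays self-contained, and the method name and payload are illustrative, not a real oslo.messaging API:

```python
# Sketch of a versioned RPC question, per the advice above: ask the
# owning service, don't read its config file. Modeled loosely on the
# oslo.messaging RPCClient pattern; method and field names are invented.

class L2AgentAPI(object):
    """Client-side proxy; the pinned version lets the request and
    response formats evolve independently of any config option names."""
    API_VERSION = '1.0'

    def __init__(self, transport):
        self._transport = transport

    def get_firewall_capabilities(self, host):
        return self._transport.call(version=self.API_VERSION,
                                    method='get_firewall_capabilities',
                                    host=host)

class StubTransport(object):
    """Stands in for an oslo.messaging transport; answers the question
    from the service's internal model, wherever that happens to live."""
    def call(self, version, method, **kwargs):
        assert version == '1.0', 'unknown request version'
        return {'hybrid_plug': True}

api = L2AgentAPI(StubTransport())
caps = api.get_firewall_capabilities('compute-1')
print(caps['hybrid_plug'])  # -> True
```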

Doug




  --
  Best regards,
  Oleg Gelbukh
 
 
 
   Also, I know that I run what is probably a more complicated cluster
   than most production clusters, but I can't think of very many
   configuration options that are globally in sync across the cluster.
   Hypervisors, network drivers, mysql servers, API endpoints...they all
   might vary between hosts/racks/etc.
 
  Supporting such a heterogeneous environment is a purpose of this bp.
  Configuration dependency is a pain point for me, and it gets even worse
  if the environment is heterogeneous.

  I also have some experience running OpenStack clusters, but it is still
  a pain for me..
 
  My experience is something like this
  # Wow, new release! ohh this chef repo didn't supported..
  # hmm i should modify chef recipe.. hmm debug.. debug..
 
 
   On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
   Hi Jeremy
  
   Don't you think it is a burden for operators to choose the correct
   combination of configs for multiple nodes, even if we have Chef and
   Puppet?
  
   If we have some constraint or dependency in configurations, such
 logic
   should be in openstack source code.
   We can solve this issue if we have a standard way to know the config
   value of another process on another host.
  
   Something like this.
   self.conf.host('host1').firewall_driver
  
   Then we can have a Chef- or file-based config backend for this, for
   example.
  
   Best
   Nachi
  
  
   2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
   +1 to Jay.  Existing tools are both better suited to the job and
 work
   quite well in their current state.  To address Nachi's first
 example,
   there's nothing preventing a Nova node in Chef from reading
 Neutron's
   configuration (either by using a (partial) search or storing the
   necessary information in the environment rather than in roles).  I
   assume Puppet offers the same.  Please don't re-invent this hugely
   complicated wheel.
  
   On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com
 wrote:
   On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
   Hi folks
   
    OpenStack processes tend to have many config options, and many
    hosts.
    It is a pain to manage these tons of config options.
    Centralizing this management helps operation.
   
    We can use Chef- or Puppet-like tools; however,
    sometimes each process depends on another process's configuration.
    For example, nova depends on neutron configuration, etc.
   
    My idea is to have a config server in oslo.config, and let cfg.CONF
    get its config from the server.
   This way has several benefits.
   
   - We can get centralized management without modification on each
   projects ( nova, neutron, etc)
   - We can provide horizon for configuration
   
   This is bp for this proposal.
   
 https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
   
    I would very much appreciate any comments on this.
  
   I've thought about this as well. I like the overall idea of
 having a
   config server. However, I don't like the idea of having it within
   oslo.config. I'd prefer oslo.config to remain a library.
  
   Also, I think it would be more complex than just having a server
   that
   provides the configs. It'll need authentication like all other
   services in OpenStack and perhaps even support of encryption.
  
   I like the idea 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Nachi Ueno
2014/1/9 Doug Hellmann doug.hellm...@dreamhost.com:



 On Thu, Jan 9, 2014 at 3:56 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Oleg

 2014/1/9 Oleg Gelbukh ogelb...@mirantis.com:
  On Fri, Jan 10, 2014 at 12:18 AM, Nachi Ueno na...@ntti3.com wrote:
 
  2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
 
   How do you see these interactions defined?  For instance, if I deploy
   a custom driver for Neutron, does that mean I also have to patch
   everything that will be talking to it (Nova, for instance) so they
   can
   agree on compatibility?
 
  Nova talks with the Neutron API, so it should be OK because we are
  taking care of backward compatibility in the REST API.
 
  The point in my example is neutron server + neutron l2 agent sync.
 
 
  What about doing it the other way round, i.e. allow one server to query
  certain configuration parameter(s) from the other via RPC? I believe
  I've
  seen such proposal quite some time ago in Nova blueprints, but with no
  actual implementation.

 I agree. This is my current choice.


 But my point is that you shouldn't be thinking about this as querying
 configuration settings. The fact that a piece of information one service
 needs is stored in the configuration file of another service is an
 implementation detail. It might move. The name of the option could change.
 The way the value is determined might change.

I agree, but maybe my definition of configuration is different from yours.
For me, APIs and configurations are all reflections of internal models.
They are just different ways to configure those models.
If configuration always means some lower-level implementation parameter
to you, I would call that a model.
 So don't tie yourself to the configuration setting location and name of
 another service. Ask the service the question you have, and let it provide
 an answer. Make it a specific RPC call, so the input parameters can be
 versioned and the response type can be versioned.

+1 for versioning.
However, adding more and more RPC calls makes the system harder to
manage and less stable.
It makes processes tightly coupled, and that makes them hard to debug.

We should have a single store of logical models (nova-api for
compute, and the neutron server for networking, for example). Then all
services should work to realize those logical models.

Nachi


 Doug




  --
  Best regards,
  Oleg Gelbukh
 
 
 
   Also, I know that I run what is probably a more complicated cluster
   than most production clusters, but I can't think of very many
   configuration options that are globally in sync across the cluster.
   Hypervisors, network drivers, mysql servers, API endpoints...they all
   might vary between hosts/racks/etc.
 
  Supporting such a heterogeneous environment is a purpose of this bp.
  Configuration dependency is a pain point for me, and it gets even worse
  if the environment is heterogeneous.

  I also have some experience running OpenStack clusters, but it is still
  a pain for me..
 
  My experience is something like this
  # Wow, new release! ohh this chef repo didn't supported..
  # hmm i should modify chef recipe.. hmm debug.. debug..
 
 
   On Thu, Jan 9, 2014 at 11:08 AM, Nachi Ueno na...@ntti3.com wrote:
   Hi Jeremy
  
    Don't you think it is a burden for operators to choose the correct
    combination of configs for multiple nodes, even if we have Chef and
    Puppet?
  
   If we have some constraint or dependency in configurations, such
   logic
   should be in openstack source code.
    We can solve this issue if we have a standard way to know the config
    value of another process on another host.
  
   Something like this.
   self.conf.host('host1').firewall_driver
  
    Then we can have a Chef- or file-based config backend for this, for
    example.
  
   Best
   Nachi
  
  
   2014/1/9 Jeremy Hanmer jer...@dreamhost.com:
   +1 to Jay.  Existing tools are both better suited to the job and
   work
   quite well in their current state.  To address Nachi's first
   example,
   there's nothing preventing a Nova node in Chef from reading
   Neutron's
   configuration (either by using a (partial) search or storing the
   necessary information in the environment rather than in roles).  I
   assume Puppet offers the same.  Please don't re-invent this hugely
   complicated wheel.
  
   On Thu, Jan 9, 2014 at 10:28 AM, Jay Pipes jaypi...@gmail.com
   wrote:
   On Thu, 2014-01-09 at 10:23 +0100, Flavio Percoco wrote:
   On 08/01/14 17:13 -0800, Nachi Ueno wrote:
   Hi folks
   
    OpenStack processes tend to have many config options, and many
    hosts.
    It is a pain to manage these tons of config options.
    Centralizing this management helps operation.
   
    We can use Chef- or Puppet-like tools; however,
    sometimes each process depends on another process's configuration.
    For example, nova depends on neutron configuration, etc.
   
    My idea is to have a config server in oslo.config, and let cfg.CONF
    get its config from the server.
   This way has several benefits.