[Openstack] Openstack networking failure after server reboot

2012-11-07 Thread Aniruddha Khadkikar
Hi Stackers,

We have a small Openstack lab using three servers. The components are
distributed as:
1. Network controller - Quantum L3 and DHCP agents, L2 agent, Nova, Openvswitch
2. Cloud controller - Quantum server, L2 agent, Nova, Openvswitch,
Dashboard, API, MySQL, Rabbitmq
3. Compute node - Nova, Openvswitch, L2 agent

The network is set up in the following way:
1. Each server has four NICs. We are using only one public IP and one
private IP for the openstack setup. We have a private switch for
inter-VM communication.
2. We are using GRE tunnelling and openvswitch.
3. br-int is assigned an IP address.
4. br-ex is configured for floating IP allocation.

Everything works perfectly when we set it up from scratch.

Each VM gets its private IP assigned, the NAT-based floating IP is
also assigned, and we are able to SSH into it. VMs also get created on
all three hosts.

So we are confident that we have the right configuration in place, as
we have a fully operational Openstack implementation using gre-tunnels.

In order to test the resilience of the setup, we decided to reboot the
servers to see if everything would come up again. We faced some
service-dependency errors, so after the server reboot we restarted the
services in the proper order: on the cloud controller we started
mysql, rabbitmq, keystone, openvswitch and quantum-server. This was
followed by starting openvswitch and the L3, DHCP and L2 agents on the
network controller. After that we started the L2 agents on all the
remaining servers, followed by nova. There is some confusion about how
to orchestrate the right order of services; this is possibly something
we will need to work on in future.

After this, nova works properly, i.e. we are able to create VMs and
the pre-existing ones are also started (virsh list shows the VMs too).
ovs-vsctl show lists all the interfaces as before. However, we are
unable to access the VMs. On logging into a VM we do not see any IP
address assigned, as the VM is unable to contact the DHCP server.

The questions that come up are:
* What could change after a reboot that would compromise a running
network configuration?
* Could there be issues with the TAP interfaces that were created?
What is the best way to troubleshoot such a situation?
* Has anyone seen similar behaviour, and is it specific to gre-tunnels
or to the openvswitch plugin we are using?
* On reboot of the network controller, are any steps required to
ensure that Openstack continues to function properly?

The setup has failed twice on reboot. For the second iteration we are
assigning the IP to br-int on startup so that openvswitch does not
give errors.

Regards
Aniruddha

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack networking failure after server reboot

2012-11-07 Thread Aniruddha Khadkikar
On Wed, Nov 7, 2012 at 5:52 PM, Gary Kotton gkot...@redhat.com wrote:
 On 11/07/2012 11:47 AM, Aniruddha Khadkikar wrote:


 Can you please look in the log files for Quantum and see if there are
 any errors?

 There is an open issue with Quantum and QPID after rebooting - the
 Quantum service hangs. On the host running Quantum, if you do
 netstat -an | grep 9696, do you see anything?
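Gary's check can also be scripted; a minimal sketch (the host is illustrative, and 9696 is assumed to be the default Quantum API port in this deployment):

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether quantum-server is accepting connections on
# its (assumed) default API port.
# is_listening("127.0.0.1", 9696)
```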


Unfortunately we recreated the cloud again. This time, however, we
have not assigned an IP to the br-int interface. It is currently
working; we will do the reboot today, and by evening I will provide
details of the errors.
In the syslog on the network node we started seeing a lot of:

Nov  7 12:59:30  dnsmasq-dhcp[5722]: last message repeated 3 times
Nov  7 12:59:30 us000901 dnsmasq-dhcp[5746]: DHCPDISCOVER(tap224fcabc-70) fa:16:3e:52:38:ce
Nov  7 12:59:30 us000901 dnsmasq-dhcp[5722]: DHCPDISCOVER(tap7736e97e-5c) fa:16:3e:52:38:ce no address available
Nov  7 12:59:30 us000901 dnsmasq-dhcp[5746]: DHCPOFFER(tap224fcabc-70) 172.24.2.11 fa:16:3e:52:38:ce
Nov  7 12:59:30 us000901 dnsmasq-dhcp[5722]: DHCPDISCOVER(tap7736e97e-5c) fa:16:3e:52:38:ce no address available
Nov  7 12:59:39 us000901 dnsmasq-dhcp[5722]: DHCPDISCOVER(tap7736e97e-5c) fa:16:3e:52:38:ce no address available
Nov  7 12:59:39 us000901 dnsmasq-dhcp[5746]: DHCPDISCOVER(tap224fcabc-70) fa:16:3e:52:38:ce
Nov  7 12:59:39 us000901 dnsmasq-dhcp[5746]: DHCPOFFER(tap224fcabc-70) 172.24.2.11 fa:16:3e:52:38:ce
Nov  7 12:59:57 us000901 dnsmasq-dhcp[5722]: DHCPDISCOVER(tap7736e97e-5c) fa:16:3e:52:38:ce no address available
Nov  7 12:59:57 us000901 dnsmasq-dhcp[5746]: DHCPDISCOVER(tap224fcabc-70) fa:16:3e:52:38:ce
Nov  7 12:59:57 us000901 dnsmasq-dhcp[5746]: DHCPOFFER(tap224fcabc-70) 172.24.2.11 fa:16:3e:52:38:ce

The above activity is accompanied by near-100% CPU usage for the kvm
processes and dnsmasq.

The Quantum dhcp log relevant part is at http://pastebin.com/GmksGeK6
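To see which dnsmasq instance is failing, the syslog excerpt above can be summarized per PID and tap interface; a rough sketch, assuming the log lines have the format quoted above:

```python
import re
from collections import Counter

# Matches lines like:
#   ... dnsmasq-dhcp[5722]: DHCPDISCOVER(tap7736e97e-5c) fa:16:3e:52:38:ce no address available
LOG_RE = re.compile(
    r"dnsmasq-dhcp\[(?P<pid>\d+)\]: (?P<event>DHCPDISCOVER|DHCPOFFER)"
    r"\((?P<iface>[^)]+)\).*?(?P<fail>no address available)?$"
)

def summarize(lines):
    """Count offers and failed discovers per (pid, interface)."""
    offers, failures = Counter(), Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue  # skip non-DHCP lines, e.g. "last message repeated"
        key = (m.group("pid"), m.group("iface"))
        if m.group("fail"):
            failures[key] += 1
        elif m.group("event") == "DHCPOFFER":
            offers[key] += 1
    return offers, failures
```

An interface that accumulates "no address available" failures but no offers points at a dnsmasq instance whose subnet allocation was lost across the reboot.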

Regards
Aniruddha



Re: [Openstack] Distributed configuration database

2012-11-04 Thread Aniruddha Khadkikar
On Nov 4, 2012 8:16 PM, Nah, Zhongyue zhongyue@intel.com wrote:

 https://blueprints.launchpad.net/nova/+spec/deployer-friendly-confs

 This blueprint seems to be planning to implement what is being discussed
for Nova.

Marvellous. I believe this discussion adds to and reinforces the ideas
proposed, and I hope this blueprint is taken up for the next release!

 -zhongyue

 Sent from my iPhone

 On Nov 4, 2012, at 3:26 PM, Gary Kotton gkot...@redhat.com wrote:


 It would also be nice if one could change configuration settings at run
time instead of having to restart a process.



Re: [Openstack] How to know capability of cloud resource pool?

2012-11-04 Thread Aniruddha Khadkikar
On Sun, Nov 4, 2012 at 5:45 AM, Ray Sun qsun01...@cienet.com.cn wrote:
 Is there any method to get the resource pool capability (CPU/Memory/Storage)
 in the cloud? Can OpenStack auto-detect the resources when a new node
 joins in?
 If not, is there a roadmap to implement it?


My thoughts exactly. Last week we began working with quotas for each
tenant, and it struck me too that there does not seem to be a way,
through Openstack, to understand the total resource capacity of the
cloud. This would help in benchmarking how efficiently the currently
provisioned resources are being utilized and whether there is a need
to add or remove nodes from the cloud.
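Lacking such an API, the roll-up would have to be assembled from per-node figures; a toy sketch of the kind of aggregation meant here (all names and numbers are made up, not an actual Openstack API):

```python
def pool_capacity(nodes):
    """Sum capacity and usage counters across all nodes.

    `nodes` is a list of dicts holding total/used counts per resource.
    """
    totals = {}
    for node in nodes:
        for key, value in node.items():
            totals[key] = totals.get(key, 0) + value
    return totals

def utilization(totals, resource):
    """Fraction of a resource in use, e.g. utilization(t, 'vcpus')."""
    return totals["%s_used" % resource] / float(totals[resource])

# Hypothetical per-node figures (what a capacity API could report):
nodes = [
    {"vcpus": 16, "vcpus_used": 8, "memory_mb": 32768, "memory_mb_used": 8192},
    {"vcpus": 16, "vcpus_used": 4, "memory_mb": 32768, "memory_mb_used": 4096},
]
totals = pool_capacity(nodes)
```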

Cheers
Aniruddha

 Thanks.

 - Ray
 Yours faithfully, Kind regards.

 CIeNET Technologies (Beijing) Co., Ltd
 Email: qsun01...@cienet.com.cn
 Office Phone: +86-01081470088-7079
 Mobile Phone: +86-13581988291




Re: [Openstack] Distributed configuration database

2012-11-03 Thread Aniruddha Khadkikar
@Rob - excellent points. It would be good to know how often
configurations are changed in real-life deployments, so that the
question of network interruptions can be handled in a befitting way.
It is my impression that once the database has synced the information,
changes would be infrequent. Regarding your second point, the solution
lies in how the records are maintained. One can design the structure
to include a 'version' column along with a boolean attribute for
logical deletion of records. This would allow storing information
across different versions. Puppet has its use, but the main point is
that the metadata should lie within openstack and not solely outside
in a configuration management tool (i.e. Puppet). With so many
configuration parameters in quantum, nova, swift and cinder, we need
an easy way to query 'global' values and, if required, to specify
'local' values applicable to individual nodes, should such a need
arise. Areas where local values could be required include the location
of log files, different tuning parameters depending on hardware
configuration, etc.
The data store design can be adapted to hold node-level local values,
and the individual daemons would honour those local values on startup
if they are defined. Also, a common data store that can be queried (my
main point) within an openstack deployment would be extremely useful
for troubleshooting, rather than having to dig through each and every
configuration file (if I'm not using Puppet).

@Jon - I am happy that these ideas resonate with you. My main point is
that the metadata should be within the openstack implementation and
not outside. I am not very familiar with Puppet - is there a way to
query the parameters set in the conf file? I would think that Puppet
would be given a conf file to deploy; the values within the conf file
would still remain abstracted and not be readily available. Please
correct me if I'm wrong in my presumption. Having the parameters with
their default values in the data store would allow a better
understanding of the different configuration parameters. Also, if it's
in a database, then dependency and relationship rules, or even
constraints (permissible values), could be defined.
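A toy sketch of the versioned record structure described above (names are illustrative, not an actual Openstack schema): each row carries a version number and a logical-deletion flag, and a lookup returns the newest non-deleted value.

```python
class ConfigStore:
    """Toy versioned key-value store with logical deletion."""

    def __init__(self):
        self._rows = []  # (key, value, version, deleted)

    def set(self, key, value, version):
        self._rows.append((key, value, version, False))

    def delete(self, key, version):
        # Logical deletion: append a tombstone row, keep history intact.
        self._rows.append((key, None, version, True))

    def get(self, key, version=None):
        """Newest non-deleted value for key, at or below `version`."""
        candidates = [r for r in self._rows
                      if r[0] == key and (version is None or r[2] <= version)]
        if not candidates:
            return None
        latest = max(candidates, key=lambda r: r[2])
        return None if latest[3] else latest[1]
```

Because old rows are never physically removed, values can still be queried as they stood at any earlier release version.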

On Sat, Nov 3, 2012 at 12:08 PM, Robert Collins
robe...@robertcollins.net wrote:
 One thing to bear in mind when considering a network API for this -
 beyond the issue of dealing with network interruptions gracefully - is
 dealing with version skew: while deploying a new release of Openstack,
 the definition of truth may be different for each version, so you need
 to either have very high quality accept-old-configurations-code in
 openstack (allowing you to never need differing versions of truth), or
 you need a system (such as Puppet) that can parameterise what it
 delivers based on e.g. the software version in question.

 -Rob

 On Sat, Nov 3, 2012 at 8:17 AM, Jonathan Proulx j...@csail.mit.edu wrote:
 On Sat, Nov 03, 2012 at 12:19:58AM +0530, Aniruddha Khadkikar wrote:
 : However I feel that the parameters that
 :govern the behaviour of openstack components should be in a data store
 :that can be queried from a single data store. Also it would make
 :deployments less error prone.

 On one hand I agree having a single source of truth is appealing in
 many ways.  The simplicity of text configuration files and the shared
 nothing nature of having config local to each system is also very
 appealing.

 In my world my puppet manifest is my single source of truth which
 provides both a single config interface so there is no error prone
 manual duplication and also results in a fully distributed text
 configuration so the truth persists even if the source is down for a
 while.

 There's a number of things besides puppet to implement this type of
 management with but conceptually I very much think that is the right
 thing.

 -Jon

Re: [Openstack] Distributed configuration database

2012-11-03 Thread Aniruddha Khadkikar
On Sat, Nov 3, 2012 at 7:53 PM, andi abes andi.a...@gmail.com wrote:
 On Sat, Nov 3, 2012 at 9:30 AM, Aniruddha Khadkikar
 askhadki...@gmail.com wrote:
 @Rob - excellent points. It would be good to know in real life
 deployments how often are configurations changed so the question of
 network interruptions are handled in a befitting way. It is my
 impression that once the database has synced the information, changes
 would be infrequent. Regarding your second point, the solution lies in
 how the records are maintained. One can design the structure to
 include a 'version' column along with a boolean attribute for logical
 deletion of records. This would allow storing information across
 different versions. Puppet has its use, but the main point is that the
 metadata should lie within openstack and not solely outside in a
 configuration management tool (i.e. Puppet). With so many
 configuration parameters in quantum, nova, swift, cinder, we need an
 easy way to be able to query 'global' values and if required be able
 to specify 'local' values applicable to nodes also, if such a need
 arises. Areas where local values could be required could be location
 of log files, different tuning parameters depending on hardware
 configuration etc.

 From my experience, many of the problems encountered in deploying
 openstack have to do with what you call local parameters, which
 depend on the node's configuration: for nova, e.g., the name of the
 interface/bridge; for swift, e.g., the disks available on a node; and
 so on. The values that are common for the whole deployment are the
 easy (or at least easier) part ;)



 For troubleshooting you'd want to look at the resources available on
 the node and compare/match them to the configuration parameters.
 Typical configuration management systems (e.g. puppet, chef, juju,
 etc.) provide you that information, in a centralized location, with
 various querying capabilities. Additionally, once you've found the
 problem, you'd want to fix it... which is where CM systems shine.

 Other sources of problems that configuration management systems can
 help you with are:
 - dependencies - other python modules and OS packages required to
 make openstack happy.
 - disk, network and other local resources' configuration (e.g.
 interface/bridge config, disk formatting, etc.)

Agreed. Puppet scores at managing dependencies; no question about
that. Puppet is used for managing software that has its own metadata.
The way I have understood Puppet is that it uses this metadata as
'facts' to take decisions while deploying a 'recipe'. In our case the
metadata is stored in conf files. The sample conf file for Nova has
500 parameters, so in order to be able to query all known
configuration parameters (even if I'm happy with the defaults), my
conf file would need to have all of them listed. If the next version
of Nova has more, fewer and/or different configuration parameters,
then my recipes have to be revisited so that all new configuration
parameters are documented in Puppet.

I still feel Openstack components need to manage their metadata (conf
data) in a better way than conf files, and allow Puppet/Chef to query
this metadata to manage an Openstack implementation efficiently. I am
not arguing for or against a configuration management approach;
rather, I feel it would be safer in the long run for conf data to be
managed independently of any non-core component of openstack.

 I think there are good solutions out there, that provide more value
 than just a db for parameters...
 It might be worth your time to compare those to what would be gained
 by just a parameter store.


The parameter store is just one component of the implementation. The
larger issue concerns change management of the metadata 'dictionary'
(I borrow the term liberally, as I have a database background).



Re: [Openstack] Distributed configuration database

2012-11-03 Thread Aniruddha Khadkikar
On Sat, Nov 3, 2012 at 9:06 PM, Tim Bell tim.b...@cern.ch wrote:

 Puppet is great for this sort of thing. There are various ways of querying
 parameters and making choices about them. A typical example would be where
 you want to adjust a configuration parameter due to memory configuration or
 network.

I managed to get an overview of Puppet's approach. I agree that Puppet
is great for configuration management. In your experience, how is the
change in Nova configuration parameters from Essex to Folsom dealt
with? Is Puppet configured so that all known configurable parameters
for Nova are documented in a custom fact collector?
In our lab deployment our conf files only contain those parameters
that we want to define, and we're happy to work with the defaults for
the rest. It appears our approach may be incorrect, as we're
discarding the source data which Facter could mine.


 Writing the puppet configuration is not difficult... Puppetlabs have an
 excellent and actively maintained configuration suite on the puppetforge for
 OpenStack along with tutorial videos.

 Chef support is also there so there is choice. The DevOps panel gives
 some discussion around this
 (http://www.openstack.org/summit/san-diego-2012/openstack-summit-sessions/presentation/devops-panel)

 I think building an openstack specific database for configuration would be
 hard work to include all of the flexibility that Puppet offers.  It is worth
 having a good look at puppet or chef before starting this.


Tim - I am totally with you on this. It will definitely be a rethink
and a lot of hard work! My intention is not to replicate Puppet's
functionality; it's about how to manage the Openstack 'data
dictionary' in a simpler way than a large number of conf files. I am
trying to visualize what difficulties we would face in operations,
especially doing change management for successive releases of
Openstack, which might contain changes in the number and type of
configurable parameters and possibly changes in the default values.

 Tim






[Openstack] Distributed configuration database

2012-11-02 Thread Aniruddha Khadkikar
Hi Stackers,

Are there plans to keep all configuration parameters in a central
location, rather than having a lot of configuration files? By central
location I do not mean a single server; it could be a distributed
database, with each node, irrespective of purpose, having a copy of
the configurations. Each node can refer to the parameters it
requires.

This would ease understanding the myriad of configuration parameters
as elicited in the recent San Diego presentation that showed we have
now more than 500 configurable parameters in Nova
(http://www.openstack.org/summit/san-diego-2012/openstack-summit-sessions/presentation/pimp-my-cloud-nova-configuration-hints-and-tricks).

For configuration parameters, which are essentially key-value pairs,
using a nosql database would suffice.
Currently it's quite difficult to dig up the default values after
deployment. Or am I missing something here?
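A minimal sketch of the kind of layered lookup meant here (shipped defaults, deployment-wide 'global' values, then per-node 'local' overrides; all keys and values are illustrative, not real Nova settings):

```python
# Hypothetical shipped defaults a component could register in the store.
DEFAULTS = {"verbose": "false", "sql_connection": "sqlite:///nova.sqlite"}

def effective_config(defaults, global_cfg, local_cfg):
    """Merge the three layers; later (more specific) layers win."""
    merged = dict(defaults)
    merged.update(global_cfg)
    merged.update(local_cfg)
    return merged
```

With the defaults held in the store rather than implied by absent conf lines, the effective value of every parameter stays queryable after deployment.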

Br
Aniruddha



[Openstack] Openstack configuration database

2012-11-02 Thread Aniruddha Khadkikar
Are there plans to keep all configuration parameters in a central
location, rather than having a lot of configuration files? By central
location I do not mean a single server; it could be a distributed
database, with each node, irrespective of purpose, having a copy of
the configurations. Each node can refer to the parameters it requires.

This would ease understanding the myriad of configuration parameters
as elicited in the recent Summit presentation that showed we have
now more than 500 configurable parameters in Nova.

For configuration parameters, which are essentially key-value pairs,
using a nosql database would suffice. Currently it's quite difficult
to dig up the default values after deployment. Or am I missing
something here?

Aniruddha

I am sending this again as the earlier message appears to have been lost somehow



Re: [Openstack] Distributed configuration database

2012-11-02 Thread Aniruddha Khadkikar
Well, I'm not really sure if the configuration parameters are
hierarchical. I haven't given much thought to the internal
organization of the various parameters in all the configuration files
and their inter-dependencies. However, I feel that the parameters that
govern the behaviour of openstack components should be kept in a
single data store that can be queried. It would also make deployments
less error prone.

Aniruddha


On Fri, Nov 2, 2012 at 11:52 PM, Endre Karlson endre.karl...@gmail.com wrote:
 Are you thinking of something like a Hiera style thing?

 Endre.



Re: [Openstack] User experience and simplified configurations

2012-03-14 Thread Aniruddha Khadkikar
On Thu, Mar 15, 2012 at 9:46 AM, Debo Dutta (dedutta) dedu...@cisco.com wrote:

 Have you tinkered with devstack? It simplifies some of the issues you
 raised for a dev guy.

No, I have not used Devstack, as the purpose is to simulate a
close-to-production POC, not a deployment on a single machine. Also,
we wanted to go through the documentation in detail to increase our
understanding of the platform and implement the steps manually.

Regards,
Aniruddha




 From: openstack-bounces+dedutta=cisco@lists.launchpad.net On Behalf
 Of Aniruddha Khadkikar
 Sent: Wednesday, March 14, 2012 9:03 PM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] User experience and simplified configurations



 Hi,

 It's only recently that I have started exploring openstack, and the
 first thing that comes to my mind is the nature and amount of
 configuration required for the various components.
 The managed IT Deb packages for Diablo were very helpful in reaching
 a POC-level implementation involving a separate cloud controller,
 volume and glance with swift, and a compute node.

 Are there any plans to simplify the configuration files and develop
 command line wizards for a better user experience?

 I believe that with every release an incrementally better experience
 will help in greater adoption of the platform. For example, I could
 only get things working on my third trial, and that too led to
 problems due to EC2 not working until a project was added using
 nova-manage. I still remain confused between tenants and projects in
 Diablo. Logically, one customer (tenant) should be able to run
 multiple projects, so I was a bit surprised at them being treated as
 equivalent.

 A first target for simplification could be the various pipeline
 settings and the nova configuration; I have found these a bit
 complicated to understand.

 I have not started testing Essex yet.

 Regards
 Aniruddha
