Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread David Medberry
Also, if there is feedback, getting it in today or tomorrow would be most
effective.

Michael, this plan works for me/us. TWC. -d

On Tue, Jul 21, 2015 at 9:45 AM, Michael Still mi...@stillhq.com wrote:

 Heya,

 the nova developer mid-cycle meetup is happening this week. We've been
 talking through the operational impacts of cells v2, and thought it
 would be a good idea to mention them here and get your thoughts.

 First off, what is cells v2? The plan is that _every_ nova deployment
 will be running a new version of cells. The default will be a
 deployment of a single cell, which means that existing single-cell
 deployments will end up with an additional MySQL database required by
 cells. However, you won't be required to bring up any additional nova
 services at this point [1], as cells v2 lives inside the nova-api
 service.

 The advantage of this approach is that cells stops being a weird
 special case run only by big deployments. We're forced to implement
 everything in cells, instead of just the bits that a couple of bigger
 players cared enough about, and we're also forced to test it better.
 It also means that smaller deployments can grow into big deployments
 much more easily. Finally, it also simplifies the nova code, which
 will reduce our tech debt.

 This is a large block of work, so cells v2 won't be fully complete in
 Liberty. Cells v1 deployments will effectively run both cells v2 and
 cells v1 for this release, with the cells v2 code thinking that there
 is a single very large cell. We'll continue the transition for cells
 v1 deployments to pure cells v2 in the M release.

 So what's the actual question? We're introducing an additional MySQL
 database that every nova deployment will need to possess in Liberty.
 We talked through keeping this data in the existing database, but we
 weren't comfortable with that plan, for various reasons. This means
 that operators will need to do two db_syncs instead of one during
 upgrades. We worry that this will be annoying for single cell
 deployments.

 We therefore propose the following:

  - when they upgrade to Liberty, all operators will need to add a new
 connection string to their nova.conf configuring this new MySQL
 database; a release note will remind you to do this.
  - we will add a flag which indicates whether a db_sync should also
 imply a sync of the cells database. The default for this flag will be
 true.

 This means that you can still do these syncs separately if you want,
 but we're not forcing you to remember to do it if you just want it to
 always happen at the same time.

 Does this sound acceptable? Or are we overthinking this? We'd
 appreciate your thoughts.

 Cheers,
 Michael

 1: there is some talk about having a separate pool of conductors to
 handle the cells database, but this won't be implemented in Liberty.

 --
 Rackspace Australia



Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread gustavo panizzo (gfa)


On 2015-07-21 22:45, Michael Still wrote:
 We therefore propose the following:
 
  - when they upgrade to Liberty, all operators will need to add a new
 connection string to their nova.conf configuring this new MySQL
 database; a release note will remind you to do this.
  - we will add a flag which indicates whether a db_sync should also
 imply a sync of the cells database. The default for this flag will be
 true.
 
 This means that you can still do these syncs separately if you want,
 but we're not forcing you to remember to do it if you just want it to
 always happen at the same time.
 
 Does this sound acceptable? Or are we overthinking this? We'd
 appreciate your thoughts.

As an op, I would like to know if nova can work with the databases at
different schema levels:

nova api = N
nova db = N
nova cell db = M
nova compute = M  N

I have no problem doing the db updates in a certain order (example: nova
db before nova cell db), but I want to be able to keep running if the
second db upgrade fails and I need more time to fix it.
A grenade job in the gate testing this would be great.



-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

keybase: http://keybase.io/gfa



Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread Michael Still
On Wed, Jul 22, 2015 at 1:14 AM, gustavo panizzo (gfa) g...@zumbi.com.ar 
wrote:


 On 2015-07-21 22:45, Michael Still wrote:
 We therefore propose the following:

  - when they upgrade to Liberty, all operators will need to add a new
 connection string to their nova.conf configuring this new MySQL
 database; a release note will remind you to do this.
  - we will add a flag which indicates whether a db_sync should also
 imply a sync of the cells database. The default for this flag will be
 true.

 This means that you can still do these syncs separately if you want,
 but we're not forcing you to remember to do it if you just want it to
 always happen at the same time.

 Does this sound acceptable? Or are we overthinking this? We'd
 appreciate your thoughts.

 As an op, I would like to know if nova can work with the databases at
 different schema levels:

 nova api = N
 nova db = N
 nova cell db = M
 nova compute = M  N

 I have no problem doing the db updates in a certain order (example: nova
 db before nova cell db), but I want to be able to keep running if the
 second db upgrade fails and I need more time to fix it.
 A grenade job in the gate testing this would be great.

So, first off, the schema version numbers will be separate for each
database, so if the numbers are ever the same in both, that will be
entirely by accident.

That said, I see what you're saying about schema upgrades.
Unfortunately nova-api needs to talk to both databases, so the
databases need to be upgraded at the same time. However, I think that
our expand and contract support might help you here, in that you
should be able to alter the database schema before upgrading the
binaries. That should give you time to resolve migration issues.
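
For illustration, and assuming the new database grows its own sync
command (the command names below are assumptions, not a confirmed
interface), the ordering would be roughly:

  # expand: apply new, additive schema changes while the old
  # binaries are still running against the old schema
  nova-manage db sync
  nova-manage api_db sync

  # verify both migrations succeeded, resolve any problems, and
  # only then upgrade and restart the nova binaries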

Hope this helps,
Michael

-- 
Rackspace Australia



[Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread Michael Still
Heya,

the nova developer mid-cycle meetup is happening this week. We've been
talking through the operational impacts of cells v2, and thought it
would be a good idea to mention them here and get your thoughts.

First off, what is cells v2? The plan is that _every_ nova deployment
will be running a new version of cells. The default will be a
deployment of a single cell, which means that existing single-cell
deployments will end up with an additional MySQL database required by
cells. However, you won't be required to bring up any additional nova
services at this point [1], as cells v2 lives inside the nova-api
service.

The advantage of this approach is that cells stops being a weird
special case run only by big deployments. We're forced to implement
everything in cells, instead of just the bits that a couple of bigger
players cared enough about, and we're also forced to test it better.
It also means that smaller deployments can grow into big deployments
much more easily. Finally, it also simplifies the nova code, which
will reduce our tech debt.

This is a large block of work, so cells v2 won't be fully complete in
Liberty. Cells v1 deployments will effectively run both cells v2 and
cells v1 for this release, with the cells v2 code thinking that there
is a single very large cell. We'll continue the transition for cells
v1 deployments to pure cells v2 in the M release.

So what's the actual question? We're introducing an additional MySQL
database that every nova deployment will need to possess in Liberty.
We talked through keeping this data in the existing database, but we
weren't comfortable with that plan, for various reasons. This means
that operators will need to do two db_syncs instead of one during
upgrades. We worry that this will be annoying for single cell
deployments.

We therefore propose the following:

 - when they upgrade to Liberty, all operators will need to add a new
connection string to their nova.conf configuring this new MySQL
database; a release note will remind you to do this.
 - we will add a flag which indicates whether a db_sync should also
imply a sync of the cells database. The default for this flag will be
true (see the sketch below).

This means that you can still do these syncs separately if you want,
but we're not forcing you to remember to do it if you just want it to
always happen at the same time.
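
As a rough sketch of the resulting operator workflow (the config
section name, option name, and second sync command here are
illustrative assumptions, not a final interface):

  # nova.conf -- new connection string for the cells database
  [api_database]
  connection = mysql://nova:secretpass@dbhost/nova_api

  # upgrade both schemas; with the proposed flag left at its default
  # of true, a plain db_sync would imply both steps
  nova-manage db sync
  nova-manage api_db sync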

Does this sound acceptable? Or are we overthinking this? We'd
appreciate your thoughts.

Cheers,
Michael

1: there is some talk about having a separate pool of conductors to
handle the cells database, but this won't be implemented in Liberty.

-- 
Rackspace Australia



Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread Robert Collins
On 22 July 2015 at 02:45, Michael Still mi...@stillhq.com wrote:
 Heya,

...
 So what's the actual question? We're introducing an additional MySQL
 database that every nova deployment will need to possess in Liberty.
 We talked through keeping this data in the existing database, but we
 weren't comfortable with that plan, for various reasons. This means
 that operators will need to do two db_syncs instead of one during
 upgrades. We worry that this will be annoying for single cell
 deployments.

 We therefore propose the following:

  - when they upgrade to Liberty, all operators will need to add a new
 connection string to their nova.conf configuring this new MySQL
 database; a release note will remind you to do this.
  - we will add a flag which indicates whether a db_sync should also
 imply a sync of the cells database. The default for this flag will be
 true.

...

Will sites need to do some syncing or something to populate the new DB
[data, not schema], or will the v2 code automatically do this itself?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[Openstack-operators] Network boot openstack instances

2015-07-21 Thread pra devOPS
Hi all,

I wanted to network boot OpenStack instances; somewhere I have read
about iPXE.

Can somebody provide steps for how to go about doing it?

Do we have to give the instance booting details in the DHCP? Is any
special configuration needed in the nova DHCP?

What would my base OpenStack instances be, and how would an instance
fetch the OS version it has to boot?

etc.

Thanks,
Dev


Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread Mike Dorman
Seems reasonable.

For us already running v1, will we be creating another new cell database 
for v2?  Or will our existing v1 cell database become that second database 
under v2?

Somewhat beyond the scope of this thread, but my main concern is the 
acrobatics of going from v1 in Kilo, to the hybrid v1/v2 in Liberty, to 
full v2 in Mitaka.  I think we all realize there will be some amount of 
pain to get to v2, but as long as that transition can be handled in a 
somewhat sane way for existing cells users like us, I’m happy.  

Mike





On 7/21/15, 8:45 AM, Michael Still mi...@stillhq.com wrote:

Heya,

the nova developer mid-cycle meetup is happening this week. We've been
talking through the operational impacts of cells v2, and thought it
would be a good idea to mention them here and get your thoughts.

First off, what is cells v2? The plan is that _every_ nova deployment
will be running a new version of cells. The default will be a
deployment of a single cell, which means that existing single-cell
deployments will end up with an additional MySQL database required by
cells. However, you won't be required to bring up any additional nova
services at this point [1], as cells v2 lives inside the nova-api
service.

The advantage of this approach is that cells stops being a weird
special case run only by big deployments. We're forced to implement
everything in cells, instead of just the bits that a couple of bigger
players cared enough about, and we're also forced to test it better.
It also means that smaller deployments can grow into big deployments
much more easily. Finally, it also simplifies the nova code, which
will reduce our tech debt.

This is a large block of work, so cells v2 won't be fully complete in
Liberty. Cells v1 deployments will effectively run both cells v2 and
cells v1 for this release, with the cells v2 code thinking that there
is a single very large cell. We'll continue the transition for cells
v1 deployments to pure cells v2 in the M release.

So what's the actual question? We're introducing an additional MySQL
database that every nova deployment will need to possess in Liberty.
We talked through keeping this data in the existing database, but we
weren't comfortable with that plan, for various reasons. This means
that operators will need to do two db_syncs instead of one during
upgrades. We worry that this will be annoying for single cell
deployments.

We therefore propose the following:

 - when they upgrade to Liberty, all operators will need to add a new
connection string to their nova.conf configuring this new MySQL
database; a release note will remind you to do this.
 - we will add a flag which indicates whether a db_sync should also
imply a sync of the cells database. The default for this flag will be
true.

This means that you can still do these syncs separately if you want,
but we're not forcing you to remember to do it if you just want it to
always happen at the same time.

Does this sound acceptable? Or are we overthinking this? We'd
appreciate your thoughts.

Cheers,
Michael

1: there is some talk about having a separate pool of conductors to
handle the cells database, but this won't be implemented in Liberty.

-- 
Rackspace Australia



[Openstack-operators] [puppet][keystone] Creating Keystone users with a password in the puppet module (Kilo) throws an error on the second puppet run

2015-07-21 Thread Van Leeuwen, Robert
Hi,

I am using the Kilo puppet recipes to set up Kilo on Ubuntu 14.04, to test the 
latest puppet recipes with Vagrant.
I am creating a keystone admin user from within the puppet recipe.
Creating the keystone user works fine, but the second puppet run gives an error 
whenever you set a password for the user you want to create:
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not 
evaluate: Execution of '/usr/bin/openstack token issue --format value' returned 
1: ERROR: openstack The resource could not be found.

* When you do not pass the password in the keystone_user native type, it does 
not throw an error.
* The first run will create the user successfully and set the password.
* After sourcing the credentials file and running “/usr/bin/openstack token 
issue --format value” manually, it also does not give an error.
(I could not immediately find where puppet decides how this command is run and 
with which credentials.)
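
For reference, this is roughly what that manual check looks like,
assuming a standard Kilo-era v2.0 credentials file (all values below
are placeholders):

  # source the admin credentials, then request a token by hand
  export OS_USERNAME=admin
  export OS_PASSWORD=secret
  export OS_TENANT_NAME=admin
  export OS_AUTH_URL=http://keystone.example.com:5000/v2.0
  /usr/bin/openstack token issue --format value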

Anyone hitting the same issue or knows what could be going wrong?

Example puppet keystone user config which breaks after the second run:
  keystone_user { 'admin':
    # removing the password line below fixes the issue
    password => $::openstack::config::keystone_admin_password,
    email    => 'admin@openstack',
    ensure   => present,
    enabled  => true,
  }

Thx,
Robert van Leeuwen


Re: [Openstack-operators] OSAD for RHEL

2015-07-21 Thread Jesse Pretorius
On 9 July 2015 at 05:54, John Dewey j...@dewey.ws wrote:

 IMO - registering the systems with subscription manager or pointing to
 in-house yum repos should be included as part of system bootstrapping, and
 not a part of OSAD.  OSAD should simply install the specific packages for
 the alternate distro.


Agreed, trying to cater for everything that everyone wants in their
bootstrapping is a rabbit hole best not ventured into, as it'll bloat the
project considerably.


 Might also be a good time to abstract the system packaging module into a
 higher-level one which handles `yum` or `apt` behind the scenes. We can
 then manage the list of packages per distro [1]. Throwing this out as an
 idea vs copy-pasting every apt section as a yum section.


Ansible appears to be building this abstraction already for v2 [1], but
there is also a means to do this in an alternative way [2].

[1]
https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/package.py
[2] http://serverfault.com/a/697736
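
As a minimal sketch of the kind of task the v2 `package` module in [1]
would allow (the variable name below is a placeholder for a per-distro
package list defined in vars):

  # one task that handles yum or apt behind the scenes
  - name: Install distro-appropriate packages
    package:
      name: "{{ item }}"
      state: present
    with_items: "{{ nova_distro_packages }}"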