[Openstack-operators] Duplicates and confusion in nova policy.json files

2016-06-15 Thread Sam Morrison
Now that policy files in nova Liberty apparently work, I’m going through the 
stock example one and see that there are duplicate entries in the policy.json, 
like:

compute:create:forced_host
os_compute_api:servers:create:forced_host
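
(For illustration, the two show up as separate rules in the stock policy.json,
roughly like this -- the rule values here are only what I'd expect the defaults
to be, so check your own file:)

    "compute:create:forced_host": "is_admin:True",
    "os_compute_api:servers:create:forced_host": "rule:admin_api",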

Which one do I use to change who can do forced_host? Both, or a specific one?

Anyone have any ideas?

Cheers,
Sam



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [app-catalog] App Catalog IRC meeting Thursday June 16th

2016-06-15 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for June 16th at
17:00 UTC in #openstack-meeting-3.

The agenda can be found here; please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

In addition to status updates, we will continue the conversation
around the Application Development improvement effort being led by
Igor Marnat.

Hope to see you there tomorrow!

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] NOVA build timeout: Time to fail a vm from build state

2016-06-15 Thread David Medberry
Ack. I'm picking my worst case with a 2T volume create and then doubling.

On Wed, Jun 15, 2016 at 11:25 AM, Matt Riedemann  wrote:

> On 6/15/2016 12:09 PM, David Medberry wrote:
>
>> So, there is a nova.conf setting:
>>
>> instance_build_timeout (default to 0, never timeout)
>>
>> Does anyone have a "good" value they use for this? In my mind it falls
>> very much into the specific-to-your-cloud-implementation bucket but just
>> wondered what folks were using for this setting (if any).
>>
>> 10 minutes would be way longer than I'd want to wait for a build to fail
>> but that's probably what we will set this to.
>>
>>
> Hmm, yeah definitely deployment specific. Also, if you're doing boot from
> volume where nova is creating the volume, that's going to add additional
> time depending on the size of the volume, whether the image is cached, etc.
> And there are separate timeouts in the compute manager for waiting for the
> volume to be available for attaching to the server.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Keystone to drop support for driver versioning…

2016-06-15 Thread De Rose, Ronald
Operators,

We did not receive any feedback that dropping Keystone driver versioning would 
negatively impact anyone.  Thus, the Keystone team met yesterday and decided to 
drop support for driver versioning.  However, we will continue to support 
current legacy drivers.

Moving forward, this means that if a Keystone driver interface changes and you 
have a custom implementation, you will need to update your custom drivers to 
match the new interface before upgrading OpenStack.  Let us know if you have 
any questions.
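
(For context, a custom driver is the backend class wired in through keystone.conf;
the class path below is purely hypothetical:)

    [identity]
    # the custom backend that would need updating whenever its interface changes
    driver = my_company.keystone_backends.CustomIdentityDriver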

Regards,
Ron

Ron De Rose
Intel (Keystone Developer)

More information on Keystone drivers can be found here:
http://docs.openstack.org/developer/keystone/developing_drivers.html

From: De Rose, Ronald
Sent: Monday, June 6, 2016 1:38 PM
To: openstack-operators@lists.openstack.org
Subject: Keystone to drop support for driver versioning…

Operators,

Currently in Keystone, we support driver versioning, where we support a driver 
interface for at least one version back.  However, this has become burdensome 
in terms of maintenance and some of us are questioning the value of supporting 
this.  Thus, we have a proposal in Newton to drop support for driver versioning.

Under the new proposal, if a driver interface changes, we would clearly 
document it in the release notes, and in order to upgrade you would need to 
update your custom drivers to match the new interface.

That being said, before deciding on this, we are looking for feedback on 
whether or not this would significantly impact you.

Regards,
Ron

Ron De Rose
Intel (Keystone Developer)

More information on Keystone drivers can be found here:
http://docs.openstack.org/developer/keystone/developing_drivers.html


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] NOVA build timeout: Time to fail a vm from build state

2016-06-15 Thread Matt Riedemann

On 6/15/2016 12:09 PM, David Medberry wrote:

So, there is a nova.conf setting:

instance_build_timeout (default to 0, never timeout)

Does anyone have a "good" value they use for this? In my mind it falls
very much into the specific-to-your-cloud-implementation bucket but just
wondered what folks were using for this setting (if any).

10 minutes would be way longer than I'd want to wait for a build to fail
but that's probably what we will set this to.





Hmm, yeah definitely deployment specific. Also, if you're doing boot 
from volume where nova is creating the volume, that's going to add 
additional time depending on the size of the volume, whether the image 
is cached, etc. And there are separate timeouts in the compute manager 
for waiting for the volume to be available for attaching to the server.
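
(For the boot-from-volume case, the nova.conf options I believe are being referred
to here are along these lines; the values shown are just the defaults as I remember
them, so double-check:)

    [DEFAULT]
    # how long nova-compute waits for a nova-created volume to become
    # available before failing the build (roughly retries x interval seconds)
    block_device_allocate_retries = 60
    block_device_allocate_retries_interval = 3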


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] NOVA build timeout: Time to fail a vm from build state

2016-06-15 Thread David Medberry
So, there is a nova.conf setting:

instance_build_timeout (default to 0, never timeout)
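
(It lives in the [DEFAULT] section and is expressed in seconds; the 600 below is
just an example value, not a recommendation:)

    [DEFAULT]
    # put instances stuck in BUILD for longer than 10 minutes into ERROR
    instance_build_timeout = 600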

Does anyone have a "good" value they use for this? In my mind it falls very
much into the specific-to-your-cloud-implementation bucket but just
wondered what folks were using for this setting (if any).

10 minutes would be way longer than I'd want to wait for a build to fail
but that's probably what we will set this to.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Dan Smith
> +1 to everything Daniel said. Nova really expects release-to-release
> upgrades. We do online data migrations between releases. Maybe one
> reason you're getting this to work is we have a nova-manage command to
> force the migration of data between releases rather than doing the
> online data migration as resources are accessed. But there are some DB
> migrations where we do a full stop until you've migrated the data.

Yeah, this ^

We try to put blocker migrations into the stream at critical
synchronization points to make sure that anything that *has* to be
online migrated will have been completed before we roll forward (usually
because we're about to drop something or introduce a constraint).
However, I'm sure there are some subtleties we don't catch with that.

To echo and expand on what Matt said, if you're going to do an
accelerated upgrade, I would _at least_ recommend deploying the
intermediate code and running all the online migrations to completion
(after applying the intermediate schema updates) before rolling to the
next. This means running this after "db sync" for releases after it was
added:

https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L823
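
(In practice that means something roughly like the following at each intermediate
release, assuming the linked command is the online data migrations one and is
present in that release:)

    # apply the schema migrations for the intermediate release
    nova-manage db sync
    # then drive all online data migrations to completion before the next hop
    nova-manage db online_data_migrations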

Just deploying the target code and running online migrations isn't
really enough, because we often remove the migration code once we know
it (should have been) run to completion. Since we can't know everyone's
upgrade cadence, and since our official support is "N-1 to N", that's
the price you pay for skipping a release.

--Dan

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Matt Riedemann

On 6/15/2016 10:30 AM, Daniel P. Berrange wrote:

On Wed, Jun 15, 2016 at 03:19:28PM +, Jesse Keating wrote:

> I'll offer a counterpoint.

We're not doing Juno to Mitaka, however we are doing Kilo to Mitaka, skipping
over Liberty.

> The database migrations to get from Kilo to Mitaka have run smoothly for us.


While it is great that it /appeared/ to work correctly for you, that is in
no way guaranteed. There also might be data that has silently been incorrectly
migrated due to missing the intermediate release that could cause problems
at some indeterminate point down the road. While we do test the N -> N+1
upgrade path, there is no CI testing of the N -> N + 2 upgrade path. IOW
while it may have worked for you between these 2 particular releases, there
is again no guarantee it'll work for any future pair of N, N+2 releases.
If you want to accept the risks, that's fine, but I'd certainly not suggest
it is a reasonable thing to do for deployments in general. Also what happened
to work for you may just as easily not work for other people, depending on
characteristics of their deployment configuration & data set.

Regards,
Daniel



+1 to everything Daniel said. Nova really expects release-to-release 
upgrades. We do online data migrations between releases. Maybe one 
reason you're getting this to work is we have a nova-manage command to 
force the migration of data between releases rather than doing the 
online data migration as resources are accessed. But there are some DB 
migrations where we do a full stop until you've migrated the data.


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Daniel P. Berrange
On Wed, Jun 15, 2016 at 03:19:28PM +, Jesse Keating wrote:
> I'll offer a counterpoint.
>  
> We're not doing Juno to Mitaka, however we are doing Kilo to Mitaka, skipping
> over Liberty.
>  
> The database migrations to get from Kilo to Mitaka have run smoothly for us.

While it is great that it /appeared/ to work correctly for you, that is in
no way guaranteed. There also might be data that has silently been incorrectly
migrated due to missing the intermediate release that could cause problems
at some indeterminate point down the road. While we do test the N -> N+1
upgrade path, there is no CI testing of the N -> N + 2 upgrade path. IOW
while it may have worked for you between these 2 particular releases, there
is again no guarantee it'll work for any future pair of N, N+2 releases.
If you want to accept the risks, that's fine, but I'd certainly not suggest
it is a reasonable thing to do for deployments in general. Also what happened
to work for you may just as easily not work for other people, depending on
characteristics of their deployment configuration & data set.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Jesse Keating
I'll offer a counterpoint.
 
We're not doing Juno to Mitaka, however we are doing Kilo to Mitaka, skipping over Liberty.
 
The database migrations to get from Kilo to Mitaka have run smoothly for us.
 
https://github.com/blueboxgroup/ursula/blob/master/upgrade.yml
-jlk
 
 
- Original message -
From: Melvin Hillsman
To: Saverio Proto
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka
Date: Wed, Jun 15, 2016 8:11 AM

+1 on Saverio's response. You will want to upgrade to Mitaka by NOT jumping
releases; J - K - L - M, not J - M.

On Wed, Jun 15, 2016 at 4:13 AM, Saverio Proto  wrote:

> Hello,
>
> first of all I suggest you read this article:
> http://superuser.openstack.org/articles/openstack-upgrading-tutorial-11-pitfalls-and-solutions
>
> > What is the best way to perform an upgrade from Juno to Mitaka?
>
> I would go for the in-place upgrade, but I have always upgraded without
> jumping versions; skipping a release is, AFAIK, not supported.
>
> The main problem I see is that database migrations are supported when
> you upgrade to the next release, but if you jump from Juno to Mitaka I
> have no idea how the database upgrade could be done.
>
> Saverio
 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please make your voice heard!

2016-06-15 Thread Chris Morgan
Hi Saverio,
   I just checked my calendar, and it seems that if a strong preference for
one of the venues emerges by next Tuesday, we should be able to lock that
venue in: a decision then would still leave plenty of time (more than 8
weeks) for obtaining a visa.

I hope a strong consensus emerges (for either venue) and if so I will
strongly suggest we lock it in either during next Tuesday's meeting or
shortly afterwards.

Thanks for pointing out this issue.

Chris

On Wed, Jun 15, 2016 at 4:56 AM, Saverio Proto  wrote:

> Hello all,
>
> I will need a visa to come to the US for the Mid-Cycle Ops Meetup.
>
> The process to obtain a Visa can take up to 8 weeks, and I cannot
> apply until dates and venue are decided.
>
> please set a date at least 8 weeks ahead, or the few people who can't
> apply for a visa in time will not be able to join.
>
> thank you
>
> Saverio
>
>
> 2016-06-14 18:16 GMT+02:00 Edgar Magana :
> > Chris,
> >
> >
> >
> > Awesome locations! Looking forward to have the final one and the date to
> do
> > the booking.
> >
> >
> >
> > Edgar
> >
> >
> >
> > From: Chris Morgan 
> > Date: Tuesday, June 14, 2016 at 8:09 AM
> > To: OpenStack Operators 
> > Subject: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please
> > make your voice heard!
> >
> >
> >
> > [DISCLAIMER AT BOTTOM OF EMAIL]
> >
> >
> >
> > Hello Everyone,
> >
> >   There are two possible venues for the next OpenStack Operators
> Mid-Cycle
> > meetup. They both seem suitable and the details are listed here :
> >
> >
> >
> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss
> >
> >
> >
> > To guide the decision making process and since time is drawing short for
> > planning an August event, the Ops Meetups Team meeting today on IRC
> decided
> > to try putting this issue to a poll. Please record *likely* attendance
> > preferences for Seattle, NYC (or neither) here :
> >
> >
> >
> > http://doodle.com/poll/e4heruzps4g94syf
> >
> >
> >
> > The poll is not binding :)
> >
> >
> >
> > The Ops Meetups Team is hoping to see a good number of responses within
> the
> > next SEVEN DAYS.
> >
> >
> >
> > Thanks for your attention.
> >
> >
> >
> > Chris Morgan
> >
> >
> >
> > Disclaimer: I work for Bloomberg LP, we have offered to be the host in
> NYC.
> > However, both proposals seem great to me
> >
> >
> >
> > --
> >
> > Chris Morgan 
> >
> >
> >
>



-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Issue running Controllers as virtual machines on Vmware hosts

2016-06-15 Thread Pedro Sousa
Hi all,

I'm trying to virtualize some controllers on VMware hosts; however, I have
an issue with networking.

When TripleO enables promiscuous mode on the interfaces inside the VM
operating system, I lose connectivity to the network.  I have already
permitted promiscuous mode on the VMware vSwitch.
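
(One quick check from inside the guest, with the interface name here being only
an example:)

    # a non-zero promiscuity counter confirms the NIC really is in promiscuous mode
    ip -d link show eth0 | grep promiscuity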

Has anyone had an issue like this before?

I use

CentOS 7.2 / Mitaka
Kernel: 3.10.0-327.18.2.el7.x86_64

Regards,
Pedro Sousa
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Saverio Proto
Hello,

first of all I suggest you read this article:
http://superuser.openstack.org/articles/openstack-upgrading-tutorial-11-pitfalls-and-solutions

> What is the best way to perform an upgrade from Juno to Mitaka?

I would go for the in-place upgrade, but I have always upgraded without
jumping versions; skipping a release is, AFAIK, not supported.

The main problem I see is that database migrations are supported when
you upgrade to the next release, but if you jump from Juno to Mitaka I
have no idea how the database upgrade could be done.
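
(If you do go release by release, the database step at each hop is roughly the
usual per-service sync commands, run against that intermediate release's code,
e.g.:)

    keystone-manage db_sync
    glance-manage db_sync
    nova-manage db sync
    cinder-manage db sync
    neutron-db-manage upgrade heads   # exact neutron form varies by release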

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please make your voice heard!

2016-06-15 Thread Saverio Proto
Hello all,

I will need a visa to come to the US for the Mid-Cycle Ops Meetup.

The process to obtain a Visa can take up to 8 weeks, and I cannot
apply until dates and venue are decided.

please set a date at least 8 weeks ahead, or the few people who can't
apply for a visa in time will not be able to join.

thank you

Saverio


2016-06-14 18:16 GMT+02:00 Edgar Magana :
> Chris,
>
>
>
> Awesome locations! Looking forward to have the final one and the date to do
> the booking.
>
>
>
> Edgar
>
>
>
> From: Chris Morgan 
> Date: Tuesday, June 14, 2016 at 8:09 AM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please
> make your voice heard!
>
>
>
> [DISCLAIMER AT BOTTOM OF EMAIL]
>
>
>
> Hello Everyone,
>
>   There are two possible venues for the next OpenStack Operators Mid-Cycle
> meetup. They both seem suitable and the details are listed here :
>
>
>
> https://etherpad.openstack.org/p/ops-meetup-venue-discuss
>
>
>
> To guide the decision making process and since time is drawing short for
> planning an August event, the Ops Meetups Team meeting today on IRC decided
> to try putting this issue to a poll. Please record *likely* attendance
> preferences for Seattle, NYC (or neither) here :
>
>
>
> http://doodle.com/poll/e4heruzps4g94syf
>
>
>
> The poll is not binding :)
>
>
>
> The Ops Meetups Team is hoping to see a good number of responses within the
> next SEVEN DAYS.
>
>
>
> Thanks for your attention.
>
>
>
> Chris Morgan
>
>
>
> Disclaimer: I work for Bloomberg LP, we have offered to be the host in NYC.
> However, both proposals seem great to me
>
>
>
> --
>
> Chris Morgan 
>
>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Michael Stang
Hi,
 
my name is Michael, and I am working at the Baden-Wuerttemberg Cooperative State
University Mannheim.
 
We have an installation of OpenStack with roughly 14 nodes of 32 cores each (1
controller (glance, keystone, etc.), 1 neutron, 2 object stores, 1 block store, 9
compute nodes). The block store, the object stores and the compute nodes use a
storage node over iSCSI with multipath to store the data, virtual machines, etc.
The glance image service uses the object store as storage for the images.
 
At the moment we are running the Juno release and we want to upgrade the
installation to Mitaka without losing any data (users, images, volumes, virtual
machines, etc.). I have already tried to find documentation on how such an upgrade
should be performed, but I didn't find anything that describes this well.
 
So the following questions have arisen:
 
What is the best way to perform an upgrade from Juno to Mitaka?
Is the best way to upgrade Juno -> Kilo -> Liberty -> Mitaka, or is it
possible to migrate directly to Mitaka?
Is it better to perform an in-place upgrade, or is it better to set up a
new environment?
What is the best way to save the existing data so it can be imported into a
new environment?
Is there any well-described how-to or best-practice guide for such an upgrade?
 
Any ideas or help would be welcome :-)
 
Kind regards,
Michael



Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

zem...@dhbw-mannheim.de
http://www.dhbw-mannheim.de

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators